SPatchGAN: Official TensorFlow Implementation

Paper

  • "SPatchGAN: A Statistical Feature Based Discriminator for Unsupervised Image-to-Image Translation" (ICCV 2021)



Environment

  • CUDA 10.0
  • Python 3.6
  • pip install -r requirements.txt

Dataset

  • Dataset structure (dataset_struct='plain')
- dataset
    - <dataset_name>
        - trainA
            - 1.jpg
            - 2.jpg
            - ...
        - trainB
            - 3.jpg
            - 4.jpg
            - ...
        - testA
            - 5.jpg
            - 6.jpg
            - ...
        - testB
            - 7.jpg
            - 8.jpg
            - ...
  • Supported extensions: jpg, jpeg, png
  • An additional level of subdirectories is also supported by setting dataset_struct to 'tree', e.g.,
- trainA
    - subdir1
        - 1.jpg
        - 2.jpg
        - ...
    - subdir2
        - ...
  • Selfie-to-anime:

    • The dataset can be downloaded from U-GAT-IT.
  • Male-to-female and glasses removal:

    • The datasets can be downloaded from Council-GAN.
    • The images must be center cropped from 218x178 to 178x178 before training or testing (see the cropping sketch after this list).
    • For glasses removal, only the male images are used in the experiments in our paper. Note that the dataset from Council-GAN has already been split into two subdirectories, "1" for male and "2" for female.
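
A minimal preprocessing sketch for the 218x178 to 178x178 center crop, using PIL; the source and destination directories below are placeholders, not paths from this repo, and the code assumes the images are 178 pixels wide and 218 pixels tall.

import os
from PIL import Image

src_dir = 'celeba_raw/trainA'            # placeholder: directory with the original 218x178 images
dst_dir = 'dataset/male2female/trainA'   # placeholder: directory expected by the trainer
os.makedirs(dst_dir, exist_ok=True)

for name in os.listdir(src_dir):
    if not name.lower().endswith(('.jpg', '.jpeg', '.png')):
        continue
    img = Image.open(os.path.join(src_dir, name))
    w, h = img.size                      # assumes height (218) > width (178)
    top = (h - w) // 2                   # drop 20 pixels from the top and 20 from the bottom
    img.crop((0, top, w, top + w)).save(os.path.join(dst_dir, name))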

Training

  • Set the suffix to anything descriptive, e.g., the date.
  • Selfie-to-Anime
python main.py --dataset selfie2anime --augment_type resize_crop --n_scales_dis 3 --suffix scale3_cyc20_20210831 --phase train
  • Male-to-Female
python main.py --dataset male2female --cyc_weight 10 --suffix cyc10_20210831 --phase train
  • Glasses Removal
python main.py --dataset glasses-male --cyc_weight 30 --suffix cyc30_20210831 --phase train
  • Find the output in ./output/SPatchGAN_<dataset_name>_<suffix>
  • The same command can be used to continue training based on the latest checkpoint.
  • For a new task, we recommend using the default settings as a starting point and adjusting the hyperparameters according to the tips.
  • Check configs.py for all the hyperparameters.

Testing with the latest checkpoint

  • Replace --phase train with --phase test
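  • For example, the selfie-to-anime command from the training section becomes:
python main.py --dataset selfie2anime --augment_type resize_crop --n_scales_dis 3 --suffix scale3_cyc20_20210831 --phase test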

Save a frozen model (.pb)

  • Replace --phase train with --phase freeze_graph
  • Find the saved frozen model in ./output/SPatchGAN_<dataset_name>_<suffix>/checkpoint/pb

Testing with the frozen model

cd frozen_model
python test_frozen_model.py --image <input_image_or_dir> --output_dir <output_dir> --model <frozen_model_path>
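
For reference, a minimal standalone sketch of frozen-graph inference in TensorFlow 1.x style. The model path, tensor names, input size, and the [-1, 1] normalization below are assumptions for illustration, not values taken from test_frozen_model.py; check that script for the actual names before relying on this.

import numpy as np
import tensorflow as tf
from PIL import Image

# Load the frozen graph (path is a placeholder).
graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile('path/to/frozen_model.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())
graph = tf.Graph()
with graph.as_default():
    tf.compat.v1.import_graph_def(graph_def, name='')

# Hypothetical tensor names; look up the real input/output names in the graph.
x = graph.get_tensor_by_name('test_domain_A:0')
y = graph.get_tensor_by_name('test_fake_B:0')

img = Image.open('input.jpg').convert('RGB').resize((256, 256))     # assumed 256x256 input
inp = np.asarray(img, dtype=np.float32)[None] / 127.5 - 1.0         # assumed [-1, 1] normalization

with tf.compat.v1.Session(graph=graph) as sess:
    out = sess.run(y, feed_dict={x: inp})

Image.fromarray(np.clip((out[0] + 1.0) * 127.5, 0, 255).astype(np.uint8)).save('output.jpg')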

Pretrained Models

  • Download the pretrained models from Google Drive, and put them in the output directory.
  • You can test the checkpoints (in ./checkpoint) or the frozen models (in ./checkpoint/pb). Either way produces the same results.
  • The results generated by the pretrained models are slightly different from those in the paper, since we have rerun the training after code refactoring.
  • We set n_scales_dis to 3 for the pretrained selfie2anime model to further improve the performance. It was 4 in the paper. See more details in the tips.
  • We also provide the generated results of the last 100 test images (in ./gen, sorted by name, no cherry-picking) for calibration purposes, as in the comparison sketch below.
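
If you want to compare your own outputs against these references, a small sketch along the following lines reports the mean absolute pixel difference per image; the directory names are placeholders and the filenames are assumed to match between the two directories.

import os
import numpy as np
from PIL import Image

ref_dir = 'path/to/pretrained_model_dir/gen'   # placeholder: provided reference results
out_dir = 'path/to/your_results'               # placeholder: your generated images

for name in sorted(os.listdir(ref_dir)):
    ref = np.asarray(Image.open(os.path.join(ref_dir, name)), dtype=np.float32)
    out = np.asarray(Image.open(os.path.join(out_dir, name)), dtype=np.float32)
    print(f'{name}: mean abs diff = {np.abs(ref - out).mean():.3f}')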

Other Implementations

Citation

@inproceedings{SPatchGAN2021,
  title={SPatchGAN: A Statistical Feature Based Discriminator for Unsupervised Image-to-Image Translation},
  author={Xuning Shao and Weidong Zhang},
  booktitle={IEEE International Conference on Computer Vision (ICCV)},
  year={2021}
}

Acknowledgement

  • Our code is partially based on U-GAT-IT.