Council-GAN - Implementation of our paper Breaking the Cycle - Colleagues are all you need (CVPR 2020)

Overview

Council-GAN

Implementation of our paper Breaking the Cycle - Colleagues are all you need (CVPR 2020)

Paper

Ori Nizan, Ayellet Tal, Breaking the Cycle - Colleagues are all you need [Project]

gan_council_teaser

gan_council_overview

male2female_gif

glasses_gif

anime_gif

Temporary Telegram Bot

Send an image to this Telegram bot and it will send back its female translation produced by our implementation.

Usage

Install requirements

conda env create -f conda_requirements.yml
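
After the environment is created, activate it before running the commands below (the environment name is defined in conda_requirements.yml; council-gan here is an assumption):

conda activate council-gan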

Downloading the dataset

Download the selfie-to-anime dataset:

bash ./scripts/download.sh U_GAT_IT_selfie2anime

Download the CelebA glasses-removal dataset:

bash ./scripts/download.sh celeba_glasses_removal

Download the CelebA male-to-female dataset:

bash ./scripts/download.sh celeba_male2female
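
The download scripts save the data under ./datasets (inferred from the dataset paths used by the test commands below); you can confirm a download finished with:

ls ./datasets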

Use your own dataset:

├──datasets
    └──DATASET_NAME
        ├──testA
            ├──im1.png
            ├──im2.png
            └── ...
        ├──testB
            ├──im3.png
            ├──im4.png
            └── ...
        ├──trainA
            ├──im5.png
            ├──im6.png
            └── ...
        └──trainB
            ├──im7.png
            ├──im8.png
            └── ...

Then change the data_root attribute to ./datasets/DATASET_NAME in the config yaml file.
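
For example, for a hypothetical dataset named my_dataset, the layout above can be created with:

mkdir -p ./datasets/my_dataset/trainA ./datasets/my_dataset/trainB ./datasets/my_dataset/testA ./datasets/my_dataset/testB

and the matching config entry becomes data_root: ./datasets/my_dataset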

Training:

Selfie to anime:

python train.py --config configs/anime2face_council_folder.yaml --output_path ./outputs/council_anime2face_256_256 --resume

Glasses removal:

python train.py --config configs/galsses_council_folder.yaml --output_path ./outputs/council_glasses_128_128 --resume

Male to female:

python train.py --config configs/male2female_council_folder.yaml --output_path ./outputs/male2female_256_256 --resume
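
Training saves its checkpoints under --output_path. If the run also writes TensorBoard event files there (an assumption based on the MUNIT codebase this implementation builds on), progress can be monitored with:

tensorboard --logdir ./outputs/council_anime2face_256_256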

Testing:

To convert all the images in --input_folder using all the members of the council:

python test_on_folder.py --config configs/anime2face_council_folder.yaml --output_folder ./outputs/council_anime2face_256_256 --checkpoint ./outputs/council_anime2face_256_256/anime2face_council_folder/checkpoints/01000000 --input_folder ./datasets/selfie2anime/testB --a2b 0

Or using a specified member:

python test_on_folder.py --config configs/anime2face_council_folder.yaml --output_folder ./outputs/council_anime2face_256_256 --checkpoint ./outputs/council_anime2face_256_256/anime2face_council_folder/checkpoints/b2a_gen_3_01000000.pt --input_folder ./datasets/selfie2anime/testB --a2b 0
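
The --checkpoint argument in the second command points at a single generator file; judging by the file names in this README, these follow the pattern a2b_gen_<member>_<iteration>.pt / b2a_gen_<member>_<iteration>.pt. To see which members and iterations are available, list the checkpoints directory:

ls ./outputs/council_anime2face_256_256/anime2face_council_folder/checkpoints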

Download Pretrained Models

Download the pretrained male-to-female model:

bash ./scripts/download.sh pretrain_male_to_female
Then, to convert the images in --input_folder, run:
python test_on_folder.py --config pretrain/m2f/256/male2female_council_folder.yaml --output_folder ./outputs/male2female_256_256 --checkpoint pretrain/m2f/256/01000000 --input_folder ./datasets/celeba_male2female/testA --a2b 1

Download the pretrained glasses-removal model:

bash ./scripts/download.sh pretrain_glasses_removal
Then, to convert the images in --input_folder, run:
python test_on_folder.py --config pretrain/glasses_removal/128/galsses_council_folder.yaml --output_folder ./outputs/council_glasses_128_128 --checkpoint pretrain/glasses_removal/128/01000000 --input_folder ./datasets/glasses/testA --a2b 1

Download the pretrained selfie-to-anime model:

bash ./scripts/download.sh pretrain_selfie_to_anime
Then, to convert the images in --input_folder, run:
python test_on_folder.py --config pretrain/anime/256/anime2face_council_folder.yaml --output_folder ./outputs/council_anime2face_256_256 --checkpoint pretrain/anime/256/01000000 --input_folder ./datasets/selfie2anime/testB --a2b 0

Test GUI:


Test the GUI on a pretrained model:

Male to female:
python test_gui.py --config pretrain/m2f/128/male2female_council_folder.yaml --checkpoint pretrain/m2f/128/a2b_gen_0_01000000.pt --a2b 1

Glasses removal:
python test_gui.py --config pretrain/glasses_removal/128/galsses_council_folder.yaml --checkpoint pretrain/glasses_removal/128/a2b_gen_3_01000000.pt --a2b 1

Selfie to anime:
python test_gui.py --config pretrain/anime/256/anime2face_council_folder.yaml --checkpoint pretrain/anime/256/b2a_gen_3_01000000.pt --a2b 0

Open In Colab

Citation

@inproceedings{nizan2020council,
  title={Breaking the Cycle - Colleagues are all you need},
  author={Ori Nizan and Ayellet Tal},
  booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2020}
}

Acknowledgement

In this work we based our code on the MUNIT implementation. Please cite the original MUNIT paper if you use their part of the code.

Comments
  • Comparing Different models

    Hi, could you please tell me how you compared the different models? Did you use the same learning rate, number of epochs, number of decay epochs, image size, and optimizer for all models? Also, did you collect test results from the final saved generator, or did you take the best results after testing all the generators saved at different epochs?

    opened by mohammadshahabuddin 2
  • pretrained model loading error

    I downloaded the pretrained selfie-to-anime model. When running python test_on_folder.py, the call trainer.gen_b2a_s[i].load_state_dict(state_dict['b2a']) raised the following error:

    {RuntimeError}Error(s) in loading state_dict for AdaINGen: Missing key(s) in state_dict: "enc_content.model.2.conv.weight", "enc_content.model.2.conv.bias", "enc_content.model.3.model.0.model.0.conv.weight", "enc_content.model.3.model.0.model.0.conv.bias", "enc_content.model.3.model.0.model.1.conv.weight", "enc_content.model.3.model.0.model.1.conv.bias", "enc_content.model.3.model.1.model.0.conv.weight", "enc_content.model.3.model.1.model.0.conv.bias", "enc_content.model.3.model.1.model.1.conv.weight", "enc_content.model.3.model.1.model.1.conv.bias", "enc_content.model.3.model.2.model.0.conv.weight", "enc_content.model.3.model.2.model.0.conv.bias", "enc_content.model.3.model.2.model.1.conv.weight", "enc_content.model.3.model.2.model.1.conv.bias", "enc_content.model.3.model.3.model.0.conv.weight", "enc_content.model.3.model.3.model.0.conv.bias", "enc_content.model.3.model.3.model.1.conv.weight", "enc_content.model.3.model.3.model.1.conv.bias", "enc_content.model.3.model.4.model.0.conv.weight", "enc_...

    opened by arufus 2
  • modified Dockerfile line 34 curl -> wget line 73 add '\'

    Downloading Miniconda with the curl command was not working, so I changed curl to wget and added wget at line 15. At line 73 a trailing '\' was missing, which caused an error, so I added it.

    opened by kmkwon94 1
  • Copy yaml file error while training

    When running train.py, it tries to copy the config yaml file to the output directory with a timestamp appended, using shutil. The copy fails because of the ':' characters in the datetime string.

    This happens at line 98 in train.py. I suggest changing str(datetime.datetime.now())[:19] to str(datetime.datetime.now())[:19].replace(':', '').

    opened by SomarajuHarsha 0
  • 256x256 pretrained model for glasses removal

    Hi, I downloaded the pretrained models and found both 128x128 and 256x256 versions for selfie2anime and male2female, but only 128x128 for glasses removal. Could you release the 256x256 model for glasses removal? Thanks.

    opened by xunings 0