# GANSketching in Jittor

Implementation of *Sketch Your Own GAN* in Jittor (计图).

Original repo: Here.
## Notice
We have tried to match the official implementation as closely as possible, but we may still have missed some details. If you find any bugs when using this implementation, feel free to submit an issue.
## Results
Our implementation can customize a pre-trained GAN to match input sketches, as in the original paper.
### Training Process

The training process is smooth.
### Speed-up

Compared with the PyTorch version, our implementation achieves up to a 1.67x speed-up for StyleGAN2 inference, up to a 1.62x speed-up for pix2pix inference, and a 1.06x speed-up for model training.
## Getting Started
### Clone our repo

```bash
git clone git@github.com:thkkk/GANSketching_Jittor.git
cd GANSketching_Jittor
```
### Install packages

- Install Jittor: please refer to https://cg.cs.tsinghua.edu.cn/jittor/download/ (a quick sanity check follows this list).
- Install other requirements: `pip install -r requirements.txt`
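
After installing, the snippet below is an optional sanity check that Jittor imports and can run a tiny computation. It is not part of the repo's scripts; it only uses standard Jittor APIs (`jt.__version__`, `jt.rand`).

```python
# sanity_check.py -- optional check that Jittor is usable (not part of the repo's scripts)
import jittor as jt

print(jt.__version__)
# jt.flags.use_cuda = 1   # uncomment to exercise the GPU path if CUDA is available

x = jt.rand(2, 3)         # small random Var
print((x + 1).sum())      # should print a scalar Var without raising errors
```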
### Download model weights

- Run `bash weights/download_weights.sh` to download the authors' pretrained weights, or download our pretrained weights from here.
- Feel free to rename all the `.pth` checkpoint files to `.jt` ones (a conversion sketch follows below).
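
If you prefer genuine Jittor pickles over simply renaming the files, something along the lines of the sketch below may help. This is our own assumption of a workflow, not part of the repo's scripts: the checkpoint path is illustrative, and the checkpoint is assumed to be a (possibly nested) dict of tensors.

```python
# convert_weights.py -- hypothetical .pth -> .jt conversion sketch (not part of the repo)
import torch
import jittor as jt

def to_numpy(obj):
    """Recursively turn torch tensors into numpy arrays so jt.save can pickle them."""
    if torch.is_tensor(obj):
        return obj.detach().cpu().numpy()
    if isinstance(obj, dict):
        return {k: to_numpy(v) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return type(obj)(to_numpy(v) for v in obj)
    return obj

# illustrative input/output paths
ckpt = torch.load("weights/photosketch_standing_cat_noaug.pth", map_location="cpu")
jt.save(to_numpy(ckpt), "weights/photosketch_standing_cat_noaug.jt")
```

Whether the downstream scripts accept such a file directly depends on how they load checkpoints, so treat this purely as a starting point.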
### Generate samples from a customized model

This command runs the customized model specified by `ckpt`, and generates samples to `save_dir`.

```bash
# generates samples from the "standing cat" model.
python generate.py --ckpt weights/photosketch_standing_cat_noaug.pth --save_dir output/samples_standing_cat

# generates samples from the cat face model in Figure 1 of the paper.
python generate.py --ckpt weights/by_author_cat_aug.pth --save_dir output/samples_teaser_cat

# generates samples from the customized FFHQ model.
python generate.py --ckpt weights/by_author_face0_aug.pth --save_dir output/samples_ffhq_face0 --size 1024 --batch_size 4
```
### Latent space edits by GANSpace

Our models preserve the latent space editability of the original model, and can apply the same edits using the latent directions reported in Härkönen et al. (GANSpace).

```bash
# add fur to the standing cats
python ganspace.py --obj cat --comp_id 27 --scalar 50 --layers 2,4 --ckpt weights/photosketch_standing_cat_noaug.pth --save_dir output/ganspace_fur_standing_cat

# close the eyes of the standing cats
python ganspace.py --obj cat --comp_id 45 --scalar 60 --layers 5,7 --ckpt weights/photosketch_standing_cat_noaug.pth --save_dir output/ganspace_eye_standing_cat
```
## Model Training

Training and evaluating models trained on PhotoSketch inputs requires running the Precision and Recall metric. The following command pulls the submodule of the forked Precision and Recall repo.

```bash
git submodule update --init --recursive
```
### Download Datasets and Pre-trained Models

The following scripts download our sketch data, our evaluation set, LSUN, and the pre-trained models from StyleGAN2 and PhotoSketch.

```bash
# Download the sketches
bash data/download_sketch_data.sh

# Download evaluation set
bash data/download_eval_data.sh

# Download pretrained models from StyleGAN2 and PhotoSketch
bash pretrained/download_pretrained_models.sh

# Download LSUN cat, horse, and church dataset
bash data/download_lsun.sh
```
To train FFHQ models with image regularization, please download the FFHQ dataset using this link. This is the zip file of 70,000 images at 1024x1024 resolution. Unzip the files, rename the `images1024x1024` folder to `ffhq`, and place it in `./data/image/`.
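
For reference, the unzip/rename step could look like the sketch below; the zip filename is an assumption about what the download produces, so adjust it as needed.

```python
# prepare_ffhq.py -- illustrative sketch of the unzip/rename step (paths are assumptions)
import os
import shutil
import zipfile

with zipfile.ZipFile("images1024x1024.zip") as zf:   # assumed name of the downloaded zip
    zf.extractall(".")                               # yields ./images1024x1024/

os.makedirs("data/image", exist_ok=True)
shutil.move("images1024x1024", "data/image/ffhq")    # rename and place under ./data/image/
```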
### Training Scripts

The example training configurations are specified using the scripts in the `scripts` folder. Use the following commands to launch training.

```bash
# Train the "horse riders" model
bash scripts/train_photosketch_horse_riders.sh

# Train the cat face model in Figure 1 of the paper.
bash scripts/train_teaser_cat.sh

# Train on a single quickdraw sketch
bash scripts/train_quickdraw_single_horse0.sh

# Train on sketches of faces (1024px)
bash scripts/train_authorsketch_ffhq0.sh

# Train on sketches of gabled church.
bash scripts/train_church.sh

# Train on sketches of standing cat.
bash scripts/train_standing_cat.sh
```
The training progress is tracked using `wandb` by default. To disable wandb logging, please add the `--no_wandb` flag to the training script.
## Evaluations
Please make sure the evaluation set and model weights are downloaded before running the evaluation.

```bash
# You may have run these scripts already in the previous sections
bash weights/download_weights.sh
bash data/download_eval_data.sh
```
Use the following script to evaluate the models; the results will be saved in a csv file specified by the `--output` flag. `--models_list` should contain a list of tuples of model weight paths and evaluation data; please see `weights/eval_list` for an example.

```bash
python run_metrics.py --models_list weights/eval_list --output metric_results.csv
```
## Related Works
- R. Gal, O. Patashnik, H. Maron, A. Bermano, G. Chechik, D. Cohen-Or. "StyleGAN-NADA: CLIP-Guided Domain Adaptation of Image Generators". In arXiv (concurrent work).
- D. Bau, S. Liu, T. Wang, J.-Y. Zhu, A. Torralba. "Rewriting a Deep Generative Model". In ECCV 2020.
- Y. Wang, A. Gonzalez-Garcia, D. Berga, L. Herranz, F. S. Khan, J. van de Weijer. "MineGAN: effective knowledge transfer from GANs to target domains with few images". In CVPR 2020.
- M. Eitz, J. Hays, M. Alexa. "How Do Humans Sketch Objects?". In SIGGRAPH 2012.