# InterfaceGAN++: Exploring the limits of InterfaceGAN
Authors: Apavou Clément & Belkada Younes
*From left to right: images generated using StyleGAN and the boundaries Bald, Blond, Heavy_Makeup, Gray_Hair.*
This is the repository for a project related to the Introduction to Numerical Imaging course (i.e., *Introduction à l'Imagerie Numérique* in French) given by the MVA Master's program at ENS Paris-Saclay. The project and repository are based on the work from Shen et al. and fully support their codebase. You can refer to the original README to reproduce their results.
- Introduction
- 🔥 Additional features
- 🔨 Training an attribute detection classifier
- ⭐ Generate images using StyleGAN & StyleGAN2 & StyleGAN3
- ✏️ Edit generated images
## Introduction
In this repository, we propose an approach, termed InterfaceGAN++, for semantic face editing based on the work from Shen et al. Specifically, we leverage the ideas of the previous work by applying the method to new face attributes, and also to StyleGAN3. We show qualitatively that moving the latent vector towards a trained boundary in many cases preserves the semantic information of the generated image (its local structure) while modifying the desired attribute, which helps demonstrate the disentanglement properties of the StyleGANs.
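To make the edit concrete: it is a linear move in latent space along the normal of a trained attribute hyperplane. Below is a minimal sketch (our illustration, not the repository's API), assuming a unit-norm boundary saved as a NumPy array:

```python
import numpy as np

# Minimal sketch of the InterfaceGAN edit: move a latent code along the
# normal vector of a trained attribute boundary (file name is illustrative).
boundary = np.load("boundary.npy")            # assumed unit-norm, shape (1, 512)
z = np.random.randn(1, 512)                   # latent code of a generated face

alphas = np.linspace(-3.0, 3.0, 7)            # edit strengths to sweep
edited = [z + a * boundary for a in alphas]   # z' = z + alpha * n
# Feeding each z' to the generator keeps the face's overall structure while
# progressively adding (alpha > 0) or removing (alpha < 0) the attribute.
```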
## 🔥 Additional features
- Supports StyleGAN2 & StyleGAN3 on the classic attributes
- New attributes (Bald, Gray hair, Blond hair, Earrings, ...) for:
  - StyleGAN
  - StyleGAN2
  - StyleGAN3
- Supports face generation using StyleGAN2 & StyleGAN3
The full list of new attributes can be found in our attribute detection classifier repository.
## 🔨 Training an attribute detection classifier
We use a ViT-base model to train an attribute detection classifier; please refer to our classification code if you want to test it on new models. Once you retrieve the trained SVM boundaries from that repository, you can move them into this repository and use them directly.
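For intuition, the boundary-fitting step reduces to training a linear SVM on latent codes labeled by the attribute classifier; here is a minimal sketch under that assumption (all file names are illustrative):

```python
import numpy as np
from sklearn import svm

# Sketch of boundary fitting: latent codes scored by the attribute
# classifier become training data for a linear SVM (file names illustrative).
latents = np.load("latents.npy")        # (N, 512) sampled latent codes
scores = np.load("scores.npy")          # (N,) classifier scores for one attribute

labels = (scores > 0.5).astype(int)     # binarize: attribute present / absent
clf = svm.LinearSVC()
clf.fit(latents, labels)

# The unit-norm hyperplane normal is the editing direction.
boundary = clf.coef_.reshape(1, -1).astype(np.float32)
boundary /= np.linalg.norm(boundary)
np.save("boundary.npy", boundary)
```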
## ⭐ Generate images using StyleGAN & StyleGAN2 & StyleGAN3
We did not change anything in the structure of the old repository; please refer to the previous README for StyleGAN.
### 🎥 Get the pretrained StyleGAN
We use the StyleGAN trained on FFHQ for our experiments. If you want to reproduce them, run:
```bash
wget -P interfacegan/models/pretrain https://www.dropbox.com/s/qyv37eaobnow7fu/stylegan_ffhq.pth
```
### 🎥 Get the pretrained StyleGAN2
We use the StyleGAN2 trained on FFHQ for our experiments. If you want to reproduce them, run:
```bash
wget -P models/pretrain https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan2/versions/1/files/stylegan2-ffhq-1024x1024.pkl
```
### 🎥 Get the pretrained StyleGAN3
We use the StyleGAN3 trained on FFHQ for our experiments. If you want to reproduce them, run:
```bash
wget -P models/pretrain https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan3/versions/1/files/stylegan3-t-ffhq-1024x1024.pkl
```
The pretrained models should be downloaded into `models/pretrain`. If not, move the pretrained model files into that directory.
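To sanity-check that a downloaded network is loadable, you can follow the loading convention from NVIDIA's StyleGAN3 README (the stylegan3 code must be importable, since the pickle references it):

```python
import pickle

# Quick sanity check: load the generator as in NVIDIA's StyleGAN3 README.
with open("models/pretrain/stylegan3-t-ffhq-1024x1024.pkl", "rb") as f:
    G = pickle.load(f)["G_ema"]       # torch.nn.Module
print(G.z_dim, G.img_resolution)      # expect 512 and 1024 for the FFHQ model
```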
### 🎨 Run the generation script
If you want to generate 10 images using the StyleGAN3 model downloaded above, run:
```bash
python generate_data.py -m stylegan3_ffhq -o output_stylegan3 -n 10
```
The arguments are exactly the same as in the original repository; in addition, the code supports the flags `-m stylegan3_ffhq` for StyleGAN3 and `-m stylegan2_ffhq` for StyleGAN2.
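For intuition, the generation step conceptually boils down to sampling latent codes, synthesizing images, and saving both so the codes can be edited later. The sketch below is our illustration, not what `generate_data.py` literally does; it loads the StyleGAN3 pickle directly, following NVIDIA's README:

```python
import os
import pickle

import numpy as np
import PIL.Image
import torch

# Conceptual sketch of the generation step (not the actual generate_data.py).
with open("models/pretrain/stylegan3-t-ffhq-1024x1024.pkl", "rb") as f:
    G = pickle.load(f)["G_ema"].cuda()

os.makedirs("output_stylegan3", exist_ok=True)
zs = torch.randn([10, G.z_dim]).cuda()      # 10 random latent codes
with torch.no_grad():
    for i, z in enumerate(zs):
        img = G(z.unsqueeze(0), None)       # NCHW float images in [-1, 1]
        img = (img.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to(torch.uint8)
        PIL.Image.fromarray(img[0].cpu().numpy(), "RGB").save(f"output_stylegan3/{i:03d}.png")
np.save("output_stylegan3/z.npy", zs.cpu().numpy())  # keep the codes for editing
```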
## ✏️ Edit generated images
You can edit the generated images using our trained boundaries! Depending on the generator you want to use, make sure you have downloaded the right model and put it into `models/pretrain`.
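Under the hood, editing sweeps the saved latent codes along a boundary; the original InterfaceGAN codebase exposes a `linear_interpolate` helper for this. A short sketch (paths are illustrative):

```python
import numpy as np

from utils.manipulator import linear_interpolate  # helper from the original codebase

# Sweep a saved latent code along one of the trained boundaries.
latent = np.load("output_stylegan3/z.npy")[:1]                     # illustrative path
boundary = np.load("boundaries/stylegan3_ffhq_bald_boundary.npy")  # illustrative path

# Returns `steps` latent codes going from -3 to +3 along the boundary normal;
# feeding each one to the generator renders the progressively edited faces.
codes = linear_interpolate(latent, boundary,
                           start_distance=-3.0, end_distance=3.0, steps=10)
```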
## Examples

Please refer to our interactive Google Colab notebook to play with our models.
### StyleGAN

Example of generated images using StyleGAN, moving the latent codes towards the direction of the attribute gray hair:

### StyleGAN2

Example of generated images using StyleGAN2, moving the latent codes in the opposite direction of the attribute young:

### StyleGAN3

Example of generated images using StyleGAN3, moving the latent codes towards the direction of the attribute beard: