InterFaceGAN - Interpreting the Latent Space of GANs for Semantic Face Editing

Overview

Requirements: Python 3.7, PyTorch 1.1.0, TensorFlow 1.12.2, scikit-learn 0.21.2

Figure: High-quality facial attribute editing results with InterFaceGAN.

In this repository, we propose InterFaceGAN, an approach for semantic face editing. Specifically, InterFaceGAN turns an unconditionally trained face synthesis model into a controllable GAN by interpreting the very first latent space and finding the hidden semantic subspaces.

[Paper (CVPR)] [Paper (TPAMI)] [Project Page] [Demo] [Colab]

How to Use

Pick up a model, pick up a boundary, pick up a latent code, and then EDIT!

# Before running the following code, please first download
# the pre-trained ProgressiveGAN model on CelebA-HQ dataset,
# and then place it under the folder "models/pretrain/".
LATENT_CODE_NUM=10
python edit.py \
    -m pggan_celebahq \
    -b boundaries/pggan_celebahq_smile_boundary.npy \
    -n "$LATENT_CODE_NUM" \
    -o results/pggan_celebahq_smile_editing

GAN Models Used (Prior Work)

Before going into details, we would like to first introduce the two state-of-the-art GAN models used in this work, which are ProgressiveGAN (Karras et al., ICLR 2018) and StyleGAN (Karras et al., CVPR 2019). These two models achieve high-quality face synthesis by learning unconditional GANs. For more details about these two models, please refer to the original papers, as well as the official implementations.

ProgressiveGAN: [Paper] [Code]

StyleGAN: [Paper] [Code]

Code Instruction

Generative Models

A GAN-based generative model basically maps latent codes (commonly sampled from a high-dimensional latent space, such as a standard normal distribution) to photo-realistic images. Accordingly, a base class for generators, called BaseGenerator, is defined in models/base_generator.py. It should contain the following member functions (see the sketch after this list):

  • build(): Build a PyTorch module.
  • load(): Load pre-trained weights.
  • convert_tf_model() (Optional): Convert pre-trained weights from a TensorFlow model.
  • sample(): Randomly sample latent codes. This function should specify what kind of distribution the latent codes are subject to.
  • preprocess(): Preprocess the latent codes before feeding them into the generator.
  • synthesize(): Run the model to get synthesized results (or any other intermediate outputs).
  • postprocess(): Postprocess the outputs from the generator to convert them to images.
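
The sketch below shows how these member functions are intended to chain together at inference time. Treat it as a minimal sketch: the exact method signatures and return types live in models/base_generator.py and may differ slightly from what is assumed here.

from models.pggan_generator import PGGANGenerator

# Minimal sketch of the intended call sequence (signatures are assumptions;
# see models/base_generator.py for the authoritative interface).
model = PGGANGenerator('pggan_celebahq')      # build() and load() run internally
codes = model.sample(4)                       # sample() draws 4 latent codes
codes = model.preprocess(codes)               # preprocess() prepares them for the net
outputs = model.synthesize(codes)             # synthesize() runs the generator
images = model.postprocess(outputs['image'])  # assuming a dict output with an 'image' key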

We have already provided the following models in this repository:

  • ProgressiveGAN:
    • A clone of the official tensorflow implementation: models/pggan_tf_official/. This clone is only used for converting tensorflow pre-trained weights to pytorch ones. This conversion is done automatically the first time the model is used. After that, the tensorflow version is no longer used.
    • Pytorch implementation of official model (just for inference): models/pggan_generator_model.py.
    • Generator class derived from BaseGenerator: models/pggan_generator.py.
    • Please download the official released model trained on CelebA-HQ dataset and place it in folder models/pretrain/.
  • StyleGAN:
    • A clone of the official tensorflow implementation: models/stylegan_tf_official/. This clone is only used for converting tensorflow pre-trained weights to pytorch ones. This conversion is done automatically the first time the model is used. After that, the tensorflow version is no longer used.
    • Pytorch implementation of official model (just for inference): models/stylegan_generator_model.py.
    • Generator class derived from BaseGenerator: models/stylegan_generator.py.
    • Please download the official released models trained on CelebA-HQ dataset and FF-HQ dataset and place them in folder models/pretrain/.
    • Support synthesizing images from $\mathcal{Z}$ space, $\mathcal{W}$ space, and extended $\mathcal{W}$ space (18x512).
    • Set the truncation trick and noise randomization trick in models/model_settings.py. Among them, we highly recommend setting STYLEGAN_RANDOMIZE_NOISE to False. STYLEGAN_TRUNCATION_PSI = 0.7 and STYLEGAN_TRUNCATION_LAYERS = 8 are inherited from the official implementation. Users can customize their own models. NOTE: These three settings will NOT affect the pre-trained weights.
  • Customized model:
    • Users can run experiments with their own models by simply deriving a new class from BaseGenerator.
    • Before use, the new model should first be registered in MODEL_POOL in models/model_settings.py (a hypothetical sketch follows this list).
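
As an illustration of deriving a custom model, a new generator might look like the hypothetical sketch below (the class name MyGANGenerator and its internals are assumptions for illustration, not part of this repository):

import numpy as np

from models.base_generator import BaseGenerator

class MyGANGenerator(BaseGenerator):
    """Hypothetical custom generator; all names below are illustrative."""

    def build(self):
        # Instantiate your own PyTorch module, e.g. self.model = MyGeneratorNet().
        raise NotImplementedError

    def load(self):
        # Load pre-trained weights into self.model.
        raise NotImplementedError

    def sample(self, num):
        # Specify the latent distribution, e.g. np.random.randn(num, latent_dim).
        raise NotImplementedError

Once the class is registered in MODEL_POOL, the scripts described below can use it by name.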

Utility Functions

We provide the following utility functions in utils/manipulator.py to make InterFaceGAN easier to use.

  • train_boundary(): This function can be used for boundary searching. It takes pre-prepared latent codes and the corresponding attribute scores as inputs, and outputs the normal direction of the separation boundary. Basically, this is achieved by training a linear SVM. The returned vector can be further used for semantic face editing (see the sketch after this list).
  • project_boundary(): This function can be used for conditional manipulation. It takes a primal direction and other conditional directions as inputs, and outputs a new normalized direction. Moving a latent code along this new direction manipulates the primal attribute yet barely affects the conditioned attributes. NOTE: For now, at most two conditions are supported.
  • linear_interpolate(): This function can be used for semantic face editing. It takes a latent code and the normal direction of a particular semantic boundary as inputs, and outputs a collection of manipulated latent codes via linear interpolation. These interpolations can be used to see how the synthesis varies as the latent code moves along the given direction.
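
To make the first and third functions concrete, here is a minimal, self-contained sketch of the underlying idea; the repository's train_boundary() adds sample selection and validation on top of this:

import numpy as np
from sklearn import svm

def fit_boundary(latent_codes, scores):
    """Fit a linear SVM and return the unit normal of its separation plane."""
    labels = (scores.ravel() > np.median(scores)).astype(int)  # binarize scores
    clf = svm.LinearSVC().fit(latent_codes, labels)
    normal = clf.coef_.reshape(1, -1)
    return normal / np.linalg.norm(normal)

def interpolate(latent_code, boundary, start=-3.0, end=3.0, steps=10):
    """Linearly move a latent code along the boundary normal."""
    alphas = np.linspace(start, end, steps).reshape(-1, 1)
    return latent_code + alphas * boundary  # one manipulated code per step

codes = np.random.randn(1000, 512)  # stand-in for codes saved by generate_data.py
scores = np.random.rand(1000, 1)    # stand-in for predicted attribute scores
boundary = fit_boundary(codes, scores)
edited = interpolate(codes[:1], boundary)  # 10 edited versions of the first code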

Tools

  • generate_data.py: This script can be used for data preparation. It will generate a collection of syntheses (images are saved for further attribute prediction) as well as save the input latent codes.

  • train_boundary.py: This script can be used for boundary searching.

  • edit.py: This script can be used for semantic face editing.

Usage

We take the ProgressiveGAN model trained on the CelebA-HQ dataset as an example.

Prepare data

NUM=10000
python generate_data.py -m pggan_celebahq -o data/pggan_celebahq -n "$NUM"

Predict Attribute Score

Get your own predictor for attribute $ATTRIBUTE_NAME, evaluate it on all generated images, and save the inference results as data/pggan_celebahq/"$ATTRIBUTE_NAME"_scores.npy. NOTE: The saved results should have shape ($NUM, 1); see the sketch below.
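
The format is easy to get wrong, so here is a sketch of saving the scores (my_predictor is a hypothetical stand-in for your own attribute predictor, and "smile" is just an example attribute name):

import numpy as np

NUM = 10000
scores = my_predictor(images)  # hypothetical predictor: one scalar score per image
scores = np.asarray(scores, dtype=np.float32).reshape(NUM, 1)
np.save('data/pggan_celebahq/smile_scores.npy', scores)  # shape (NUM, 1)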

Search Semantic Boundary

python train_boundary.py \
    -o boundaries/pggan_celebahq_"$ATTRIBUTE_NAME" \
    -c data/pggan_celebahq/z.npy \
    -s data/pggan_celebahq/"$ATTRIBUTE_NAME"_scores.npy

Compute Conditional Boundary (Optional)

This step is optional and depends on whether conditional manipulation is needed. Users can use the function project_boundary() in utils/manipulator.py to compute the projected direction; the underlying projection is sketched below.
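
The projection itself is simple: subtract from the primal direction its component along the conditioned direction, then re-normalize. A sketch for a single condition (the repository's project_boundary() also handles two conditions):

import numpy as np

def project(primal, condition):
    """Project the primal boundary onto the subspace orthogonal to the condition."""
    projected = primal - (primal @ condition.T) * condition
    return projected / np.linalg.norm(projected)

age = np.load('boundaries/pggan_celebahq_age_boundary.npy')        # shape (1, 512)
gender = np.load('boundaries/pggan_celebahq_gender_boundary.npy')  # shape (1, 512)
age_c_gender = project(age, gender)  # edits age while barely affecting gender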

Boundaries Description

We provide the following boundaries in the folder boundaries/. The boundaries can be more accurate if a stronger attribute predictor is used. An example editing command follows the list.

  • ProgressiveGAN model trained on CelebA-HQ dataset:

    • Single boundary:
      • pggan_celebahq_pose_boundary.npy: Pose.
      • pggan_celebahq_smile_boundary.npy: Smile (expression).
      • pggan_celebahq_age_boundary.npy: Age.
      • pggan_celebahq_gender_boundary.npy: Gender.
      • pggan_celebahq_eyeglasses_boundary.npy: Eyeglasses.
      • pggan_celebahq_quality_boundary.npy: Image quality.
    • Conditional boundary:
      • pggan_celebahq_age_c_gender_boundary.npy: Age (conditioned on gender).
      • pggan_celebahq_age_c_eyeglasses_boundary.npy: Age (conditioned on eyeglasses).
      • pggan_celebahq_age_c_gender_eyeglasses_boundary.npy: Age (conditioned on gender and eyeglasses).
      • pggan_celebahq_gender_c_age_boundary.npy: Gender (conditioned on age).
      • pggan_celebahq_gender_c_eyeglasses_boundary.npy: Gender (conditioned on eyeglasses).
      • pggan_celebahq_gender_c_age_eyeglasses_boundary.npy: Gender (conditioned on age and eyeglasses).
      • pggan_celebahq_eyeglasses_c_age_boundary.npy: Eyeglasses (conditioned on age).
      • pggan_celebahq_eyeglasses_c_gender_boundary.npy: Eyeglasses (conditioned on gender).
      • pggan_celebahq_eyeglasses_c_age_gender_boundary.npy: Eyeglasses (conditioned on age and gender).
  • StyleGAN model trained on CelebA-HQ dataset:

    • Single boundary in $\mathcal{Z}$ space:
      • stylegan_celebahq_pose_boundary.npy: Pose.
      • stylegan_celebahq_smile_boundary.npy: Smile (expression).
      • stylegan_celebahq_age_boundary.npy: Age.
      • stylegan_celebahq_gender_boundary.npy: Gender.
      • stylegan_celebahq_eyeglasses_boundary.npy: Eyeglasses.
    • Single boundary in $\mathcal{W}$ space:
      • stylegan_celebahq_pose_w_boundary.npy: Pose.
      • stylegan_celebahq_smile_w_boundary.npy: Smile (expression).
      • stylegan_celebahq_age_w_boundary.npy: Age.
      • stylegan_celebahq_gender_w_boundary.npy: Gender.
      • stylegan_celebahq_eyeglasses_w_boundary.npy: Eyeglasses.
  • StyleGAN model trained on FF-HQ dataset:

    • Single boundary in $\mathcal{Z}$ space:
      • stylegan_ffhq_pose_boundary.npy: Pose.
      • stylegan_ffhq_smile_boundary.npy: Smile (expression).
      • stylegan_ffhq_age_boundary.npy: Age.
      • stylegan_ffhq_gender_boundary.npy: Gender.
      • stylegan_ffhq_eyeglasses_boundary.npy: Eyeglasses.
    • Conditional boundary in $\mathcal{Z}$ space:
      • stylegan_ffhq_age_c_gender_boundary.npy: Age (conditioned on gender).
      • stylegan_ffhq_age_c_eyeglasses_boundary.npy: Age (conditioned on eyeglasses).
      • stylegan_ffhq_eyeglasses_c_age_boundary.npy: Eyeglasses (conditioned on age).
      • stylegan_ffhq_eyeglasses_c_gender_boundary.npy: Eyeglasses (conditioned on gender).
    • Single boundary in $\mathcal{W}$ space:
      • stylegan_ffhq_pose_w_boundary.npy: Pose.
      • stylegan_ffhq_smile_w_boundary.npy: Smile (expression).
      • stylegan_ffhq_age_w_boundary.npy: Age.
      • stylegan_ffhq_gender_w_boundary.npy: Gender.
      • stylegan_ffhq_eyeglasses_w_boundary.npy: Eyeglasses.
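
For example, to edit age with the $\mathcal{W}$-space boundary of the FF-HQ model, the latent space can be passed explicitly (assuming edit.py accepts the -s latent-space flag, as used in the comments below):

python edit.py \
    -m stylegan_ffhq \
    -b boundaries/stylegan_ffhq_age_w_boundary.npy \
    -n 10 \
    -s w \
    -o results/stylegan_ffhq_age_w_editing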

BibTeX

@inproceedings{shen2020interpreting,
  title     = {Interpreting the Latent Space of GANs for Semantic Face Editing},
  author    = {Shen, Yujun and Gu, Jinjin and Tang, Xiaoou and Zhou, Bolei},
  booktitle = {CVPR},
  year      = {2020}
}
@article{shen2020interfacegan,
  title   = {InterFaceGAN: Interpreting the Disentangled Face Representation Learned by GANs},
  author  = {Shen, Yujun and Yang, Ceyuan and Tang, Xiaoou and Zhou, Bolei},
  journal = {TPAMI},
  year    = {2020}
}
Comments
  • Issue learning latent encoding for new faces

    I am trying to derive latent encodings for custom faces, as done in https://github.com/Puzer/stylegan-encoder.

    Here are the details after porting the same to pytorch:

    import numpy as np
    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    import torch.optim as optim
    from PIL import Image
    from torchvision import models
    from tqdm import tqdm
    
    from models.stylegan_generator import StyleGANGenerator
    
    device = torch.device('cuda')
    
    #load the pre-trained synthesis network
    m_synth = StyleGANGenerator("stylegan_ffhq").model.synthesis.cuda().eval()
    
    #process the output of the synthesis module
    class PostProcAfterSynth(nn.Module):
        def __init__(self):
            super(PostProcAfterSynth, self).__init__()
        def forward(self, gen_img):
            #remap to [0,1]
            return (gen_img+1)/2
        
    post_proc_layer = PostProcAfterSynth()
    
    #preprocess the generated image before feeding into perceptual model    
    class PreProcBeforePerception(nn.Module):
        def __init__(self, img_size):
            super(PreProcBeforePerception, self).__init__()
            self.img_size = img_size
            self.mean = torch.tensor([0.485, 0.456, 0.406], device=device).view(-1, 1, 1)
            self.std = torch.tensor([0.229, 0.224, 0.225], device=device).view(-1, 1, 1)
        def forward(self, gen_img):
            #resize input image
            gen_img = F.adaptive_avg_pool2d(gen_img, self.img_size)
            #normalize
            gen_img = (gen_img - self.mean) / self.std
            return gen_img
        
    pre_proc_layer = PreProcBeforePerception(img_size=256)
    
    #use pre-trained vgg model for feature extraction
    m_vgg = models.vgg16(pretrained=True).features[:16].to(device).eval()
    
    #set up the model
    model = nn.Sequential(m_synth)
    model.add_module(str(1), post_proc_layer)
    model.add_module(str(2), pre_proc_layer)
    model.add_module(str(3), m_vgg)
    
    for param in model.parameters():
        param.requires_grad_(False)
    

    print(m_vgg)

    Sequential(
      (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): ReLU(inplace)
      (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (3): ReLU(inplace)
      (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
      (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (6): ReLU(inplace)
      (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (8): ReLU(inplace)
      (9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
      (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (11): ReLU(inplace)
      (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (13): ReLU(inplace)
      (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (15): ReLU(inplace)
    )
    

    As done by Puzer, I select the [conv->conv->pool->conv->conv->pool->conv->conv->conv] section of the vgg network for feature extraction.

    Pre-computing the features for the reference image:

    ref_img_path = "."
    ref_img = np.array(Image.open(ref_img_path))
    ref_img = ref_img.astype(np.float32)/255.
    ref_img = np.array([np.transpose(ref_img, (2,0,1))])
    ref_img = torch.tensor(ref_img, device=device)
    ref_img = pre_proc_layer(ref_img)
    ref_img_features = m_vgg(ref_img).detach()
    

    Optimization:

    trainable_latent = torch.randn((1,18,512), device=device).requires_grad_(True)
    loss_func = torch.nn.MSELoss()
    
    optimizer = optim.SGD([trainable_latent], lr=0.5)
    
    losses = []
    for i in tqdm(range(1000)):
        optimizer.zero_grad()
        gen_img_features = model(trainable_latent)
        loss = loss_func(gen_img_features, ref_img_features)
        loss_val = loss.data.cpu()
        losses.append(loss_val)
        loss.backward()
        optimizer.step()
    

    The latent encoding and the subsequently generated images are of poor quality. The results are nowhere near as crisp as those by Puzer.

    What I have tried:

    1. Learning a Z-space latent instead of WP+
    2. A variety of optimizer, learning rate, and iteration combos

    What could be wrong:

    1. There might be issues with my pipeline above (new to pytorch)
    2. There might be some difference between the pre-trained VGG networks for PyTorch and Keras that I have failed to take into account.
    3. The perceptual model used is not complex enough. (but it does work for Puzer)

    Any help with the above would be much appreciated.

    opened by njordsir 9
  • Can it make an adult into a baby?

    Hi, the age demo in the paper turns an adult into a child. Could you please tell me what happens if I set the age attribute to an extreme value?

    Can it turn an adult into a one-year-old baby? Can the generator still output a normal human face?

    opened by WJ-Lai 6
  • AssertionError: Torch not compiled with CUDA enabled

    (base) PS E:\darshan\pytorch_stylegan_encoder-master\InterFaceGAN> python generate_data.py -m stylegan_ffhq -o data/pggan_celebahq -n 10000
    [2020-01-20 03:53:48,282][INFO] Initializing generator.
    [2020-01-20 03:53:48,521][WARNING] No pre-trained model will be loaded!
    Traceback (most recent call last):
      File "generate_data.py", line 111, in <module>
        main()
      File "generate_data.py", line 64, in main
        model = StyleGANGenerator(args.model_name, logger)
      File "E:\darshan\pytorch_stylegan_encoder-master\InterFaceGAN\models\stylegan_generator.py", line 42, in __init__
        super().__init__(model_name, logger)
      File "E:\darshan\pytorch_stylegan_encoder-master\InterFaceGAN\models\base_generator.py", line 103, in __init__
        self.model.eval().to(self.run_device)
      File "C:\Users\HpZ8\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 426, in to
        return self._apply(convert)
      File "C:\Users\HpZ8\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 202, in _apply
        module._apply(fn)
      File "C:\Users\HpZ8\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 202, in _apply
        module._apply(fn)
      File "C:\Users\HpZ8\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 202, in _apply
        module._apply(fn)
      File "C:\Users\HpZ8\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 224, in _apply
        param_applied = fn(param)
      File "C:\Users\HpZ8\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 424, in convert
        return t.to(device, dtype if t.is_floating_point() else None, non_blocking)
      File "C:\Users\HpZ8\Anaconda3\lib\site-packages\torch\cuda\__init__.py", line 192, in _lazy_init
        _check_driver()
      File "C:\Users\HpZ8\Anaconda3\lib\site-packages\torch\cuda\__init__.py", line 95, in _check_driver
        raise AssertionError("Torch not compiled with CUDA enabled")
    AssertionError: Torch not compiled with CUDA enabled

    opened by drprajapati 6
  • generate_data.py generates empty images

    Hi guys,

    Thank you for your great work. I have a question.

    I found that generate_data.py works very well with my StyleGAN models, but when I trained PGGAN on the same dataset of images and then ran generate_data.py to generate 10k images, all the images look like this:

    [image: 000009]

    I tried another checkpoint but it gave similar results:

    [image: 000015]

    I guess it is caused by some misconfiguration between my PGGAN and the inference code of InterFaceGAN, but I haven't found it yet. Can you give me some advice, please?

    opened by doantientai 6
  • How to perform StyleGAN inversion?

    Hi Yujun,

    In the paper you state that a GAN inversion method must be used to map real images to latent codes, and that StyleGAN inversion methods work much better. Are there any documents introducing how to do the inversion? Any comments are appreciated! Best regards.

    opened by damonzhou 6
  • Issues converting custom pggan models

    It seems that for PGGAN, the state dictionaries of the model trained by the paper's authors (file karras2018iclr-celebahq-1024x1024.pkl) and of custom models trained with their released code are different. You can check the differences here: https://www.diffchecker.com/0hFYlK82.

    Unfortunately I was unable to tweak your code so that it would convert my trained model correctly.

    Did you try converting a custom-trained model using the PGGAN code? If so, could you please provide a model you trained that worked for you, other than the one provided by the PGGAN authors?

    opened by AndrewMakar 6
  • How to manipulate real faces?

    Dear authors, after checking this repository, I have found that it does not include the encoder-decoder model the paper tests in Figure 11. Will this be released in the near future?

    opened by AddASecond 5
  • Roll and Pitch for Pose data

    Hi Shenyujun, thanks so much for the awesome work! I saw you have a pose direction in the repo, but that direction contains no roll or pitch rotation. Do you think it's possible to train with pitch, roll, and yaw together? BTW, is there any attribute predictor or data you could share regarding the pose generation? That would be really helpful; I couldn't find one in the original StyleGAN repo. Thanks!

    opened by shiyingZhang90 5
  • quality on stylegan_ffhq

    Hi, thanks for the paper and the results are impressive!

    I tested the code with the "stylegan_ffhq" model and "stylegan_ffhq_pose_boundary.npy" or "stylegan_ffhq_pose_w_boundary.npy" using the default settings, but the results are not very good.

    The person's identity, age, and even gender change simultaneously with the pose. As for "stylegan_ffhq_pose_w_boundary.npy", the pose changes are more or less negligible.

    python edit.py -m stylegan_ffhq -o results/stylegan_ffhq_pose_w_boundary -b ./boundaries/stylegan_ffhq_pose_w_boundary.npy -n 10

    Is there anything that I have to adjust?

    opened by IQ17 5
  • Questions about the truncation module.

    I have a question about your implementation of the truncation module. Why are the first 9 channels of the W+ code the same? It looks like you separate the W+ code into just 2 blocks rather than 18 blocks. This is strange because in the official code each channel (I mean of the 18, not the 512) of the W+ code is different.

    opened by gsygsy96 5
  • Attributes predictor

    Hi! Thanks for your great research! Is there any chance you can provide the attribute predictor that you used for finding boundaries? I didn't really understand the scores it should give to attributes in the pictures. Also, for binary attributes like glasses/no glasses, is it possible to manually put labels of, say, 1 for glasses and 0 for no glasses and feed this data to train a new boundary?

    opened by muxgt 5
  • Can I use my own face for Semantic Face Editing

    Hi there!

    Thank you so much for such a wonderful repo; I appreciate your efforts!

    Can you guys tell me how I can provide my own input images for experimentation, instead of some celeb data? @ShenYujun @younesbelkada @clementapa

    In order to do it, what changes do I need to make in the Colab file? I am using this file: https://colab.research.google.com/github/genforce/interfacegan/blob/master/docs/InterFaceGAN.ipynb#scrollTo=ccONBF60mVir

    Basically, I want to use this code to check it on my set of images.

    Any help would be appreciated. Thanks in advance!

    opened by Niraj-Lunavat 0
  • Access to training modules

    I am looking for the training module in your provided code files so I can train the model on boat data. Can you please guide me to where I can find the required files to train your model on my data? Or is there any other way to fine-tune the existing models?

    opened by shahbazbaig 0
  • AttributeError: module 'tensorflow' has no attribute 'get_variable'

    [image: InterFaceGAN_error] Hello, I am trying to implement your code, but I keep getting this error despite changing all 'tf.' functions to 'tf.compat.v1.'. I solved a couple of 'no module' errors, but I cannot find this one anywhere in the .py files. Does it happen upon loading the model? Thank you in advance for your help :)

    opened by hamediut 0
  • About editing

    I can get a latent code by using an encoder network to invert a real image; feeding it into the pre-trained StyleGAN2 gives a good reconstruction, so that part works well. But when I save this latent code as .npy and use:

    python edit.py -m stylegan_celebahq -b boundaries/stylegan_celebahq_eyeglasses_boundary.npy -i MY_LATENT_CODES_FILE.npy -o results/my_image -s wp

    the picture is strange, like this: [image: 000_000]

    opened by JNash123 2