StyleGAN-NADA: CLIP-Guided Domain Adaptation of Image Generators

Overview

[Open In Colab] [arXiv]

[Project Website] [Replicate.ai Project]

StyleGAN-NADA: CLIP-Guided Domain Adaptation of Image Generators
Rinon Gal, Or Patashnik, Haggai Maron, Gal Chechik, Daniel Cohen-Or

Abstract:
Can a generative model be trained to produce images from a specific domain, guided by a text prompt only, without seeing any image? In other words: can an image generator be trained blindly? Leveraging the semantic power of large scale Contrastive-Language-Image-Pre-training (CLIP) models, we present a text-driven method that allows shifting a generative model to new domains, without having to collect even a single image from those domains. We show that through natural language prompts and a few minutes of training, our method can adapt a generator across a multitude of domains characterized by diverse styles and shapes. Notably, many of these modifications would be difficult or outright impossible to reach with existing methods. We conduct an extensive set of experiments and comparisons across a wide range of domains. These demonstrate the effectiveness of our approach and show that our shifted models maintain the latent-space properties that make generative models appealing for downstream tasks.

Description

This repo contains the official implementation of StyleGAN-NADA, a Non-Adversarial Domain Adaptation method for image generators. At a high level, our method works with two paired generators. We initialize both from a pre-trained model (for example, FFHQ). We hold one generator constant and train the other by demanding that the direction between their generated images in CLIP space aligns with a given textual direction.
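For intuition, the heart of this objective is a directional CLIP loss. Below is a minimal sketch of that loss, not the repo's exact implementation; the real version in ZSSGAN/criteria/clip_loss.py adds prompt templates, multiple CLIP models and several auxiliary terms, and also resizes generator outputs to CLIP's input resolution:

import torch
import torch.nn.functional as F
import clip

class DirectionalCLIPLoss(torch.nn.Module):
    # Minimal sketch: assumes images are already resized/normalized for CLIP.
    def __init__(self, device="cuda", clip_model="ViT-B/32"):
        super().__init__()
        self.device = device
        self.model, _ = clip.load(clip_model, device=device)

    def encode_text(self, text):
        tokens = clip.tokenize([text]).to(self.device)
        features = self.model.encode_text(tokens)
        return features / features.norm(dim=-1, keepdim=True)

    def encode_image(self, images):
        features = self.model.encode_image(images)
        return features / features.norm(dim=-1, keepdim=True)

    def forward(self, frozen_img, source_text, trainable_img, target_text):
        # Text direction, e.g. "sketch" - "photo", in CLIP space.
        text_dir = self.encode_text(target_text) - self.encode_text(source_text)
        text_dir = text_dir / text_dir.norm(dim=-1, keepdim=True)

        # Image direction: adapted generator output minus frozen generator output.
        img_dir = self.encode_image(trainable_img) - self.encode_image(frozen_img)
        img_dir = img_dir / img_dir.norm(dim=-1, keepdim=True)

        # Encourage the image-space direction to align with the text-space direction.
        return (1.0 - F.cosine_similarity(img_dir, text_dir)).mean()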

The following diagram illustrates the process:

We set up a colab notebook so you can play with it yourself :) Let us know if you come up with any cool results!

We've also included inversion in the notebook (using ReStyle) so you can use the paired generators to edit real images. Most edits will work well with the pSp version of ReStyle, which also allows for more accurate reconstructions. In some cases, you may need to switch to the e4e based encoder for better editing at the cost of reconstruction accuracy.
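Conceptually, editing a real image then boils down to inverting it into a w latent with one of these encoders and feeding that latent to the adapted generator. Below is a rough sketch of that last step, assuming a rosinality-style Generator from ZSSGAN/model/sg2_model, a checkpoint saved with a "g_ema" key, and a saved .npy latent; paths, latent shapes and constructor arguments are illustrative and should be checked against the repo:

import numpy as np
import torch
from ZSSGAN.model.sg2_model import Generator  # rosinality-based StyleGAN2

device = "cuda"

# Load the adapted (e.g. "sketch") generator from a NADA checkpoint.
ckpt = torch.load("/path/to/adapted_generator.pt", map_location=device)
generator = Generator(1024, 512, 8).to(device)
generator.load_state_dict(ckpt["g_ema"], strict=False)
generator.eval()

# A w(+) latent produced by an off-the-shelf inversion network such as ReStyle.
w = torch.from_numpy(np.load("/path/to/latent000.npy")).float().to(device)
if w.dim() == 2:      # [n_layers, 512] -> [1, n_layers, 512]
    w = w.unsqueeze(0)

with torch.no_grad():
    img, _ = generator([w], input_is_latent=True, randomize_noise=False)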

Updates

03/10/2021 (A) Interpolation video script now supports InterFaceGAN-based editing.
03/10/2021 (B) Updated the notebook with support for target style images.
03/10/2021 (C) Added replicate.ai support. You can now run inference or generate videos without needing to set up anything or work with code.
22/08/2021 Added a script for generating cross-domain interpolation videos (similar to the top video in the project page).
21/08/2021 (A) Added the ability to mimic styles from an image set. See the usage section.
21/08/2021 (B) Added dockerized UI tool.
21/08/2021 (C) Added link to drive with pre-trained models.

Generator Domain Adaptation

We provide many examples of converted generators on our project page. Here are a few samples:

Setup

The code relies on the official implementation of CLIP, and the Rosinality pytorch implementation of StyleGAN2.

Requirements

  • Anaconda
  • Pretrained StyleGAN2 generator (can be downloaded from here). You can also download a model from here and convert it with the provided script. See the colab notebook for examples.

In addition, run the following commands:

conda install --yes -c pytorch pytorch=1.7.1 torchvision cudatoolkit=<CUDA_VERSION>
pip install ftfy regex tqdm
pip install git+https://github.com/openai/CLIP.git
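A quick sanity check that the environment is set up (not part of the repo, just a minimal verification that PyTorch sees the GPU and CLIP loads):

import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
print("Using device:", device)
print("Available CLIP models:", clip.available_models())

# Load one of the models used by the training script.
model, preprocess = clip.load("ViT-B/32", device=device)
print("Loaded ViT-B/32")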

Usage

To convert a generator from one domain to another, use the colab notebook or run the training script in the ZSSGAN directory:

python train.py --size 1024 \
                --batch 2 \
                --n_sample 4 \
                --output_dir /path/to/output/dir \
                --lr 0.002 \
                --frozen_gen_ckpt /path/to/stylegan2-ffhq-config-f.pt \
                --iter 301 \
                --source_class "photo" \
                --target_class "sketch" \
                --auto_layer_k 18 \
                --auto_layer_iters 1 \
                --auto_layer_batch 8 \
                --output_interval 50 \
                --clip_models "ViT-B/32" "ViT-B/16" \
                --clip_model_weights 1.0 1.0 \
                --mixing 0.0 \
                --save_interval 150

where size should be adjusted to match the size of the pre-trained model, and the source_class and target_class descriptions control the direction of change. For an explanation of each argument (and a few additional options), please consult ZSSGAN/options/train_options.py. For most modifications these default parameters should be good enough. See the colab notebook for more detailed directions.

21/08/2021 Instead of using source and target texts, you can now target a style represented by a few images. Simply replace the --source_class and --target_class options with:

--style_img_dir /path/to/img/dir

where the directory should contain a few images (png, jpg or jpeg) with the style you want to mimic. There is no need to normalize or preprocess the images in any form.

Some results of converting an FFHQ model using children's drawings, LSUN Cars using Dali paintings and LSUN Cat using abstract sketches:

Pre-Trained Models

We provide a Google Drive containing an assortment of models used in the paper, tweets and other locations. If you want access to a model not yet included in the drive, please let us know.

Docker

We now provide a simple dockerized interface for training models. The UI currently supports a subset of the colab options, but does not require repeated setups.

In order to use the docker version, you must have a CUDA-compatible GPU and must install nvidia-docker and docker-compose first.

After cloning the repo, simply run:

cd StyleGAN-nada/
docker-compose up
  • Downloading the docker for the first time may take a few minutes.
  • While the docker is running, the UI should be available under http://localhost:8888/
  • The UI was tested using an RTX3080 GPU with 16GB of RAM. Smaller GPUs may run into memory limits with large models.

If you find the UI useful and want it expanded to allow easier access to saved models, support for real image editing, etc., please let us know.

Editing Video

In order to generate a cross-domain editing video (such as the one at the top of our project page), prepare a set of edited latent codes in the original domain and run the generate_videos.py script in the ZSSGAN directory:

python generate_videos.py --ckpt /model_dir/pixar.pt             \
                                 /model_dir/ukiyoe.pt            \
                                 /model_dir/edvard_munch.pt      \
                                 /model_dir/botero.pt            \
                          --out_dir /output/video/               \
                          --source_latent /latents/latent000.npy \
                          --target_latents /latents/
  • The script relies on ffmpeg to function. On Linux it can be installed by running sudo apt install ffmpeg
  • The argument to --ckpt is a list of model checkpoints used to fill the grid.
    • The number of models must be a perfect square, e.g. 1, 4, 9...
  • The argument to --target_latents can be either a directory containing a set of .npy w-space latent codes, or a list of individual files.
  • Please see the script for more details.

We provide example latent codes for the same identity used in our video. If you want to generate your own, we recommend using StyleCLIP, InterFaceGAN, StyleFlow, GANSpace or any other latent space editing method.
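If you prefer to sample latents yourself rather than use an editing method, a rough way to obtain and save a w code with the rosinality-style generator could look like the sketch below (illustrative only; check generate_videos.py for the exact latent shape it expects):

import numpy as np
import torch
from ZSSGAN.model.sg2_model import Generator  # rosinality-based StyleGAN2

device = "cuda"
ckpt = torch.load("/path/to/stylegan2-ffhq-config-f.pt", map_location=device)
generator = Generator(1024, 512, 8).to(device)
generator.load_state_dict(ckpt["g_ema"], strict=False)
generator.eval()

with torch.no_grad():
    z = torch.randn(1, 512, device=device)
    w = generator.get_latent(z)   # map z -> w through the mapping network

np.save("/latents/latent000.npy", w.cpu().numpy())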

03/10/2021 We now provide editing directions for use in video generation. To use the built-in directions, omit the --target_latents argument. You can use specific editing directions from the available list by passing them with the --edit_directions flag. See generate_videos.py for more information.

Related Works

The concept of using CLIP to guide StyleGAN generation results was introduced in StyleCLIP (Patashnik et al.).

We invert real images into the GAN's latent space using ReStyle (Alaluf et al.).

Editing directions for video generation were taken from Anycost GAN (Lin et al.).

Citation

If you make use of our work, please cite our paper:

@misc{gal2021stylegannada,
      title={StyleGAN-NADA: CLIP-Guided Domain Adaptation of Image Generators}, 
      author={Rinon Gal and Or Patashnik and Haggai Maron and Gal Chechik and Daniel Cohen-Or},
      year={2021},
      eprint={2108.00946},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Additional examples:

Our method can be used to enable out-of-domain editing of real images, using pre-trained, off-the-shelf inversion networks. Here are a few more examples:

Comments
  • Problems trying to convert the sg2 model

    I am currently trying to run the program and I need to convert the ffhq.pkl model to a .pt one.

    When I enter python stylegan_nada\convert_weight.py --repo stylegan_ada --gen models/ffhq.pkl it says:

    Traceback (most recent call last):
      File "stylegan_nada\convert_weight.py", line 11, in <module>
        from ZSSGAN.model.sg2_model import Generator, Discriminator
      File "C:\Users\msk4x\Documents\Projekte\stylegan\stylegan_nada\ZSSGAN\sg2_model.py", line 11, in <module>
        from op import FusedLeakyReLU, fused_leaky_relu, upfirdn2d, conv2d_gradfix

    How can I fix this?

    opened by xXLeoXxOne 29
  • How can we re-train the frozen generator on a custom dataset?

    Hi!

    Thanks for putting out the great work! I am interested in training the frozen generator on a custom dataset. Could you please guide me, or kindly share the training code? Would sincerely appreciate any help.

    Thanks!

    opened by romesa-khan 21
  • Docker image old version of code

    Am I right that the docker container has an old version of the code? I tried to pass images to train the model, but the result is different; the model does not use the images for training.

    opened by MAGLeb 12
  • Building Inference Model

    Hello, I'm here to build an inference model to test several of the things exhibited under Additional examples in your README, such as turning photos into a cubism painting style,

    but it is a bit challenging to find proper pre-trained models (if you've got one) for it.

    If you do happen to have them, would you be able to share the link for it?

    thanks in advance :)

    opened by Youngwoo-git 12
  • StyleGan3 Port?

    I was not able to find a StyleGan3 version of this project so I gave it a shot but got stuck because the StyleGan 2 and 3 models are apparently quite different.

    For instance, the forward function for StyleGan 3 takes two tensors 'z' and 'c'

     def forward(self, z, c, truncation_cutoff=None, update_emas=False):
    

    While the StyleGan 2 forward function only takes one tensor 'styles':

    def forward(
        self,
        styles,
        return_latents=False,
        inject_index=None,
        truncation=1,
        truncation_latent=None,
        input_is_latent=False,
        input_is_s_code=False,
        noise=None,
        randomize_noise=True,
    ):
    

    Any suggestions? This repo seems quite powerful and it would be nice if it could lose the old TF support.

    opened by 3DTOPO 9
  • RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED

    Hi! When I tried to start training, I ran into this problem:


    Traceback (most recent call last):
      File "train.py", line 147, in <module>
        train(args)
      File "train.py", line 86, in train
        [sampled_src, sampled_dst], loss = net(sample_z)
      File "/root/miniconda3/envs/myconda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/mnt/pycharm_project_1/ZSSGAN/model/ZSSGAN.py", line 278, in forward
        clip_loss = torch.sum(torch.stack([self.clip_model_weights[model_name] * self.clip_loss_models[model_name](frozen_img, self.source_class, trainable_img, self.target_class) for model_name in self.clip_model_weights.keys()]))
      File "/mnt/pycharm_project_1/ZSSGAN/model/ZSSGAN.py", line 278, in <listcomp>
        clip_loss = torch.sum(torch.stack([self.clip_model_weights[model_name] * self.clip_loss_models[model_name](frozen_img, self.source_class, trainable_img, self.target_class) for model_name in self.clip_model_weights.keys()]))
      File "/root/miniconda3/envs/myconda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/mnt/pycharm_project_1/ZSSGAN/criteria/clip_loss.py", line 294, in forward
        clip_loss += self.lambda_direction * self.clip_directional_loss(src_img, source_class, target_img, target_class)
      File "/mnt/pycharm_project_1/ZSSGAN/criteria/clip_loss.py", line 175, in clip_directional_loss
        self.target_direction = self.compute_text_direction(source_class, target_class)
      File "/mnt/pycharm_project_1/ZSSGAN/criteria/clip_loss.py", line 113, in compute_text_direction
        source_features = self.get_text_features(source_class)
      File "/mnt/pycharm_project_1/ZSSGAN/criteria/clip_loss.py", line 97, in get_text_features
        text_features = self.encode_text(tokens).detach()
      File "/mnt/pycharm_project_1/ZSSGAN/criteria/clip_loss.py", line 73, in encode_text
        return self.model.encode_text(tokens)
      File "/root/miniconda3/envs/myconda/lib/python3.7/site-packages/clip/model.py", line 344, in encode_text
        x = self.transformer(x)
      File "/root/miniconda3/envs/myconda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/root/miniconda3/envs/myconda/lib/python3.7/site-packages/clip/model.py", line 199, in forward
        return self.resblocks(x)
      File "/root/miniconda3/envs/myconda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/root/miniconda3/envs/myconda/lib/python3.7/site-packages/torch/nn/modules/container.py", line 119, in forward
        input = module(input)
      File "/root/miniconda3/envs/myconda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/root/miniconda3/envs/myconda/lib/python3.7/site-packages/clip/model.py", line 186, in forward
        x = x + self.attention(self.ln_1(x))
      File "/root/miniconda3/envs/myconda/lib/python3.7/site-packages/clip/model.py", line 183, in attention
        return self.attn(x, x, x, need_weights=False, attn_mask=self.attn_mask)[0]
      File "/root/miniconda3/envs/myconda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/root/miniconda3/envs/myconda/lib/python3.7/site-packages/torch/nn/modules/activation.py", line 987, in forward
        attn_mask=attn_mask)
      File "/root/miniconda3/envs/myconda/lib/python3.7/site-packages/torch/nn/functional.py", line 4790, in multi_head_attention_forward
        attn_output_weights = torch.bmm(q, k.transpose(1, 2))
    RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling cublasGemmStridedBatchedExFix( handle, opa, opb, m, n, k, (void*)(&falpha), a, CUDA_R_16F, lda, stridea, b, CUDA_R_16F, ldb, strideb, (void*)(&fbeta), c, CUDA_R_16F, ldc, stridec, num_batches, CUDA_R_32F, CUBLAS_GEMM_DEFAULT_TENSOR_OP)


    My environment is: Python 3.7, CUDA 11.1, cuDNN 8.0.5, PyTorch 1.8.1, Ubuntu 18.04.

    Do you have any ideas about this? Thanks for your help!

    opened by wileewang 9
  • Getting an error 'tensor is not a torch image.' when trying to run train.py

    Initializing networks...
      0%|                                                                                                             | 0/301 [00:00<?, ?it/s]
    Traceback (most recent call last):
      File "train.py", line 147, in <module>
        train(args)
      File "train.py", line 86, in train
        [sampled_src, sampled_dst], loss = net(sample_z)
      File "/home/ubuntu/anaconda3/envs/python3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/home/ubuntu/efs/saeid/facelab/StyleGAN-nada/ZSSGAN/model/ZSSGAN.py", line 260, in forward
        train_layers = self.determine_opt_layers()
      File "/home/ubuntu/efs/saeid/facelab/StyleGAN-nada/ZSSGAN/model/ZSSGAN.py", line 216, in determine_opt_layers
        w_loss = [self.clip_model_weights[model_name] * self.clip_loss_models[model_name].global_clip_loss(generated_from_w, self.target_class) for model_name in self.clip_model_weights.keys()]
      File "/home/ubuntu/efs/saeid/facelab/StyleGAN-nada/ZSSGAN/model/ZSSGAN.py", line 216, in <listcomp>
        w_loss = [self.clip_model_weights[model_name] * self.clip_loss_models[model_name].global_clip_loss(generated_from_w, self.target_class) for model_name in self.clip_model_weights.keys()]
      File "/home/ubuntu/efs/saeid/facelab/StyleGAN-nada/ZSSGAN/criteria/clip_loss.py", line 190, in global_clip_loss
        image  = self.preprocess(img)
      File "/home/ubuntu/anaconda3/envs/python3/lib/python3.6/site-packages/torchvision/transforms/transforms.py", line 60, in __call__
        img = t(img)
      File "/home/ubuntu/anaconda3/envs/python3/lib/python3.6/site-packages/torchvision/transforms/transforms.py", line 163, in __call__
        return F.normalize(tensor, self.mean, self.std, self.inplace)
      File "/home/ubuntu/anaconda3/envs/python3/lib/python3.6/site-packages/torchvision/transforms/functional.py", line 201, in normalize
        raise TypeError('tensor is not a torch image.')
    TypeError: tensor is not a torch image.
    
    opened by samotiian 9
  • About the bug when running Demo

    Hi Rinon,

    I'm so interested in the work and tried to run the demo on the project page by inputting my own picture, but I got an error like this:

    "local variable 'shape' referenced before assignment"

    Do you know how to fix it?

    Btw, what is the source style of the demo when I specify an input image? Is it extracted from the input image? I'm a bit confused about this.

    Thanks a lot for your time!

    opened by LaLaLailalai 7
  • how to save the pkl file from colab?

    I'm using this colab repo. The new pkl file should end up in the checkpoints folder, right? Or do I have to add some particular lines of code? https://github.com/rinongal/StyleGAN-nada/blob/StyleGAN3-NADA/stylegan3_nada.ipynb

    Edit: I found the code, but it doesn't work in the normal StyleGAN, is that right? And if so, how do I use it after generating? I also found out how to get the .pkl files with the save_interval, but those don't work with the normal StyleGAN either:

    model_name = "network-snapshot-011120.pt"
    torch.save(
        {
            "g_ema": net.generator_trainable.generator.state_dict(),
            "g_optim": g_optim.state_dict(),
        },
        f"{ckpt_dir}/{model_name}",
    )
    !ls /content/output/checkpoint

    Is there any way to convert the file into a normal StyleGAN model again? This repo states it can convert a stylegan2-nada.pt to stylegan2.pkl, so maybe this is possible for StyleGAN3? https://github.com/eps696/stylegan2ada All my pictures are slightly tilted to the left, and I normally use the visualizer to fix that, but it doesn't work with these files :(

    opened by nicolai256 7
  • Black images on prediction

    The net returns only NaN for me.

    Maybe it is connected with converting the sg2 model? Also, it is really hard for me to set up the model locally; I have tried a lot of different ways.

    opened by MAGLeb 7
  • Small models for 11GB GPUs

    Hi. Thanks for open-sourcing this amazing project. I am trying to train the network, but I get an OOM problem as I don't have a 16GB GPU. Could you please let me know which small models I can try on an 11GB GPU? Thanks so much!

    opened by justanhduc 7
  • When choose other than 'ffhq' for source model type

    Hello, thank you for your insightful work. I'm currently trying your code via colab; an error occurs when I choose anything other than 'ffhq' for the source model type. Is there any way to try the horse model?

    opened by cloudyrider 0
  • Cross-domain image interpolation

    Hi, thanks for your great work! I'm trying to re-implement the result illustrated in Figure 19 in the supplementary materials. I tried generate_videos.py but the results are not what I want. Could you please tell me how to generate smooth interpolations between models of different domains? Thanks!

    opened by Dancingmader 3
  • Upgrade to Cog version 0.1

    The new version of Cog improves the Python API, along with several other changes. In particular, pydantic is now used for Predictor, and the previous version will be deprecated.

    This PR upgrades the Replicate demo and API to Cog version >= 0.1. I have already pushed this to Replicate, so you don't need to do anything for the demo to keep working :) https://replicate.com/rinongal/stylegan-nada

    opened by chenxwh 2
  • Nvidia error when running docker-compose up

    Hi, I am working with your repo and I just noticed an error when I run docker-compose up:

    Status: Downloaded newer image for rinong/sg_nada:v1.0
    Creating stylegan-nada_nada_1 ... error

    ERROR: for stylegan-nada_nada_1 Cannot create container for service nada: Unknown runtime specified nvidia

    ERROR: for nada Cannot create container for service nada: Unknown runtime specified nvidia
    ERROR: Encountered errors while bringing up the project.

    Do you have any suggestion on how to fix this? Thanks!

    opened by albusdemens 1
  • clip_model_weights and auto_layer_k

    Hello,

    In your paper, under Appendix I, Table 3, you list different hyperparameter combinations. For the ViT-B/16 CLIP model you vary its weight between 1.0 and 0.0. Does a weight of 0.0 mean turning the adaptive layer selection off? If so, wouldn't different values for auto_layer_k be useless when using [1.0, 0.0] for clip_model_weights? My thinking is: if you weight the global loss with 0, the w codes will remain untouched. This means that you cannot rank the importance of the corresponding layers and thus cannot select any layers at all.

    In the same vein, does it make sense to use values other than 1.0 or 0.0 for the clip_model_weights? I would say no, because it would effectively just be another way to influence the learning rate. Or am I missing something?

    Thank you!

    opened by lebeli 3
  • Control eye position

    Hello, thank you for this great work! Is there a way to control the position of the pupils inside the eyes / gaze? Maybe someone has been able to train and can share a latent that does this?

    opened by ganganstyle 2