Bayesian Generative Adversarial Networks in Tensorflow

Overview

This repository contains the Tensorflow implementation of the Bayesian GAN by Yunus Saatci and Andrew Gordon Wilson. The paper appears at NIPS 2017.

Please cite our paper if you find this code useful in your research. The bibliographic information for the paper is

@inproceedings{saatciwilson,
  title={Bayesian gan},
  author={Saatci, Yunus and Wilson, Andrew G},
  booktitle={Advances in neural information processing systems},
  pages={3622--3631},
  year={2017}
}

Contents

  1. Introduction
  2. Dependencies
  3. Training options
  4. Usage
    1. Installation
    2. Synthetic Data
    3. Examples: MNIST, CIFAR10, CelebA, SVHN
    4. Custom data

Introduction

In the Bayesian GAN we propose conditional posteriors for the generator and discriminator weights, and marginalize these posteriors through stochastic gradient Hamiltonian Monte Carlo. Key properties of the Bayesian approach to GANs include (1) accurate predictions on semi-supervised learning problems; (2) minimal intervention for good performance; (3) a probabilistic formulation for inference in response to adversarial feedback; (4) avoidance of mode collapse; and (5) a representation of multiple complementary generative and discriminative models for data, forming a probabilistic ensemble.
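
For intuition, stochastic gradient HMC is essentially SGD with momentum plus properly scaled Gaussian noise. Below is a minimal, framework-agnostic sketch of one sampler step, our own illustration following Chen et al. (2014) rather than the repository's implementation; eta and alpha play the roles of learning rate and friction:

import numpy as np

def sghmc_step(theta, v, stochastic_grad, eta=1e-4, alpha=0.01):
    # One SGHMC update: momentum SGD plus N(0, 2*alpha*eta) injected noise.
    # theta: current weight sample; v: momentum buffer;
    # stochastic_grad: minibatch gradient of the negative log posterior.
    noise = np.sqrt(2.0 * alpha * eta) * np.random.randn(*theta.shape)
    v = (1.0 - alpha) * v - eta * stochastic_grad(theta) + noise
    theta = theta + v
    return theta, v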

We illustrate a multimodal posterior over the parameters of the generator. Each setting of these parameters corresponds to a different generative hypothesis for the data. We show here samples generated for two different settings of this weight vector, corresponding to different writing styles. The Bayesian GAN retains this whole distribution over parameters. By contrast, a standard GAN represents this whole distribution with a point estimate (analogous to a single maximum likelihood solution), missing potentially compelling explanations for the data.
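
To make "retaining the whole distribution" concrete, here is a toy sketch (entirely illustrative; the linear "generator" and all names here are made up, the real model is a DCGAN) of how several retained weight samples each produce their own data, so the model's samples are the union over the ensemble:

import numpy as np

def toy_generator(z, w):
    # Stand-in linear "generator" purely for illustration.
    return z.dot(w)

rng = np.random.RandomState(0)
weight_samples = [rng.randn(10, 100) for _ in range(4)]  # stand-ins for posterior samples of generator weights
z = rng.randn(64, 10)
# A Bayesian GAN keeps all of these hypotheses; an ML GAN keeps one point estimate.
batches = [toy_generator(z, w) for w in weight_samples]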

Dependencies

This code has the following dependencies (the version numbers are important):

  • python 2.7
  • tensorflow==1.0.0

To install tensorflow 1.0.0 on linux please follow instructions at https://www.tensorflow.org/versions/r1.0/install/.

  • scikit-learn==0.17.1

You can install scikit-learn 0.17.1 with the following command

pip install scikit-learn==0.17.1

Alternatively, you can create a conda environment and set it up using the provided environment.yml file, as follows:

conda env create -f environment.yml -n bgan

then load the environment using

source activate bgan

Usage

Installation

  1. Install the required dependencies
  2. Clone this repository

Synthetic Data

To run the synthetic experiment from the paper you can use the bgan_synth script. For example, the following command will train the Bayesian GAN (with D=100 and d=10) for 5000 iterations and store the results in <results_path>.

./bgan_synth.py --x_dim 100 --z_dim 10 --numz 10 --out <results_path>

To run the ML GAN for the same data run

./bgan_synth.py --x_dim 100 --z_dim 10 --numz 1 --out <results_path>

bgan_synth accepts the --save_weights, --out_dir, --z_dim, --numz, --wasserstein, --train_iter and --x_dim parameters. x_dim controls the dimensionality of the observed data (x in the paper). For a description of the other parameters, please see Training options.

Once you run the above two commands you will see the output of every 100th iteration in <results_path>. So, for example, the Bayesian GAN's output at the 900th iteration will look like:

In contrast, the output of the standard GAN (corresponding to numz=1, which forces ML estimation) will look like:

clearly indicating the standard GAN's tendency toward mode collapse, which, for this synthetic example, the Bayesian GAN avoids entirely.

To explore the synthetic experiment further, and to generate the Jensen-Shannon divergence plots, you can check out the notebook synth.ipynb.
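
If you want a quick sense of how such a divergence can be estimated from samples, here is a simple histogram-based estimator for one-dimensional marginals. This is our own sketch for intuition, not the notebook's implementation; scipy is available since scikit-learn depends on it:

import numpy as np
from scipy.stats import entropy  # entropy(p, q) computes KL(p || q)

def js_divergence(samples_p, samples_q, bins=50):
    # Histogram-based Jensen-Shannon divergence between two 1-D sample sets.
    lo = min(samples_p.min(), samples_q.min())
    hi = max(samples_p.max(), samples_q.max())
    p, _ = np.histogram(samples_p, bins=bins, range=(lo, hi), density=True)
    q, _ = np.histogram(samples_q, bins=bins, range=(lo, hi), density=True)
    p = p / p.sum() + 1e-10   # normalize and avoid log(0)
    q = q / q.sum() + 1e-10
    m = 0.5 * (p + q)
    return 0.5 * entropy(p, m) + 0.5 * entropy(q, m)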

Unsupervised and Semi-Supervised Learning on benchmark datasets

MNIST, CIFAR10, CelebA, SVHN

The bayesian_gan_hmc script allows you to train the model on standard and custom datasets. Below we describe the usage of this script.

Data preparation

To reproduce the experiments on MNIST, CIFAR10, CelebA and SVHN datasets you need to prepare the data and use a correct --data_path.

  • for MNIST you don't need to prepare the data and can provide any --data_path;
  • for CIFAR10 please download and extract the python version of the data from https://www.cs.toronto.edu/~kriz/cifar.html; then use the path to the directory containing cifar-10-batches-py as --data_path (a minimal batch-loading sanity check follows this list);
  • for SVHN please download train_32x32.mat and test_32x32.mat files from http://ufldl.stanford.edu/housenumbers/ and use the directory containing these files as your --data_path;
  • for CelebA you will need to have openCV installed. You can find the download links for the data at http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html. You will need to create a celebA folder with Anno and img_align_celeba subfolders. Anno must contain list_attr_celeba.txt and img_align_celeba must contain the .jpg files. You will also need to crop the images by running the datasets/crop_faces.py script with --data_path <path>, where <path> is the path to the folder containing celebA. When training the model, you will need to use the same <path> for --data_path.
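
As a quick sanity check that your CIFAR10 --data_path is laid out correctly, a batch file can be loaded as below. This is a sketch assuming the Python 2.7 dependency listed above; the repository has its own loading code:

import cPickle  # Python 2.7, per the dependencies above
import numpy as np

def load_cifar_batch(path):
    # Each cifar-10-batches-py file is a pickled dict with 'data' and 'labels'.
    with open(path, 'rb') as f:
        batch = cPickle.load(f)
    # 'data' is (N, 3072) uint8, channel-major; reshape to (N, 32, 32, 3).
    images = batch['data'].reshape(-1, 3, 32, 32).transpose(0, 2, 3, 1)
    return images, np.array(batch['labels'])

# e.g. images, labels = load_cifar_batch('cifar-10-batches-py/data_batch_1')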

Unsupervised training

You can run unsupervised learning by running the bayesian_gan_hmc script without the --semi parameter. For example, use

./run_bgan.py --data_path <data_path> --dataset svhn --numz 10 --num_mcmc 2 --out_dir <results_path> --train_iter 75000 --save_samples --n_save 100

to train the model on the SVHN dataset. This command will run the method for 75000 iterations and save samples every 100 iterations. Here <results_path> must point to the directory where the results will be stored. See the data preparation section for an explanation of how to set <data_path>, and the training options section for a description of the other training options.


Semi-supervised training

To run the semi-supervised experiments you can use the run_bgan_semi.py script, which offers many options including the following:

  • --out_dir: path to the folder, where the outputs will be stored
  • --n_save: samples and weights are saved every n_save iterations; default 100
  • --z_dim: dimensionality of the z vector for the generator; default 100
  • --data_path: path to the data; see data preparation for a detailed discussion; this parameter is required
  • --dataset: can be mnist, cifar, svhn or celeb; default mnist
  • --batch_size: batch size for training; default 64
  • --prior_std: std of the prior distribution over the weights; default 1
  • --num_gen: same as J in the paper; number of samples of z used to integrate z out for the generators; default 1
  • --num_disc: same as J_D in the paper; number of samples of z used to integrate z out for the discriminators; default 1
  • --num_mcmc: same as M in the paper; number of MCMC NN weight samples per z; default 1 (see the note after this list on how these options combine)
  • --lr: learning rate used by the Adam optimizer; default 0.0002
  • --optimizer: optimization method to be used: adam (tf.train.AdamOptimizer) or sgd (tf.train.MomentumOptimizer); default adam
  • --N: number of labeled samples for semi-supervised learning
  • --train_iter: number of training iterations; default 50000
  • --save_samples: save generated samples during training
  • --save_weights: save weights during training
  • --random_seed: random seed; note that setting this seed does not lead to 100% reproducible results if GPU is used

You can also run WGANs with --wasserstein or train an ensemble of DCGANs with --ml_ensemble <num_dcgans>. In particular, you can train a DCGAN with --ml.

You can train the model in the semi-supervised setting by running bayesian_gan_hmc with the --semi option. Use the --N parameter to set the number of labeled examples to train on. For example, use

./run_bgan_semi.py --data_path <data_path> --dataset cifar --num_gen 10 --num_mcmc 2 --out_dir <results_path> --train_iter 100000 --N 4000 --lr 0.0005

to train the model on the CIFAR10 dataset with 4000 labeled examples. This command will train the model for 100000 iterations and store the outputs in the <results_path> folder.

To train the model on MNIST with 100 labeled examples you can use the following command.

./bayesian_gan_hmc.py --data_path <data_path> --dataset mnist --num_gen 10 --num_mcmc 2 --out_dir <results_path> --train_iter 100000 --N 100 --semi --lr 0.0005

Custom data

To train the model on a custom dataset you need to define a class with a specific interface. Suppose we want to train the model on the digits dataset. This dataset consists of 8x8 images of digits. Let's suppose that the data is stored in the x_tr.npy, y_tr.npy, x_te.npy and y_te.npy files. We will assume that x_tr.npy and x_te.npy have shapes of the form (?, 8, 8, 1). We can then define the class corresponding to this dataset in bgan_util.py as follows.

class Digits:
    """Wrapper for the 8x8 digits data stored in x_tr.npy, y_tr.npy, x_te.npy, y_te.npy.

    Lives in bgan_util.py, so np (numpy) and one_hot_encoded are already
    available there.
    """

    def __init__(self):
        self.imgs = np.load('x_tr.npy')
        self.test_imgs = np.load('x_te.npy')
        self.labels = np.load('y_tr.npy')
        self.test_labels = np.load('y_te.npy')
        self.labels = one_hot_encoded(self.labels, 10)
        self.test_labels = one_hot_encoded(self.test_labels, 10)
        self.x_dim = [8, 8, 1]
        self.num_classes = 10

    @staticmethod
    def get_batch(batch_size, x, y):
        """Returns a random batch (sampled without replacement) from the given arrays."""
        idx = np.random.choice(range(x.shape[0]), size=(batch_size,), replace=False)
        return x[idx], y[idx]

    def next_batch(self, batch_size, class_id=None):
        return self.get_batch(batch_size, self.imgs, self.labels)

    def test_batch(self, batch_size):
        return self.get_batch(batch_size, self.test_imgs, self.test_labels)

The class must have next_batch and test_batch methods, and must have the imgs, labels, test_imgs, test_labels, x_dim and num_classes fields.
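
If you want to try this end to end, the four .npy files can be produced from scikit-learn's bundled digits data. This is a sketch under our own assumptions; with the pinned scikit-learn 0.17.1, train_test_split lives in sklearn.cross_validation:

import numpy as np
from sklearn.datasets import load_digits
from sklearn.cross_validation import train_test_split  # sklearn.model_selection in newer versions

digits = load_digits()  # 1797 8x8 grayscale digit images
x = digits.images.reshape(-1, 8, 8, 1).astype(np.float32) / 16.0  # pixel values run 0..16
x_tr, x_te, y_tr, y_te = train_test_split(x, digits.target, test_size=0.2, random_state=0)
np.save('x_tr.npy', x_tr)
np.save('x_te.npy', x_te)
np.save('y_tr.npy', y_tr)
np.save('y_te.npy', y_te)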

Now we can import the Digits class in bayesian_gan_hmc.py

from bgan_util import Digits

and add the following lines to the processing of the --dataset parameter.

if args.dataset == "digits":
    dataset = Digits()

After this preparation is done, we can train the model with, for example,

./run_bgan_semi.py --data_path <data_path> --dataset digits --num_gen 10 --num_mcmc 2 --out_dir <results_path> --train_iter 100000 --save_samples

Acknowledgements

We thank Pavel Izmailov and Ben Athiwaratkun for help with stress testing this code and creating the tutorial.

Comments
  •  Type error in semisupervised code


    I am struggling to run your semi-supervised code on CIFAR. I have followed the README and set up tensorflow in conda accordingly, with the correct versions. When I run (as close as possible to the README):

    ./run_bgan_semi.py --data_path ./datasets/ --dataset cifar --num_gen 10 --num_mcmc 2 --out_dir cifar_out --train_iter 100000 --N 4000 --lr 0.0005
    

    I get:

    Iter 100
    d_losses: [None]
    disc_info: [None, None, 7.3618059, 7.3462958]
    Traceback (most recent call last):
      File "./run_bgan_semi.py", line 419, in <module>
        b_dcgan(dataset, args)
      File "./run_bgan_semi.py", line 214, in b_dcgan
        print "Disc losses = %s" % (", ".join(["%.2f" % dl for dl in d_losses]))
    TypeError: float argument required, not NoneType
    

    This only happens at the 100th iteration (and I have printed the respective variables to show that there are unexpected Nones in there), so I guess the None values in d_losses are not a problem before that. Any ideas? Thanks for any help :).

    opened by whilo 4
  • ValueError: Variable discriminator/d_bn1/moving_mean already exists, disallowed. Did you mean to set reuse=True or reuse=tf.AUTO_REUSE in VarScope?


    I am using tf 1.3; it shows the following. How can I solve this problem?

    Traceback (most recent call last):
      File "I:/python3/bayesgan-master/run_bgan_semi.py", line 413, in <module>
        b_dcgan(dataset, args)
      File "I:/python3/bayesgan-master/run_bgan_semi.py", line 141, in b_dcgan
        num_classes=dataset.num_classes)
      File "I:\python3\bayesgan-master\bgan_semi.py", line 78, in __init__
        self.build_bgan_graph()
      File "I:\python3\bayesgan-master\bgan_semi.py", line 274, in build_bgan_graph
        self.K, disc_params)
      File "I:\python3\bayesgan-master\bgan_semi.py", line 401, in discriminator
        w=disc_params["d_h%i_W" % layer], biases=disc_params["d_h%i_b" % layer]), train=train))
      File "I:\python3\bayesgan-master\dcgan_ops.py", line 46, in __call__
        scope=self.name)
      File "D:\ProgramData\tensorflow\lib\site-packages\tensorflow\contrib\framework\python\ops\arg_scope.py", line 181, in func_with_args
        return func(*args, **current_args)
      File "D:\ProgramData\tensorflow\lib\site-packages\tensorflow\contrib\layers\python\layers\layers.py", line 592, in batch_norm
        scope=scope)
      File "D:\ProgramData\tensorflow\lib\site-packages\tensorflow\contrib\layers\python\layers\layers.py", line 373, in _fused_batch_norm
        collections=moving_mean_collections)
      File "D:\ProgramData\tensorflow\lib\site-packages\tensorflow\contrib\framework\python\ops\arg_scope.py", line 181, in func_with_args
        return func(*args, **current_args)
      File "D:\ProgramData\tensorflow\lib\site-packages\tensorflow\contrib\framework\python\ops\variables.py", line 262, in model_variable
        use_resource=use_resource)
      File "D:\ProgramData\tensorflow\lib\site-packages\tensorflow\contrib\framework\python\ops\arg_scope.py", line 181, in func_with_args
        return func(*args, **current_args)
      File "D:\ProgramData\tensorflow\lib\site-packages\tensorflow\contrib\framework\python\ops\variables.py", line 217, in variable
        use_resource=use_resource)
      File "D:\ProgramData\tensorflow\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 1203, in get_variable
        constraint=constraint)
      File "D:\ProgramData\tensorflow\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 1092, in get_variable
        constraint=constraint)
      File "D:\ProgramData\tensorflow\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 425, in get_variable
        constraint=constraint)
      File "D:\ProgramData\tensorflow\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 394, in _true_getter
        use_resource=use_resource, constraint=constraint)
      File "D:\ProgramData\tensorflow\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 742, in _get_single_variable
        name, "".join(traceback.format_list(tb))))
    ValueError: Variable discriminator/d_bn1/moving_mean already exists, disallowed. Did you mean to set reuse=True or reuse=tf.AUTO_REUSE in VarScope?

    Originally defined at:

      File "D:\ProgramData\tensorflow\lib\site-packages\tensorflow\python\framework\ops.py", line 1470, in __init__
        self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access
      File "D:\ProgramData\tensorflow\lib\site-packages\tensorflow\python\framework\ops.py", line 2956, in create_op
        op_def=op_def)
      File "D:\ProgramData\tensorflow\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
        op_def=op_def)

    opened by howardgriffin 4
  • Multiple discriminator MC samples


    Dear authors, great work on the BayesGAN paper and code, congratulations! I had a question about your code - is there currently support for multiple discriminator MC samples (J_d > 1)? If not, is there any reason why it's left out? Thanks!

    opened by ysaibhargav 3
  • Error when run bayesian_gan_hmc.py under tensorflow 1.3.0


    Hi Andrew, I've just run bayesian_gan_hmc.py under tensorflow 1.3.0. Here is the error:

    Traceback (most recent call last):
      File "/home/jqh/jiangqiuhua/eclipse/plugins/org.python.pydev_6.2.0.201711281614/pysrc/pydevd.py", line 1621, in <module>
        main()
      File "/home/jqh/jiangqiuhua/eclipse/plugins/org.python.pydev_6.2.0.201711281614/pysrc/pydevd.py", line 1615, in main
        globals = debugger.run(setup['file'], None, None, is_module)
      File "/home/jqh/jiangqiuhua/eclipse/plugins/org.python.pydev_6.2.0.201711281614/pysrc/pydevd.py", line 1022, in run
        pydev_imports.execfile(file, globals, locals) # execute the script
      File "/home/jqh/jiangqiuhua/Tensorflow/bayesgan-master/bayesian_gan_hmc.py", line 442, in <module>
        b_dcgan(dataset, args)
      File "/home/jqh/jiangqiuhua/Tensorflow/bayesgan-master/bayesian_gan_hmc.py", line 141, in b_dcgan
        num_classes=dataset.num_classes if args.semi_supervised else 1)
      File "/home/jqh/jiangqiuhua/Tensorflow/bayesgan-master/bgan_models.py", line 357, in __init__
        self.build_bgan_graph()
      File "/home/jqh/jiangqiuhua/Tensorflow/bayesgan-master/bgan_models.py", line 144, in build_bgan_graph
        self.generation["generators"].append(self.generator(self.z, gen_params))
      File "/home/jqh/jiangqiuhua/Tensorflow/bayesgan-master/bgan_models.py", line 415, in generator
        h0 = tf.nn.relu(self.g_bn0(self.h0, reuse=reuse))
    ValueError: Variable generator/g_bn0/moving_mean already exists, disallowed. Did you mean to set reuse=True in VarScope? Originally defined at: ...

    Can I run this under tensorflow 1.3.0?

    opened by githubjqh 3
  • Mode collapse is a serious problem in Bayesian GAN


    Dear authors, As can be seen from the generated samples in figures 2, 6, 7 and 8, mode collapse is a serious problem in Bayesian GAN. Every generator has mode collapse, and different generators collapse to the same modes. In figure 6, for example, generators 1 and 4 both have mode collapse, and they collapse to the same mode (row 2, col 3 of generator 1 and row 3, col 3 of generator 4).

    If we consider the mode-count method based on the birthday paradox (Arora et al., 2017), then when mode collapse happens with high probability, the number of modes in the model distribution is about the same as the batch size. Mode collapse happens with a batch size of only 16, which implies that each generator captures only tens of modes. The total capacity of 10 generators is therefore much smaller than that of a single generator trained with the normal method. This is in contrast to your claim that the Bayesian GAN explores a broader region of the target distribution. In my opinion, the current setting for Bayesian GAN makes mode collapse worse.

    opened by htt210 2
  • 'BDCGAN' object has no attribute 'd_optim_semi_adam'


    In attempting to run the MNIST example (with the most basic command, below), I ran into the following error.

    Traceback (most recent call last):
      File "bayesgan/bayesian_gan_hmc.py", line 431, in <module>
        b_dcgan(dataset, args)
      File "bayesgan/bayesian_gan_hmc.py", line 158, in b_dcgan
        optimizer_dict = {"semi_d": dcgan.d_optim_semi_adam,
    AttributeError: 'BDCGAN' object has no attribute 'd_optim_semi_adam'
    

    My environment is built from the environment.yml and my command to run the code is:

    bayesgan/bayesian_gan_hmc.py --data_path /home/jlandesman --dataset mnist --out_dir results/mnist --save_samples --n_save 100

    The synth data appears to work well.

    Any thoughts?

    Many thanks for your help and congratulations on the paper.

    opened by jlandesman 2
  • Add WGANGP to comparison


    Hi Andrew, I've just done some experiments with WGAN with Gradient Penalty (Improved Training of Wasserstein GANs, Gulrajani et al.) and found that it can converge to a reasonable solution on the synthetic dataset. Although WGANGP does not converge as fast as bayesgan, I think it would be nice if you could add WGANGP to the baselines in your experiments. Here is the output of my (very bad) implementation of WGANGP after 8000 iterations

    nx = 100
    nz = 10
    batchSize = 64
    Gconfig = [('Linear', (nz, 1000)), ('ReLU', ()), ('Linear', (1000, nx))]
    Dconfig = [('Linear', (nx, 1000)), ('ReLU', ()), ('Linear', (1000, 1))]
    optimizer = 'Adam'
    optimParams = {'lr': 1e-4, 'betas': (0.5, 0.9)}
    

    [image: iter_8000]

    opened by htt210 1
  • SGD + momentum + noise "=" SGHMC

    Hello, I was going through your paper and through the SGHMC paper, and I understand that:

    • SGD + noise equates to SGLD
    • SGD + momentum + noise equates to SGHMC

    However, I don't understand what Adam + noise, which is what you used in your code, equates to. In the paper you said that you were going to use SGHMC. Is it reasonable to assume that Adam + noise is also equivalent to SGHMC? If so, can you please explain why?

    Thanks

    opened by emiled16 0
  • d_losses is [None] while running ./run_bgan.py --data_path datasets --dataset mnist --num_mcmc 2 --out_dir ./results/ --train_iter 75000 --save_samples --n_save 100


    Hello, I want to run the unsupervised BGAN on the MNIST dataset using this command: ./run_bgan.py --data_path datasets --dataset mnist --num_mcmc 2 --out_dir ./results/ --train_iter 75000 --save_samples --n_save 100 But I got an error:

    
    Starting session
    Starting training loop
    Iter 100
    [None]
    Gen losses = 3.56, 5.86
    saving results and samples
    Traceback (most recent call last):
      File "./run_bgan.py", line 296, in <module>
        b_dcgan(dataset, args)
      File "./run_bgan.py", line 103, in b_dcgan
        results = {"disc_losses": map(float, d_losses),
    TypeError: float() argument must be a string or a number
    
    

    I printed d_losses and got [None]. Any help? Thank you

    opened by khorshidisamira 0
  • Missing bayesian_gan_hmc script?


    Hello, I might be missing something obvious, but I've looked at this every way possible and can't work out what I'm missing.

    In the guide you submitted, a script, which is quite central to most of the things we can do with your repository, seems to be missing. It's the bayesian_gan_hmc script.

    Do you know where I could find it? It would be of a great help to my problem.

    Thank you.

    opened by yanntrividic 0
  • The testing accuracy is not based on testing set, but rather manipulated result


    On the test set, basically what you are doing is first finding which images are classified as not fake, and then computing the accuracy only on those classified as real.

    So the accuracy can be really high when there is only a small number of images classified as real, which is kind of like cheating.

    opened by WayneDW 0
  • Question regarding train and test parameters


    If I have generated numpy matrices as my "real data", do I specify that as 'self.imgs = np.load('matrices.npy')'? I intend to run this in unsupervised mode, so do I need to supply the other parameters such as self.labels, self.test_imgs and self.test_labels? Sorry if this is a basic question, I am fairly new to GANs.

    opened by enochkan 1