Collection of generative models in TensorFlow

Overview

tensorflow-generative-model-collections

TensorFlow implementation of various GANs and VAEs.

Related Repositories

PyTorch version

A PyTorch version of this repository is available at https://github.com/znxlwm/pytorch-generative-model-collections

"Are GANs Created Equal? A Large-Scale Study" Paper

https://github.com/google/compare_gan is the code that was used in the paper.
It provides IS/FID metrics and rich experimental results for all GAN variants.

Generative Adversarial Networks (GANs)

Lists

Name      Paper Link
GAN       https://arxiv.org/abs/1406.2661
LSGAN     https://arxiv.org/abs/1611.04076
WGAN      https://arxiv.org/abs/1701.07875
WGAN_GP   https://arxiv.org/abs/1704.00028
DRAGAN    https://arxiv.org/abs/1705.07215
CGAN      https://arxiv.org/abs/1411.1784
infoGAN   https://arxiv.org/abs/1606.03657
ACGAN     https://arxiv.org/abs/1610.09585
EBGAN     https://arxiv.org/abs/1609.03126
BEGAN     https://arxiv.org/abs/1703.10717

Variants of GAN structure

Results for mnist

The network architecture of the generator and discriminator is exactly the same as in the infoGAN paper.
For a fair comparison of the core ideas in all GAN variants, the network architecture is kept identical across implementations, except for EBGAN and BEGAN. A small modification is made for EBGAN/BEGAN, since they adopt an auto-encoder structure for the discriminator, but the capacity of the discriminator is kept comparable.
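
For reference, here is a minimal sketch of this shared architecture, written with tf.layers for TF 1.x rather than the repository's own helpers in ops.py; layer sizes follow the infoGAN paper for 28x28 inputs, and the function names are illustrative:

import tensorflow as tf

def discriminator(x, is_training=True, reuse=False):
    # x: [batch, 28, 28, 1] -> real/fake probability and logit
    with tf.variable_scope("discriminator", reuse=reuse):
        net = tf.nn.leaky_relu(tf.layers.conv2d(x, 64, 4, strides=2, padding="same"))
        net = tf.layers.conv2d(net, 128, 4, strides=2, padding="same")
        net = tf.nn.leaky_relu(tf.layers.batch_normalization(net, training=is_training))
        net = tf.layers.dense(tf.layers.flatten(net), 1024)
        net = tf.nn.leaky_relu(tf.layers.batch_normalization(net, training=is_training))
        logit = tf.layers.dense(net, 1)
        return tf.nn.sigmoid(logit), logit

def generator(z, is_training=True, reuse=False):
    # z: [batch, z_dim] -> [batch, 28, 28, 1] image in [0, 1]
    with tf.variable_scope("generator", reuse=reuse):
        net = tf.layers.dense(z, 1024)
        net = tf.nn.relu(tf.layers.batch_normalization(net, training=is_training))
        net = tf.layers.dense(net, 128 * 7 * 7)
        net = tf.nn.relu(tf.layers.batch_normalization(net, training=is_training))
        net = tf.reshape(net, [-1, 7, 7, 128])
        net = tf.layers.conv2d_transpose(net, 64, 4, strides=2, padding="same")
        net = tf.nn.relu(tf.layers.batch_normalization(net, training=is_training))
        return tf.nn.sigmoid(tf.layers.conv2d_transpose(net, 1, 4, strides=2, padding="same"))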

The following results can be reproduced with the command:

python main.py --dataset mnist --gan_type <TYPE> --epoch 25 --batch_size 64

where <TYPE> is one of the GAN variants listed above (GAN, LSGAN, WGAN, WGAN_GP, DRAGAN, CGAN, infoGAN, ACGAN, EBGAN, BEGAN).

Random generation

All results are randomly sampled.

(Randomly sampled images for GAN, LSGAN, WGAN, WGAN_GP, DRAGAN, EBGAN, and BEGAN at epochs 2, 10, and 25.)

Conditional generation

Each row has the same noise vector and each column has the same label condition.

(Image grids for CGAN, ACGAN, and infoGAN at epochs 1, 10, and 25.)
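
A small sketch (with assumed placeholder names) of how such a grid can be assembled: every row reuses one noise vector, every column fixes one one-hot label; sess, G, z_ph, and y_ph stand in for the trained model's session and tensors:

import numpy as np

n_classes, z_dim = 10, 62
row_noise = np.random.uniform(-1, 1, size=(n_classes, z_dim))  # one z per row
z_batch = np.repeat(row_noise, n_classes, axis=0)              # each z repeated across a row
labels = np.tile(np.eye(n_classes), (n_classes, 1))            # column index = class label
# samples = sess.run(G, feed_dict={z_ph: z_batch, y_ph: labels})  # [100, 28, 28, 1]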

InfoGAN : Manipulating two continuous codes
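
As a rough sketch of this experiment (names hypothetical): the noise vector and the discrete code are held fixed while the two continuous codes sweep a regular grid; as reported in the infoGAN paper, these codes typically end up controlling properties such as stroke width and tilt on MNIST:

import numpy as np

steps = 10
c1, c2 = np.meshgrid(np.linspace(-1, 1, steps), np.linspace(-1, 1, steps))
cont_codes = np.stack([c1.ravel(), c2.ravel()], axis=1)  # [steps**2, 2]
# generator input per sample: concat(fixed_z, fixed_one_hot_label, cont_codes[i])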

Results for fashion-mnist

The comments on network architecture for mnist also apply here.
Fashion-mnist is a recently proposed dataset consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image associated with a label from 10 classes (T-shirt/top, Trouser, Pullover, Dress, Coat, Sandal, Shirt, Sneaker, Bag, Ankle boot).

The following results can be reproduced with the command:

python main.py --dataset fashion-mnist --gan_type <TYPE> --epoch 40 --batch_size 64

Random generation

All results are randomly sampled.

(Randomly sampled images for GAN, LSGAN, WGAN, WGAN_GP, DRAGAN, EBGAN, and BEGAN at epochs 1, 20, and 40.)

Conditional generation

Each row has the same noise vector and each column has the same label condition.

(Image grids for CGAN, ACGAN, and infoGAN at epochs 1, 20, and 40.)

Without hyper-parameter tuning from the mnist version, ACGAN and infoGAN do not work as well as CGAN.
ACGAN tends to fall into mode collapse.
infoGAN tends to ignore the noise vector, so style variation within the same class cannot be represented.

InfoGAN : Manipulating two continuous codes

Some results for celebA

(to be added)

Variational Auto-Encoders (VAEs)

Lists

Name   Paper Link
VAE    https://arxiv.org/abs/1312.6114
CVAE   Arxiv
DVAE   https://arxiv.org/abs/1511.06406 (to be added)
AAE    https://arxiv.org/abs/1511.05644 (to be added)

Variants of VAE structure

Results for mnist

The network architecture of the decoder (generator) and encoder (discriminator) is exactly the same as in the infoGAN paper. Only the number of output nodes in the encoder differs: 2 x z_dim for the VAE versus 1 for the GAN. A sketch of how those outputs are used follows.
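
A minimal sketch (variable names assumed) of what the 2 x z_dim encoder outputs are for: they are split into the mean and a positive standard deviation of q(z|x), from which z is drawn with the reparameterization trick:

import tensorflow as tf

def sample_z(enc_out, z_dim):
    # enc_out: [batch, 2 * z_dim] raw encoder outputs
    mu = enc_out[:, :z_dim]                            # mean of q(z|x)
    sigma = 1e-6 + tf.nn.softplus(enc_out[:, z_dim:])  # positive std-dev
    eps = tf.random_normal(tf.shape(mu))               # unit Gaussian noise
    return mu + sigma * eps                            # differentiable sample of z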

The following results can be reproduced with the command:

python main.py --dataset mnist --gan_type <TYPE> --epoch 25 --batch_size 64

where <TYPE> is VAE or CVAE.

Random generation

All results are randomly sampled.

(Randomly sampled images for VAE and GAN at epochs 1, 10, and 25.)

Results from GAN are also shown to compare images generated by VAE and GAN. The main difference (VAE generates smooth, blurry images, whereas GAN generates sharp images with artifacts) is clearly observed in the results.

Conditional generation

Each row has the same noise vector and each column has the same label condition.

(Image grids for CVAE and CGAN at epochs 1, 10, and 25.)

Results from CGAN are also shown to compare images generated by CVAE and CGAN.

Learned manifold

The following results can be reproduced with the command:

python main.py --dataset mnist --gan_type VAE --epoch 25 --batch_size 64 --dim_z 2

Note that the dimension of the noise vector z is 2.

(Learned manifold of the VAE at epochs 1, 10, and 25.)
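
A sketch of how such a manifold figure can be produced (matplotlib assumed; decode is a hypothetical stand-in for running the trained decoder): decode a regular grid over the 2-D latent space and tile the outputs into one canvas:

import numpy as np
import matplotlib.pyplot as plt

def decode(z):
    # Placeholder for the trained decoder, e.g.
    # sess.run(decoder_out, feed_dict={z_ph: z})[0, :, :, 0]
    return np.zeros((28, 28))

n = 20
grid = np.linspace(-2, 2, n)
canvas = np.zeros((28 * n, 28 * n))
for i, zy in enumerate(grid):
    for j, zx in enumerate(grid):
        canvas[i * 28:(i + 1) * 28, j * 28:(j + 1) * 28] = decode(np.array([[zx, zy]]))
plt.imshow(canvas, cmap="gray")
plt.axis("off")
plt.show()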

Results for fashion-mnist

The comments on network architecture for mnist also apply here.

The following results can be reproduced with the command:

python main.py --dataset fashion-mnist --gan_type <TYPE> --epoch 40 --batch_size 64

Random generation

All results are randomly sampled.

(Randomly sampled images for VAE and GAN at epochs 1, 20, and 40.)

Results from GAN are also shown to compare images generated by VAE and GAN.

Conditional generation

Each row has the same noise vector and each column has the same label condition.

(Image grids for CVAE and CGAN at epochs 1, 20, and 40.)

Results from CGAN are also shown to compare images generated by CVAE and CGAN.

Learned manifold

The following results can be reproduced with the command:

python main.py --dataset fashion-mnist --gan_type VAE --epoch 25 --batch_size 64 --dim_z 2

Note that the dimension of the noise vector z is 2.

(Learned manifold of the VAE at epochs 1, 10, and 25.)

Results for celebA

(to be added)

Folder structure

The following shows the basic folder structure.

├── main.py # gateway
├── data
│   ├── mnist # mnist data (not included in this repo)
│   |   ├── t10k-images-idx3-ubyte.gz
│   |   ├── t10k-labels-idx1-ubyte.gz
│   |   ├── train-images-idx3-ubyte.gz
│   |   └── train-labels-idx1-ubyte.gz
│   └── fashion-mnist # fashion-mnist data (not included in this repo)
│       ├── t10k-images-idx3-ubyte.gz
│       ├── t10k-labels-idx1-ubyte.gz
│       ├── train-images-idx3-ubyte.gz
│       └── train-labels-idx1-ubyte.gz
├── GAN.py # vanilla GAN
├── ops.py # layer operations
├── utils.py # utils
├── logs # log files for tensorboard to be saved here
└── checkpoint # model files to be saved here
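
The *.gz files above are raw IDX archives as distributed on the MNIST/Fashion-MNIST sites. A minimal sketch of parsing them (the repository's utils.py ships its own loader); the offsets of 16 and 8 bytes skip the IDX headers of the image and label files:

import gzip
import numpy as np

def load_images(path):
    with gzip.open(path, "rb") as f:
        data = np.frombuffer(f.read(), np.uint8, offset=16)  # skip IDX image header
    return data.reshape(-1, 28, 28, 1).astype(np.float32) / 255.0

def load_labels(path):
    with gzip.open(path, "rb") as f:
        return np.frombuffer(f.read(), np.uint8, offset=8)   # skip IDX label header

# x_train = load_images("data/mnist/train-images-idx3-ubyte.gz")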

Acknowledgements

This implementation is based on this repository and has been tested with TensorFlow 1.0+ on Windows 10 and Ubuntu 14.04.

Comments
  • Continuous latent loss possibly wrong in InfoGAN

    In InfoGAN.py line 146, the squared loss is being optimized as follows.

    cont_code_est = code_fake[:, self.len_discrete_code:]
    cont_code_tg = self.y[:, self.len_discrete_code:]
    q_cont_loss = tf.reduce_mean(tf.reduce_sum(tf.square(cont_code_tg - cont_code_est), axis=1))
    

    The code_fake vector is in [0,1] as it comes from a softmax non-linearity. The actual latent code being sent, however, is in [-1,1]. (See line 234).

    batch_codes = np.concatenate(
        (batch_labels, np.random.uniform(-1, 1, size=(self.batch_size, 2))), axis=1)
    

    Am I missing something?

    opened by abdulfatir 1
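
    One way the mismatch reported above could be reconciled (a sketch of a possible fix, not the repository's code; shapes are placeholders) is to give Q separate heads: softmax only for the discrete code, and an unbounded linear head for the continuous codes sampled from uniform(-1, 1):

    import tensorflow as tf

    len_discrete_code, len_continuous_code = 10, 2
    q_net = tf.placeholder(tf.float32, [None, 128])  # stand-in for the shared Q trunk
    cont_code_tg = tf.placeholder(tf.float32, [None, len_continuous_code])

    q_disc = tf.nn.softmax(tf.layers.dense(q_net, len_discrete_code))  # categorical head
    q_cont = tf.layers.dense(q_net, len_continuous_code)               # unbounded head
    q_cont_loss = tf.reduce_mean(
        tf.reduce_sum(tf.square(cont_code_tg - q_cont), axis=1))
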
  • Question about LSGAN

    Thanks for your great work! I noticed that the discriminator in LSGAN is computed as sigmoid(logits), but according to the original paper D(x) should be the logits directly. If a sigmoid is applied to D(x), the vanishing-gradient problem still exists. I'm not sure whether I misinterpreted the idea of the paper.

    opened by lanshuofeng 1
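
    For reference, a sketch of the least-squares losses computed on raw logits, with a=0, b=c=1 as in the LSGAN paper; the placeholders stand in for the discriminator's pre-sigmoid outputs:

    import tensorflow as tf

    D_real_logits = tf.placeholder(tf.float32, [None, 1])  # D(x), no sigmoid
    D_fake_logits = tf.placeholder(tf.float32, [None, 1])  # D(G(z)), no sigmoid

    d_loss = 0.5 * (tf.reduce_mean(tf.square(D_real_logits - 1.0)) +
                    tf.reduce_mean(tf.square(D_fake_logits)))
    g_loss = 0.5 * tf.reduce_mean(tf.square(D_fake_logits - 1.0))
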
  • I want to know how to use celebA's label to train CGAN!

    The mnist dataset has a label file, but a celebA label represents more than just one class of image; some labels cover several classes at once. How should celebA's labels be handled?

    opened by TwistedW 1
  • WGAN_GP - tensorflow

    Hey, I appreciate your work! You make my life better.

    I found (maybe) a small bug in your WGAN_GP code. When calculating gradient penalty, you write:

    D_inter, _, _ = self.discriminator(interpolates, is_training=True, reuse=True)
    gradients = tf.gradients(D_inter, [interpolates])[0]
    

    You use the sigmoid output of the Discriminator, not the logits.

    In the original implementation (https://github.com/igul222/improved_wgan_training/blob/master/gan_mnist.py), they write this:

    gradients = tf.gradients(Discriminator(interpolates), [interpolates])[0]
    

    Here, the authors only return the logits of the Discriminator. So they use the logits for this calculation.

    Did you do this on purpose?

    Greetings!

    opened by Naxter 1
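
    For comparison, a sketch of the penalty taken on the raw logits, as in the original implementation; the stand-in critic below only makes the snippet self-contained (in this repository the logit appears to be the second value returned by self.discriminator):

    import tensorflow as tf

    interpolates = tf.placeholder(tf.float32, [None, 28, 28, 1])
    D_inter_logits = tf.layers.dense(tf.layers.flatten(interpolates), 1)  # stand-in critic

    gradients = tf.gradients(D_inter_logits, [interpolates])[0]
    slopes = tf.sqrt(tf.reduce_sum(tf.square(gradients), axis=[1, 2, 3]))
    gradient_penalty = tf.reduce_mean(tf.square(slopes - 1.0))
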
  • Bug in model constructors

    You have this line in all the constructors:

    if dataset_name == 'mnist' or 'fashion-mnist':
    

    It will always be True. You want if dataset_name == 'mnist' or dataset_name == 'fashion-mnist' or if dataset_name in ['mnist', 'fashion-mnist'].

    opened by goldsborough 1
  • link Fashion-MNIST readme to this notebook

    Hi, I'm the author of the Fashion-MNIST dataset. I found the GAN results on Fashion-MNIST very interesting, especially how ACGAN and infoGAN fail on Fashion-MNIST. I have already highlighted this notebook in the Fashion-MNIST README.md. I hope that is OK with you. 🙇

    Here are the links: https://github.com/zalandoresearch/fashion-mnist#other-explorations-of-fashion-mnist https://github.com/zalandoresearch/fashion-mnist/blob/master/README.zh-CN.md#生成对抗网络-gans

    opened by hanxiao 1
  • Tensorflow implementation of color image data (e.g. cifar10) VAE, or Pytorch implementation of VAE?

    Thanks for sharing the code.

    However, it seems the TensorFlow VAE has no implementation for color images such as cifar10; only MNIST and Fashion-MNIST are available. Also, the PyTorch version has no VAE implementation at all.

    Is there any way to use the VAE for color images? Thanks.

    opened by k123jack 0
  • Maybe it's unnecessary to use weight clipping in the LSGAN?

    Thank you for sharing your codes.

    I found that there is no weight clipping in the paper Least Squares Generative Adversarial Networks. The weight clipping is used in the paper Loss-Sensitive Generative Adversarial Networks on Lipschitz Densities. Obviously, your code is the former.

    I tried both situations, and the performance is better without weight clipping.

    I changed the network structure and adjusted the hyper-parameters to apply them to cifar-10. I thought maybe someone would need it, so I am leaving the URL of my code: https://github.com/AliceAria/Performance-comparison-of-GAN-on-cifar-10

    Thanks again.

    opened by AliceAria 0
  • Discriminator loss function of WGAN is wrong?

    Thanks for your code! I found that the discriminator loss function of WGAN in your code is the same as the discriminator loss function of GAN, but actually it should not be! Please check it!

    opened by changkaiyupingwen 0
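
    For reference, a sketch of the WGAN critic objective on raw critic outputs (no sigmoid, no log), together with the weight clipping the paper pairs it with; the placeholders stand in for the critic's outputs:

    import tensorflow as tf

    D_real = tf.placeholder(tf.float32, [None, 1])  # critic output on real data
    D_fake = tf.placeholder(tf.float32, [None, 1])  # critic output on generated data

    d_loss = tf.reduce_mean(D_fake) - tf.reduce_mean(D_real)  # critic minimizes this
    g_loss = -tf.reduce_mean(D_fake)
    # after each critic update (d_vars assumed to be the critic's variables):
    # clip_ops = [w.assign(tf.clip_by_value(w, -0.01, 0.01)) for w in d_vars]
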
  • Gradient Penalty code error in WGAN_GP

    At line 113 in WGAN_GP, I recommend changing the code

    alpha = tf.random_uniform(shape=self.inputs.get_shape(), minval=0., maxval=1.)

    to

    alpha = tf.random_uniform(shape=[BATCH_SIZE, 1, 1, 1], minval=0., maxval=1.)

    because one alpha value must be created for each sample in the batch.

    opened by jwc0906 2
  • Not an issue

    Hey @hwalsuklee,

    Thanks for sharing your code which helps me a lot. I have a question about the linear function in ops.py

    I would appreciate it if you could explain what you are doing in this function.

    def linear(input_, output_size, scope=None, stddev=0.02, bias_start=0.0, with_w=False):
        # Fully-connected layer: returns input_ * W + b.
        shape = input_.get_shape().as_list()

        with tf.variable_scope(scope or "Linear"):
            # Weight matrix of shape [input_dim, output_size], Gaussian-initialized.
            matrix = tf.get_variable("Matrix", [shape[1], output_size], tf.float32,
                                     tf.random_normal_initializer(stddev=stddev))
            # Bias vector of shape [output_size], initialized to bias_start.
            bias = tf.get_variable("bias", [output_size],
                                   initializer=tf.constant_initializer(bias_start))
            if with_w:
                # Optionally return the weight and bias variables as well.
                return tf.matmul(input_, matrix) + bias, matrix, bias
            else:
                return tf.matmul(input_, matrix) + bias
    

    Also, it would be great if you added some comments while implementing, to make the code easier for others to understand.

    Thanks.

    opened by Auth0rM0rgan 2