Adversarial-autoencoders - TensorFlow implementation of Adversarial Autoencoders

Overview

Adversarial Autoencoders (AAE)

  • TensorFlow implementation of Adversarial Autoencoders (ICLR 2016)
  • Similar to a variational autoencoder (VAE), an AAE imposes a prior on the latent variable z. However, instead of maximizing the evidence lower bound (ELBO) as a VAE does, an AAE uses an adversarial network structure to guide the model distribution of z to match the prior distribution (a rough sketch of this idea follows the list).
  • This repository contains reproductions of several experiments from the paper.
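
A minimal sketch of the three training phases, not the repository's exact code; encoder, decoder, discriminator, and prior_sampler are placeholder callables supplied by the caller:

import tensorflow as tf  # TF 1.x style, matching this repository

def aae_losses(x, encoder, decoder, discriminator, prior_sampler):
    # Autoencoder (reconstruction) phase.
    z_fake = encoder(x)                          # model distribution q(z|x)
    x_recon = decoder(z_fake)
    recon_loss = tf.reduce_mean(tf.square(x - x_recon))

    # Discriminator phase: separate prior samples from encoder samples.
    z_real = prior_sampler(tf.shape(z_fake)[0])  # samples from the prior p(z)
    d_real = discriminator(z_real)               # logits for prior samples
    d_fake = discriminator(z_fake)               # logits for encoder samples
    d_loss = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(
            labels=tf.ones_like(d_real), logits=d_real)
        + tf.nn.sigmoid_cross_entropy_with_logits(
            labels=tf.zeros_like(d_fake), logits=d_fake))

    # Regularization (generator) phase: the encoder tries to fool the
    # discriminator, pulling the aggregate q(z) toward the prior p(z).
    g_loss = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(
            labels=tf.ones_like(d_fake), logits=d_fake))
    return recon_loss, d_loss, g_loss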

Requirements

Implementation details

  • All AAE models are defined in src/models/aae.py.
  • The model corresponding to Fig. 1 and 3 in the paper can be found here: train and test.
  • The model corresponding to Fig. 6 in the paper can be found here: train and test.
  • The model corresponding to Fig. 8 in the paper can be found here: train and test.
  • Examples of how to use the AAE models can be found in experiment/aae_mnist.py.
  • The encoder, decoder, and all discriminators contain two fully connected layers with 1000 hidden units and the ReLU activation function. The decoder and all discriminators contain an additional fully connected output layer (a sketch follows this list).
  • Images are normalized to [-1, 1] before being fed into the encoder, and tanh is used as the output nonlinearity of the decoder.
  • All sub-networks are optimized with the Adam optimizer with beta1 = 0.5.
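
Based on the details above, a minimal sketch of these sub-networks (TF 1.x style; the function name, variable scopes, and initializers are assumptions, not the repository's code):

import tensorflow as tf

def fc_net(x, n_hidden=1000, n_out=None, out_nl=None, name='fc_net'):
    # Two fully connected layers with 1000 hidden units and ReLU.
    with tf.variable_scope(name, reuse=tf.AUTO_REUSE):
        h = tf.layers.dense(x, n_hidden, activation=tf.nn.relu)
        h = tf.layers.dense(h, n_hidden, activation=tf.nn.relu)
        if n_out is not None:
            # Decoder and discriminators add one more output layer.
            h = tf.layers.dense(h, n_out, activation=out_nl)
        return h

# Decoder output uses tanh since images are normalized to [-1, 1]:
#   recon = fc_net(z, n_out=784, out_nl=tf.tanh, name='decoder')
# Discriminators output a single logit:
#   logit = fc_net(z, n_out=1, out_nl=None, name='discriminator')
# All sub-networks use Adam with beta1 = 0.5:
optimizer = tf.train.AdamOptimizer(learning_rate=2e-4, beta1=0.5)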

Preparation

  • Download the MNIST dataset from here.
  • Set up the paths in experiment/aae_mnist.py: DATA_PATH is the directory containing the MNIST dataset; SAVE_PATH is the directory for output images and the trained model (see the example below).
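
For example, the two paths might be set like this (the values are placeholders for your own machine):

# experiment/aae_mnist.py
DATA_PATH = '/path/to/mnist/'    # directory containing the MNIST files
SAVE_PATH = '/path/to/output/'   # directory for output images and checkpoints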

Usage

The script experiment/aae_mnist.py contains all the experiments shown here. Detailed usage for each experiment is described below along with the results.

Arguments

  • --train: Train the model of Fig 1 and 3 in the paper.
  • --train_supervised: Train the model of Fig 6 in the paper.
  • --train_semisupervised: Train the model of Fig 8 in the paper.
  • --label: Incorporate label information in the adversarial regularization (Fig 3 in the paper).
  • --generate: Randomly sample images from the trained model.
  • --viz: Visualize the latent space and data manifold (only when --ncode is 2).
  • --supervise: Sample from the supervised model (Fig 6 in the paper) when --generate is set.
  • --load: The epoch ID of pre-trained model to be restored.
  • --ncode: Dimension of the code. Default: 2.
  • --dist_type: Type of prior distribution imposed on the hidden codes. Default: gaussian; use gmm for a Gaussian mixture distribution.
  • --noise: Add noise to encoder input (Gaussian with std=0.6).
  • --lr: Initial learning rate. Default: 2e-4.
  • --dropout: Keep probability for dropout. Default: 1.0.
  • --bsize: Batch size. Default: 128.
  • --maxepoch: Max number of epochs. Default: 100.
  • --encw: Weight of autoencoder loss. Default: 1.0.
  • --genw: Weight of z generator loss. Default: 6.0.
  • --disw: Weight of z discriminator loss. Default: 6.0.
  • --clsw: Weight of semi-supervised loss. Default: 1.0.
  • --ygenw: Weight of y generator loss. Default: 6.0.
  • --ydisw: Weight of y discriminator loss. Default: 6.0.

1. Adversarial Autoencoder

Architecture

Architecture Description
The top row is an autoencoder. z is sampled via the re-parameterization trick discussed in the variational autoencoder paper (a sketch follows). The bottom row is a discriminator that separates samples generated by the encoder from samples drawn from the prior distribution p(z).
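
For reference, a minimal sketch of the re-parameterization trick (TF 1.x style; sample_z, h, and n_code are illustrative names, not the repository's code). z is a deterministic function of the encoder outputs and external noise, so gradients can flow through the sampling step:

import tensorflow as tf

def sample_z(h, n_code):
    # h: encoder features; the two heads parameterize q(z|x).
    z_mean = tf.layers.dense(h, n_code)
    z_log_var = tf.layers.dense(h, n_code)
    eps = tf.random_normal(tf.shape(z_mean))      # eps ~ N(0, I)
    return z_mean + tf.exp(0.5 * z_log_var) * eps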

Hyperparameters

name                        value
Reconstruction Loss Weight  1.0
Latent z G/D Loss Weight    6.0 / 6.0
Batch Size                  128
Max Epoch                   400
Learning Rate               2e-4 (initial) / 2e-5 (after 100 epochs) / 2e-6 (after 300 epochs)
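
The staged learning rate above amounts to a simple step schedule; a sketch (the repository may implement the decay differently):

def learning_rate(epoch):
    # 2e-4 initially, 2e-5 after epoch 100, 2e-6 after epoch 300.
    if epoch < 100:
        return 2e-4
    elif epoch < 300:
        return 2e-5
    return 2e-6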

Usage

  • Training. Summaries, randomly sampled images, and the latent space during training will be saved in SAVE_PATH.
python aae_mnist.py --train \
  --ncode CODE_DIM \
  --dist_type TYPE_OF_PRIOR (`gaussian` or `gmm`)
  • Randomly sample data from the trained model. The image will be saved in SAVE_PATH as generate_im.png.
python aae_mnist.py --generate \
  --ncode CODE_DIM \
  --dist_type TYPE_OF_PRIOR (`gaussian` or `gmm`)\
  --load RESTORE_MODEL_ID
  • Visualize the latent space and data manifold (only when the code dimension is 2). Images will be saved in SAVE_PATH as generate_im.png and latent.png. For a Gaussian prior, there is one data-manifold image. For a mixture of 10 2D Gaussians, there are 10 data-manifold images, one per mixture component.
python aae_mnist.py --viz \
  --ncode CODE_DIM \
  --dist_type TYPE_OF_PRIOR (`gaussian` or `gmm`)\
  --load RESTORE_MODEL_ID

Result

  • For the 2D Gaussian prior, we can see sharp transitions (no gaps), as mentioned in the paper. Also, the learned manifold shows that almost all sampled images are readable.
  • For the mixture of 10 Gaussians, I uniformly sample images over a 2D square region, as for the 2D Gaussian, rather than sampling along the axes of the corresponding mixture component (that is shown in the next section; a sketch of this grid sampling appears below). In the gap regions between two components, good samples are less likely to be generated.
(Result images: prior distribution, learned coding space, and learned manifold.)
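
The uniform sampling mentioned above can be sketched as decoding a regular grid of 2D codes (the grid size and range are assumptions):

import numpy as np

def code_grid(n_side=20, lim=3.0):
    # Uniform n_side x n_side grid of 2D codes over [-lim, lim]^2.
    xs = np.linspace(-lim, lim, n_side)
    ys = np.linspace(-lim, lim, n_side)
    grid = np.array([[x, y] for y in ys for x in xs], dtype=np.float32)
    return grid  # shape [n_side * n_side, 2]; decode each row to render the manifold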

2. Incorporating label in the Adversarial Regularization

Architecture

Architecture Description
The only difference from the previous model is that the one-hot label is used as input to the discriminator, with one extra class for unlabeled data. For the mixture-of-Gaussians prior, real samples are drawn from the corresponding mixture component for each labeled class; for unlabeled data, real samples are drawn from the full mixture distribution (a sketch follows).
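
A sketch of drawing the "real" prior samples when labels are available (the circular layout of the 10 components, the radius, and the std are assumptions; the repository may place the components differently):

import numpy as np

def sample_gmm_prior(labels, n_class=10, radius=4.0, std=0.5):
    # One 2D Gaussian component per class, centered on a circle.
    labels = np.asarray(labels)
    angles = 2.0 * np.pi * labels / n_class
    means = radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    return (means + std * np.random.randn(len(labels), 2)).astype(np.float32)

# Labeled class -> its own component; unlabeled data -> a random component:
# z_real = sample_gmm_prior(np.random.randint(0, 10, size=batch_size))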

Hyperparameters

Hyperparameters are the same as previous section.

Usage

  • Training. Summaries, randomly sampled images, and the latent space will be saved in SAVE_PATH.
python aae_mnist.py --train --label\
  --ncode CODE_DIM \
  --dist_type TYPE_OF_PRIOR (`gaussian` or `gmm`)
  • Randomly sample data from the trained model. The image will be saved in SAVE_PATH as generate_im.png.
python aae_mnist.py --generate --label \
  --ncode CODE_DIM \
  --dist_type TYPE_OF_PRIOR (`gaussian` or `gmm`) \
  --load RESTORE_MODEL_ID
  • Visualize the latent space and data manifold (only when the code dimension is 2). Images will be saved in SAVE_PATH as generate_im.png and latent.png. For a Gaussian prior, there is one data-manifold image. For a mixture of 10 2D Gaussians, there are 10 data-manifold images, one per mixture component.
python aae_mnist.py --viz --label \
  --ncode CODE_DIM \
  --dist_type TYPE_OF_PRIOR (`gaussian` or `gmm`) \
  --load RESTORE_MODEL_ID

Result

  • Compared with the results in the previous section, incorporating label information yields a better-fitted code distribution.
  • The learned-manifold images demonstrate that each Gaussian component corresponds to one digit class. However, the style representation is not consistent within each mixture component, unlike in the paper. For example, in the rightmost column of the first-row experiment, the lower part of digit 1 tilts to the left while the lower part of digit 9 tilts to the right.
(Result images for two settings, full labels and 10k labeled plus 40k unlabeled data, showing the learned coding space and learned manifold.)

3. Supervised Adversarial Autoencoders

Architecture

Architecture Description
The decoder takes the code as well as a one-hot vector encoding the label as input (a sketch follows). This forces the network to learn a code independent of the label.
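
A sketch of this conditioning (TF 1.x style; the layer sizes follow the implementation details above, everything else is an assumption):

import tensorflow as tf

def supervised_decoder(z, y_onehot):
    # The label carries the class; z is left to carry only the style.
    decoder_in = tf.concat([z, y_onehot], axis=1)
    h = tf.layers.dense(decoder_in, 1000, activation=tf.nn.relu)
    h = tf.layers.dense(h, 1000, activation=tf.nn.relu)
    return tf.layers.dense(h, 784, activation=tf.tanh)  # images in [-1, 1]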

Hyperparameters

Usage

  • Training. Summaries and randomly sampled images will be saved in SAVE_PATH.
python aae_mnist.py --train_supervised \
  --ncode CODE_DIM
  • Randomly sample data from the trained model. The image will be saved in SAVE_PATH as sample_style.png.
python aae_mnist.py --generate --supervise \
  --ncode CODE_DIM \
  --load RESTORE_MODEL_ID

Result

  • The result images are generated using the same code for each column and the same digit label for each row.
  • When the code dimension is 2, each column clearly shares the same style. But for dimension 10, some digits are hardly readable. There may be implementation issues or poorly chosen hyper-parameters that make the code still depend on the label.
(Result images: code dim = 2 and code dim = 10.)

4. Semi-supervised learning

Architecture

Architecture Description
The encoder outputs the code z as well as the estimated label y. The decoder then takes the code z and the one-hot label y as input. A Gaussian prior is imposed on the code z and a Categorical prior is imposed on the label y. In this implementation, the semi-supervised classification phase is run every ten training steps when using 1000 labeled images (a sketch of this interleaving follows), and the one-hot label y is approximated by the softmax output.
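
A sketch of how the phases can be interleaved (the op names, placeholders, and batch format are illustrative, not the repository's code):

def train_epoch(sess, batches, ops, x_ph, y_ph):
    # batches yields (images, labels, labeled_flag) tuples.
    for step, (x, y, labeled) in enumerate(batches):
        sess.run(ops['recon'], feed_dict={x_ph: x})  # reconstruction phase
        sess.run(ops['disc'], feed_dict={x_ph: x})   # discriminator phase (z and y)
        sess.run(ops['gen'], feed_dict={x_ph: x})    # regularization phase
        if labeled and step % 10 == 0:               # classification every ten steps
            sess.run(ops['cls'], feed_dict={x_ph: x, y_ph: y})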

Hyperparameters

name                        value
Dimension of z              10
Reconstruction Loss Weight  1.0
Latent z G/D Loss Weight    6.0 / 6.0
Latent y G/D Loss Weight    6.0 / 6.0
Batch Size                  128
Max Epoch                   250
Learning Rate               1e-4 (initial) / 1e-5 (after 150 epochs) / 1e-6 (after 200 epochs)

Usage

  • Training. Summaries will be saved in SAVE_PATH.
python aae_mnist.py \
  --ncode 10 \
  --train_semisupervised \
  --lr 2e-4 \
  --maxepoch 250

Result

  • 1280 labels are used (128 labeled images per class).

Learning curve for the training set (computed only on the labeled training images).

Learning curve for the test set.

  • The accuracy on the test set reaches 97.10% at around 200 epochs.
Comments
  • 11 undefined names

    flake8 testing of https://github.com/conan7882/adversarial-autoencoders-tf on Python 3.7.0

    $ flake8 . --count --select=E901,E999,F821,F822,F823 --show-source --statistics

    ./src/models/aae.py:160:29: F821 undefined name 'n_class'
                z = tf.tile(z, [n_class, 1]) # [n_class*n_sample, n_code]
                                ^
    ./src/helper/visualizer.py:43:54: F821 undefined name 'pick_dim'
                    latent_var_list.extend(latent_var[:, pick_dim])
                                                         ^
    ./src/helper/generator.py:65:9: F821 undefined name 'dataflow'
            dataflow.setup(epoch_val=epochs_completed, batch_size=batch_size)
            ^
    ./src/helper/generator.py:65:34: F821 undefined name 'epochs_completed'
            dataflow.setup(epoch_val=epochs_completed, batch_size=batch_size)
                                     ^
    ./src/helper/generator.py:65:63: F821 undefined name 'batch_size'
            dataflow.setup(epoch_val=epochs_completed, batch_size=batch_size)
                                                                  ^
    ./src/dataflow/svhn.py:20:28: F821 undefined name 'file_name'
            data_mat = loadmat(file_name)
                               ^
    ./src/dataflow/svhn.py:21:43: F821 undefined name 'np'
            label_list = data_mat['y'].astype(np.int32)
                                              ^
    ./src/dataflow/svhn.py:22:40: F821 undefined name 'np'
            im_list = data_mat['X'].astype(np.float32)
                                           ^
    ./experiment/vae_mnist.py:176:5: F821 undefined name 'plt'
        plt.figure()
        ^
    ./experiment/vae_mnist.py:177:5: F821 undefined name 'plt'
        plt.imshow(np.squeeze(batch_data['im'][0]))
        ^
    ./experiment/vae_mnist.py:178:5: F821 undefined name 'plt'
        plt.show()
        ^
    11    F821 undefined name 'plt'
    11
    
    opened by cclauss
  • Cannot reproduce Fig. 4A from the paper

    Hello there,

    thank you for the great work! I was trying to reproduce the latent space distribution you show in Fig. 4A in the paper, and in section 2 in this repo. After cloning the repository, downloading the data and setting the paths to data and save folders, I run:

    python aae_mnist.py --train --label --dist gmm --ncode 2 --maxepoch 401
    

    and even after 400 epochs what I get in the latent space does not look like your picture, unfortunately. I'm using 10k labelled data, but not much changes if I use the full labels. Any ideas about what I may be missing?

    (attached: latent-space image)

    I'm using tensorflow 1.13.1, tensorflow_probability 0.6.0, and training on a NVIDIA Tesla K80 GPU. Any help is really appreciated!

    Thank you, Davide

    opened by dpiras
  • Matrix size-incompatible

    Hi, Thanks for your sharing! I met an error while running your code with the following command:

    $ python aae_mnist.py --train --ncode 2 --dist_type gaussian
    

    It turns out to be:

    [step: 100] loss: 152.4328 d_loss: 1.9309 g_loss: 3.7884
    [step: 200] loss: 130.0870 d_loss: 1.8414 g_loss: 2.8809
    [step: 300] loss: 118.0664 d_loss: 1.6845 g_loss: 2.2411
    [step: 400] loss: 111.2656 d_loss: 1.5877 g_loss: 1.9333
    ==== epoch: 0, lr:0.0002 ====
    [step: 468] loss: 108.2857 d_loss: 1.5520 g_loss: 1.7980
    [Valid]: [step: 468] loss: 89.4752
    Traceback (most recent call last):
      File "/usr/local/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1322, in _do_call
        return fn(*args)
      File "/usr/local/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1307, in _run_fn
        options, feed_dict, fetch_list, target_list, run_metadata)
      File "/usr/local/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1409, in _call_tf_sessionrun
        run_metadata)
    tensorflow.python.framework.errors_impl.InvalidArgumentError: Matrix size-incompatible: In[0]: [2,400], In[1]: [2,1000]
    	 [[Node: AE_1/decoder/decoder_FC/linear1/xw_plus_b/MatMul = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false, _device="/job:localhost/replica:0/task:0/device:GPU:0"](AE_1/MultivariateNormalDiag/sample/affine_linear_operator/forward/DistributionShape_1/undo_make_batch_of_event_sample_matrices/rotate_transpose/transpose, AE/decoder/decoder_FC/linear1/weights/read)]]
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "aae_mnist.py", line 356, in <module>
        train()
      File "aae_mnist.py", line 272, in train
        trainer.valid_epoch(sess, dataflow=valid_data, summary_writer=writer)
      File "../src/helper/trainer.py", line 428, in valid_epoch
        gen_im = sess.run(self._generate_op)
      File "/usr/local/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 900, in run
        run_metadata_ptr)
      File "/usr/local/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1135, in _run
        feed_dict_tensor, options, run_metadata)
      File "/usr/local/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1316, in _do_run
        run_metadata)
      File "/usr/local/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1335, in _do_call
        raise type(e)(node_def, op, message)
    tensorflow.python.framework.errors_impl.InvalidArgumentError: Matrix size-incompatible: In[0]: [2,400], In[1]: [2,1000]
    	 [[Node: AE_1/decoder/decoder_FC/linear1/xw_plus_b/MatMul = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false, _device="/job:localhost/replica:0/task:0/device:GPU:0"](AE_1/MultivariateNormalDiag/sample/affine_linear_operator/forward/DistributionShape_1/undo_make_batch_of_event_sample_matrices/rotate_transpose/transpose, AE/decoder/decoder_FC/linear1/weights/read)]]
    
    Caused by op 'AE_1/decoder/decoder_FC/linear1/xw_plus_b/MatMul', defined at:
      File "aae_mnist.py", line 356, in <module>
        train()
      File "aae_mnist.py", line 247, in train
        valid_model.create_generate_model(b_size=400)
      File "../src/models/aae.py", line 172, in create_generate_model
        self.layers['generate'] = (self.decoder(decoder_in) + 1. ) / 2.
      File "../src/models/aae.py", line 206, in decoder
        wd=self._wd, name='decoder_FC', init_w=INIT_W)
      File "../src/models/modules.py", line 35, in decoder_FC
        L.linear(name='linear1', nl=nl)
      File "/usr/local/lib/python3.6/site-packages/tensorflow/contrib/framework/python/ops/arg_scope.py", line 183, in func_with_args
        return func(*args, **current_args)
      File "../src/models/layers.py", line 105, in linear
        act = tf.nn.xw_plus_b(inputs, weights, biases)
      File "/usr/local/lib/python3.6/site-packages/tensorflow/python/ops/nn_ops.py", line 2219, in xw_plus_b
        mm = math_ops.matmul(x, weights)
      File "/usr/local/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py", line 2014, in matmul
        a, b, transpose_a=transpose_a, transpose_b=transpose_b, name=name)
      File "/usr/local/lib/python3.6/site-packages/tensorflow/python/ops/gen_math_ops.py", line 4279, in mat_mul
        name=name)
      File "/usr/local/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
        op_def=op_def)
      File "/usr/local/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3414, in create_op
        op_def=op_def)
      File "/usr/local/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1740, in __init__
        self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access
    
    InvalidArgumentError (see above for traceback): Matrix size-incompatible: In[0]: [2,400], In[1]: [2,1000]
    	 [[Node: AE_1/decoder/decoder_FC/linear1/xw_plus_b/MatMul = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false, _device="/job:localhost/replica:0/task:0/device:GPU:0"](AE_1/MultivariateNormalDiag/sample/affine_linear_operator/forward/DistributionShape_1/undo_make_batch_of_event_sample_matrices/rotate_transpose/transpose, AE/decoder/decoder_FC/linear1/weights/read)]]
    
    

    Could you please help me with this problem? Thanks!

    opened by HelloSeeing
Owner
Qian Ge, ECE PhD candidate at NCSU