Unrolled Generative Adversarial Networks
Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein
arxiv:1611.02163
This repo contains an example notebook with a TensorFlow implementation of unrolled GANs on a 2d mixture of Gaussians dataset.
When I run the code, a KeyError occurs on the line `cur_update_dict = graph_replace(update_dict, cur_update_dict)` in the for loop. How can I fix it? I am using Python 3.5 and TensorFlow 1.0.
I'm unable to reproduce mode collapse without the unrolling operation, an experimental result described in Appendices A and B of the paper. What network and training configurations are required to reproduce the mode collapse problem?
Thanks,
Robin
Hi,
Thanks for the nice example. I have a question about the statement that TensorFlow's built-in optimizers use custom C++ code for efficiency and do not construct a symbolic graph that is differentiable, which is why the notebook uses the optimization routines from Keras to compute the updates.
Does this mean one has to use a Keras optimizer to keep the graph differentiable (whereas a TensorFlow optimizer or tf.gradients cannot)?
Thanks!
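Roughly, yes, though it is not Keras as such that matters: what matters is that the parameter update is expressed in ordinary graph ops. tf.gradients itself differentiates fine; it is the built-in optimizers' fused update kernels that are opaque. A hedged illustration, reusing the notebook's `d_opt`, `disc_vars`, `loss`, and `extract_update_dict` names (TF 1.x):

```python
# Keras optimizers assemble the Adam update out of ordinary graph ops and wrap
# the result in tf.assign, so each variable's post-update value is a symbolic
# tensor that extract_update_dict can pull out and graph_replace can substitute.
updates = d_opt.get_updates(disc_vars, [], loss)   # list of assign ops
update_dict = extract_update_dict(updates)         # {var: symbolic updated value}

# tf.train.AdamOptimizer instead applies its update inside a fused C++ kernel
# (ApplyAdam); the updated values never surface as graph tensors, so there is
# nothing to substitute or differentiate through.
opaque_train_op = tf.train.AdamOptimizer(1e-4).minimize(loss, var_list=disc_vars)
```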
One has to modify the notebook in several places: `sample_n(num)` calls should be replaced with `sample(num)`, and `tf.nn.sigmoid_cross_entropy_with_logits` must be supplied with the explicit keyword arguments `logits` and `labels`. Both fixes are sketched below.
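A minimal sketch of the two changes, reusing the notebook's `ds.Normal` noise distribution and score tensors:

```python
# Old API: noise = ds.Normal(mu, sigma).sample_n(batch_size)
noise = ds.Normal(tf.zeros(params['z_dim']),
                  tf.ones(params['z_dim'])).sample(params['batch_size'])

# Old API: tf.nn.sigmoid_cross_entropy_with_logits(real_score, tf.ones_like(real_score))
loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(logits=real_score,
                                            labels=tf.ones_like(real_score)))
```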
Hi, I tried to run the code on TF 1.15, but I get an error in the "construct model and training ops" section.
code:

```python
tf.reset_default_graph()

data = sample_mog(params['batch_size'])
noise = ds.Normal(tf.zeros(params['z_dim']),
                  tf.ones(params['z_dim'])).sample(params['batch_size'])

# Build generator and discriminator
with slim.arg_scope([slim.fully_connected],
                    weights_initializer=tf.orthogonal_initializer(gain=1.4)):
    samples = generator(noise, output_dim=params['x_dim'])
    real_score = discriminator(data)
    fake_score = discriminator(samples, reuse=True)

# Standard GAN discriminator loss
loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(logits=real_score, labels=tf.ones_like(real_score)) +
    tf.nn.sigmoid_cross_entropy_with_logits(logits=fake_score, labels=tf.zeros_like(fake_score)))

gen_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, "generator")
disc_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, "discriminator")

# Discriminator update via the Keras optimizer, so the updates stay symbolic
d_opt = Adam(lr=params['disc_learning_rate'], beta_1=params['beta1'], epsilon=params['epsilon'])
updates = d_opt.get_updates(disc_vars, [], loss)
d_train_op = tf.group(*updates, name="d_train_op")

if params['unrolling_steps'] > 0:
    # Get dictionary mapping from variables to their update value after one optimization step
    update_dict = extract_update_dict(updates)
    cur_update_dict = update_dict
    for i in xrange(params['unrolling_steps'] - 1):
        # Compose the update with itself, unrolling one more step each iteration
        cur_update_dict = graph_replace(update_dict, cur_update_dict)
    # Generator loss after `unrolling_steps` discriminator updates
    unrolled_loss = graph_replace(loss, cur_update_dict)
else:
    unrolled_loss = loss

g_train_opt = tf.train.AdamOptimizer(params['gen_learning_rate'],
                                     beta1=params['beta1'], epsilon=params['epsilon'])
g_train_op = g_train_opt.minimize(-unrolled_loss, var_list=gen_vars)
```
Error details:
```
NameError                                 Traceback (most recent call last)
in ()
     27 if params['unrolling_steps'] > 0:
     28     # Get dictionary mapping from variables to their update value after one optimization step
---> 29     update_dict = extract_update_dict(updates)
     30     cur_update_dict = update_dict
     31     for i in xrange(params['unrolling_steps'] - 1):

in extract_update_dict(update_ops)
     19             updates[var.value()] = var + value
     20         else:
---> 21             raise ValueError("Update op type (%s) must be of type Assign or AssignAdd" % update_op.op.type)
     22     return updates

NameError: global name 'update_op' is not defined
```
How can I solve this error?
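For what it's worth, the NameError appears to be masking the real failure: in the notebook's `extract_update_dict`, the `raise` statement references `update_op`, a name that does not exist in that scope, so the function crashes while trying to report that one of the Keras update ops is neither `Assign` nor `AssignAdd` (newer Keras versions emit different op types). A hedged sketch of the helper with the name fixed, so the underlying ValueError becomes visible:

```python
from collections import OrderedDict

def extract_update_dict(update_ops):
    """Map each variable to its symbolic post-update value (sketch of the notebook's helper)."""
    name_to_var = {v.name: v for v in tf.global_variables()}
    updates = OrderedDict()
    for update in update_ops:
        var_name = update.op.inputs[0].name
        var = name_to_var[var_name]
        value = update.op.inputs[1]
        if update.op.type == 'Assign':
            updates[var.value()] = value
        elif update.op.type == 'AssignAdd':
            updates[var.value()] = var + value
        else:
            # The original referenced `update_op` here, producing the NameError
            # instead of this ValueError.
            raise ValueError("Update op type (%s) must be of type Assign or AssignAdd" % update.op.type)
    return updates
```

If the ValueError then fires, the Keras version bundled with TF 1.15 is likely producing update ops the notebook does not recognize; pinning the older Keras the notebook was written against may be the simplest workaround.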
Is there any way to migrate this code to TensorFlow 2.0+? It seems that in v2 we no longer have access to the graph_replace function.
Also, in v2's Keras implementation, the optimizer.get_updates() method only accepts two arguments (loss, variables) as opposed to three, and appears to be effectively abstract, since it breaks with a "no gradients exist" error when you try to call it.
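I don't know of an official port, but one possible route, sketched under assumptions: in eager TF2 you can drop graph_replace entirely and unroll by hand, treating the updated discriminator weights as differentiable tensors and calling the discriminator functionally with them. Everything below (`disc_fn`, `gan_loss`, `unrolled_loss`) is hypothetical, and the inner step uses plain SGD rather than the notebook's unrolled Adam:

```python
import tensorflow as tf

def disc_fn(x, weights):
    # Hypothetical 2-layer MLP discriminator written as a pure function of its
    # weights, so updated weights can be plain (differentiable) tensors.
    h = tf.nn.relu(tf.matmul(x, weights[0]) + weights[1])
    return tf.matmul(h, weights[2]) + weights[3]

def gan_loss(real_score, fake_score):
    return tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(
            logits=real_score, labels=tf.ones_like(real_score)) +
        tf.nn.sigmoid_cross_entropy_with_logits(
            logits=fake_score, labels=tf.zeros_like(fake_score)))

def unrolled_loss(data, samples, disc_weights, steps, disc_lr):
    weights = list(disc_weights)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            tape.watch(weights)
            inner = gan_loss(disc_fn(data, weights), disc_fn(samples, weights))
        grads = tape.gradient(inner, weights)
        # Inner update as ordinary tensor ops (SGD here; the paper unrolls Adam).
        weights = [w - disc_lr * g for w, g in zip(weights, grads)]
    return gan_loss(disc_fn(data, weights), disc_fn(samples, weights))
```

The generator step then differentiates `-unrolled_loss(...)` with an outer tf.GradientTape; because every inner update is an ordinary tensor op, backpropagation through the unrolled steps happens automatically, with no graph_replace or get_updates needed.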
Hi,
I noticed the first mixture component has a higher density mass. I think this is due to the line
`thetas = np.linspace(0, 2 * np.pi, n_mixture)`
which includes that location twice (once at 0 and once at 2π), so two components coincide there.
That was probably not intended; a fix is sketched below.
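If so, the standard fix is np.linspace's endpoint argument, so the `n_mixture` angles tile the circle without repeating 0:

```python
import numpy as np

# endpoint=False excludes 2*pi, so no component lands on theta = 0 twice.
thetas = np.linspace(0, 2 * np.pi, n_mixture, endpoint=False)
```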
Hi. I have read your paper. In equation (12), I don't see the second term on the right-hand side implemented anywhere in your code. How do you implement the derivative of theta_D with respect to theta_G? Lin
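For anyone wondering the same thing: as far as I can tell, that term is never written out explicitly. Because the unrolled parameters theta_D^K are constructed symbolically via graph_replace, differentiating the unrolled loss with respect to the generator parameters backpropagates through all K discriminator updates, so the d(theta_D^K)/d(theta_G) term of equation (12) falls out of tf.gradients automatically:

```python
# Hedged sketch: minimizing -unrolled_loss w.r.t. gen_vars (as the notebook's
# g_train_opt.minimize call does) runs tf.gradients through the
# graph_replace-built update graph, which carries the d(theta_D^K)/d(theta_G)
# path in addition to the direct dependence of the loss on theta_G.
g_grads = tf.gradients(-unrolled_loss, gen_vars)
```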