DCGAN LSGAN WGAN-GP DRAGAN PyTorch

Overview

Recommendation

  • Our GAN-based work for facial attribute editing: AttGAN.

News

  • 8 April 2019: We have re-implemented these GANs in TensorFlow 2! The old version is available at v1 or in the "v1" directory.
  • PyTorch Version


GANs - Tensorflow 2

Tensorflow 2 implementations of DCGAN, LSGAN, WGAN-GP and DRAGAN.

Exemplar results

Fashion-MNIST

DCGAN LSGAN WGAN-GP DRAGAN

CelebA

DCGAN LSGAN
WGAN-GP DRAGAN

Anime

WGAN-GP DRAGAN

Usage

  • Environment

    • Python 3.6

    • TensorFlow 2.2, TensorFlow Addons 0.10.0

    • OpenCV, scikit-image, tqdm, oyaml

    • we recommend Anaconda or Miniconda; you can then create the TensorFlow 2.2 environment with the commands below

      conda create -n tensorflow-2.2 python=3.6
      
      source activate tensorflow-2.2
      
      conda install scikit-image tqdm tensorflow-gpu=2.2
      
      conda install -c conda-forge oyaml
      
      pip install tensorflow-addons==0.10.0
    • NOTICE: if you create a new conda environment, remember to activate it before running any other command

      source activate tensorflow-2.2
  • Datasets

  • Examples of training

    • Fashion-MNIST DCGAN

      CUDA_VISIBLE_DEVICES=0 python train.py --dataset=fashion_mnist --epoch=25 --adversarial_loss_mode=gan
    • CelebA DRAGAN

      CUDA_VISIBLE_DEVICES=0 python train.py --dataset=celeba --epoch=25 --adversarial_loss_mode=gan --gradient_penalty_mode=dragan
    • Anime WGAN-GP

      CUDA_VISIBLE_DEVICES=0 python train.py --dataset=anime --epoch=200 --adversarial_loss_mode=wgan --gradient_penalty_mode=wgan-gp --n_d=5
    • see more training examples in commands.sh

    • TensorBoard for loss visualization

      tensorboard --logdir ./output/fashion_mnist_gan/summaries --port 6006
Comments
  • GPU is full

    Hello, when the code runs, the GPU memory is completely occupied. What happened? My Python version is 3.6, my TensorFlow version is 1.11, and my GPU is a 1080 Ti, thanks! The error is as follows:

      An error occurred while starting the kernel
      2019 08:40:45.229298: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
      2019 08:40:45.601632: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1411] Found device 0 with properties: name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.582 pciBusID: 0000:65:00.0 totalMemory: 11.00GiB freeMemory: 9.10GiB
      2019 08:40:45.603929: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1490] Adding visible gpu devices: 0
      2019 08:40:46.556261: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] Device interconnect StreamExecutor with strength 1 edge matrix:
      2019 08:40:46.558110: I tensorflow/core/common_runtime/gpu/gpu_device.cc:977] 0
      2019 08:40:46.558407: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0: N
      2019 08:40:46.558836: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1103] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 8789 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:65:00.0, compute capability: 6.1)

    opened by lixingbao 9
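
    Note: the log above is TensorFlow's normal start-up output; by default TensorFlow pre-allocates almost all GPU memory, which looks like "the GPU is full" but is not an error. A minimal, hedged sketch for TensorFlow 2.x (this repo targets TF 2.2; in TF 1.x the equivalent is a session ConfigProto with gpu_options.allow_growth=True) that allocates GPU memory on demand instead:

      # Enable on-demand GPU memory growth; run this before building any model.
      import tensorflow as tf

      gpus = tf.config.experimental.list_physical_devices('GPU')
      for gpu in gpus:
          tf.config.experimental.set_memory_growth(gpu, True)
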
  • About problem with generating image size

    Hello, can your code only generate 64×64 images? Can I generate an image of a specified size, for example 256×256? If so, what parameters need to be modified? Thank you!

    opened by lixingbao 6
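
    For reference, generators in this family of DCGAN-style models typically start from a 4×4 feature map and double the spatial size with each upsampling layer, so the output resolution is roughly 4 · 2^n_upsamplings: 64×64 needs 4 upsamplings and 256×256 would need 6 (with a matching number of downsamplings in the discriminator). The exact flag and parameter names in this repo are not shown here, so treat them as assumptions; the arithmetic itself is just:

      # Hedged sketch: how many doubling layers are needed for a target size,
      # assuming the generator starts from a 4x4 feature map.
      import math

      def n_layers_for(output_size, base_size=4):
          n = math.log2(output_size / base_size)
          assert n.is_integer(), "output_size must be base_size * 2**k"
          return int(n)

      print(n_layers_for(64))   # 4 upsamplings -> 64x64
      print(n_layers_for(256))  # 6 upsamplings -> 256x256
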
  • WGAN-GP does not work!!!

    I have updated the code from TensorFlow 2.0-alpha to TensorFlow 2.0, and everything works well except for WGAN-GP (it works in tf2.0-alpha). In tf2.0, the gradient penalty seems very unstable, but I cannot find the problem. Can anybody help? I would be grateful.

    help wanted 
    opened by LynnHo 3
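
    For reference, a minimal, hedged sketch of a WGAN-GP gradient penalty written with tf.GradientTape (not this repo's exact implementation); D is assumed to be a critic mapping an NHWC image batch to per-sample scalars, and the result would typically be added to the critic loss with a weight of 10:

      # Gradient penalty: push ||dD/dx|| toward 1 at points interpolated
      # between real and fake samples.
      import tensorflow as tf

      def gradient_penalty(D, real, fake):
          alpha = tf.random.uniform([tf.shape(real)[0], 1, 1, 1], 0.0, 1.0)
          interp = real + alpha * (fake - real)          # random points between real and fake
          with tf.GradientTape() as tape:
              tape.watch(interp)
              pred = D(interp, training=True)
          grad = tape.gradient(pred, interp)             # dD/dx at the interpolates
          norm = tf.norm(tf.reshape(grad, [tf.shape(grad)[0], -1]), axis=1)
          return tf.reduce_mean((norm - 1.0) ** 2)
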
  • how about 3D data?

    Hi!

    The cartoon faces' original size is [96, 96, 3], where 3 means 3-channel RGB data. But if I have grayscale data with 3 slices, i.e. the size is [121, 145, 3], can I simply use this code? If not, what should I change based on this code?

    Thanks for your work! I look forward to your response.

    opened by KakaVlasic 3
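
    A hedged sketch of one way to adapt such data, assuming the three slices are simply treated as three channels and each [121, 145, 3] volume is resized to a square size the conv stacks can halve and double cleanly (the dataset-loading hooks in this repo may differ):

      import numpy as np
      import tensorflow as tf

      def preprocess(volume):                      # volume: [121, 145, 3]
          x = tf.cast(volume, tf.float32)
          x = tf.image.resize(x, [128, 128])       # square, power-of-two-friendly size
          return x / 127.5 - 1.0                   # scale [0, 255] values to [-1, 1]

      volumes = np.random.uniform(0, 255, (16, 121, 145, 3)).astype('float32')  # stand-in data
      dataset = (tf.data.Dataset.from_tensor_slices(volumes)
                 .map(preprocess)
                 .batch(8))
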
  • c_iter isn't used

    c_iter is defined but not used in any of the WGAN files. What is the correct behaviour, i.e. should the critic be optimised heavily at the start or not?

    Also, can you confirm that you use a learning rate of 0.0002 regardless of whether RMSProp or Adam is used as the optimiser?

    opened by davidADSP 3
  • About n_critic = 5

    I used this code to train on cartoon pictures with WGAN-GP. I don't know what n_critic = 5 means, or why you set it. Thanks.

    opened by tuoniaoren 3
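
    For context, n_critic (exposed as --n_d in the training commands above) is, in WGAN terms, the number of critic/discriminator updates performed for every single generator update. A hedged sketch of that schedule, with train_d_step, train_g_step, and dataset standing in for the repo's actual training code:

      n_critic = 5

      for step, real_batch in enumerate(dataset):
          d_loss = train_d_step(real_batch)        # update the critic every iteration
          if (step + 1) % n_critic == 0:
              g_loss = train_g_step()              # update the generator every n_critic iterations
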
  • Error while using celeba dataset

    I am getting this error while running train.py: TypeError: Input 'filename' of 'ReadFile' Op has type float32 that does not match expected type of string. Please help with this. Thanks in advance.

    opened by yksolanki9 2
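
    That TypeError usually means decoded float32 image tensors, rather than filename strings, reached tf.io.read_file. A hedged sketch of a string-path pipeline (img_paths is a hypothetical list of CelebA .jpg paths; this is not necessarily this repo's loader):

      import tensorflow as tf

      def load_image(path):
          raw = tf.io.read_file(path)                    # `path` must be a tf.string
          img = tf.image.decode_jpeg(raw, channels=3)
          img = tf.image.resize(tf.cast(img, tf.float32), [64, 64])
          return img / 127.5 - 1.0

      dataset = (tf.data.Dataset.from_tensor_slices(img_paths)   # strings in, not floats
                 .map(load_image, num_parallel_calls=tf.data.experimental.AUTOTUNE)
                 .batch(64))
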
  • Where do you freeze the gradient descent?

    Hello, I am confused about how you freeze the gradient descent for the other model. When training d_step, I suppose the generator should be frozen, as f_logit is produced by the generator and used in d_loss; similarly, when training g_step, I suppose the discriminator should be frozen, as f_logit depends on the discriminator.

    However, I do not see where you stop those gradients from flowing into the unwanted part, either the generator or the discriminator. Would you please provide some hints? Thank you.

    opened by ybsave 2
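
    In TensorFlow 2 no explicit freezing is needed: each step records its own tf.GradientTape and applies the resulting gradients only to that network's trainable_variables, so the other network's weights are never updated even though its outputs appear in the loss. A hedged sketch (G, D, the loss functions, and the two optimizers are assumed to be built elsewhere):

      import tensorflow as tf

      @tf.function
      def train_D(real, z):
          with tf.GradientTape() as tape:
              fake = G(z, training=True)
              d_loss = d_loss_fn(D(real, training=True), D(fake, training=True))
          grads = tape.gradient(d_loss, D.trainable_variables)   # gradients for D only
          D_optimizer.apply_gradients(zip(grads, D.trainable_variables))
          return d_loss

      @tf.function
      def train_G(z):
          with tf.GradientTape() as tape:
              g_loss = g_loss_fn(D(G(z, training=True), training=True))
          grads = tape.gradient(g_loss, G.trainable_variables)   # D's weights untouched
          G_optimizer.apply_gradients(zip(grads, G.trainable_variables))
          return g_loss
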
  • do you try to use Resnet in wgan-gp?

    Have you compared the DCGAN network structure with a ResNet structure in WGAN-GP? Would ResNet give better results than the DCGAN structure?

    opened by tuoniaoren 2
  • running question

    Does the program ever stop without errors after running for some time, with the GPU ceasing to work? I changed the value of num_threads (from 16 to 10) and ran it again. I don't know whether it is because the value was too high.

    opened by tuoniaoren 2
  • License

    Hi Zhenliang He, I wonder whether you would be willing to please license this code under an open source license? If so please add a license, or if not please just close this request. Thanks, Connelly

    opened by connellybarnes 2
  • A problem for your DCGAN architecture

    Hi, your work is really interesting, but I have found something in your DCGAN that I didn't understand. You generate noise twice when training the discriminator and the generator in each iteration, like the blue lines in the picture attached to the original issue. In soumith's code (including some official DCGAN code), he only generates noise once: https://github.com/soumith/dcgan.torch. Could you please tell me the reason?

    opened by RayGuo-C 1
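
    For what it's worth, z is just a sample from the prior, so drawing a fresh batch for the discriminator step and another for the generator step changes only which minibatch of fakes each step sees, not the objective being optimized. A hedged illustration of the two schedules:

      import tensorflow as tf

      z_dim, batch_size = 128, 64

      # (a) fresh noise for each step, as questioned above
      z_for_d = tf.random.normal([batch_size, z_dim])   # used when updating D
      z_for_g = tf.random.normal([batch_size, z_dim])   # used when updating G

      # (b) one shared noise batch reused by both steps, as the issue describes dcgan.torch doing
      z_shared = tf.random.normal([batch_size, z_dim])
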
  • NameError: name 'shape' is not defined

      Traceback (most recent call last):
        File "D:/github/DCGAN-LSGAN-WGAN-GP-DRAGAN-Tensorflow-2-master/DCGAN-LSGAN-WGAN-GP-DRAGAN-Tensorflow-2-master/train.py", line 91, in <module>
          G = module.ConvGenerator(input_shape=(1, 1, args.z_dim), output_channels=shape[-1], n_upsamplings=n_G_upsamplings, name='G_%s' % args.dataset)
      NameError: name 'shape' is not defined

    Please tell me why.

    opened by Tonyztj 0
Owner

Zhenliang He