pix2pix-pytorch

PyTorch implementation of "Image-to-Image Translation Using Conditional Adversarial Networks".

Based on pix2pix by Phillip Isola et al.

The examples from the paper:

examples

Prerequisites

  • Linux
  • Python, Numpy, PIL
  • pytorch 0.4.0
  • torchvision 0.2.1

Getting Started

  • Clone this repo:

    git clone [email protected]:mrzhu-cool/pix2pix-pytorch.git cd pix2pix-pytorch

  • Get the dataset:

    unzip dataset/facades.zip

  • Train the model:

    python train.py --dataset facades --cuda

  • Test the model:

    python test.py --dataset facades --cuda

Acknowledgments

This code is a simple, easy-to-understand implementation of pix2pix. Note that it uses a downsampling-resblocks-upsampling structure instead of the U-Net structure, so its results may be inconsistent with those presented in the paper.
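The downsampling-resblocks-upsampling generator mentioned above can be sketched roughly as follows. This is an illustrative sketch, not this repo's exact network: the layer widths, kernel sizes, and block count are assumptions.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Residual block: two 3x3 convs with a skip connection around them."""
    def __init__(self, dim):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(dim, dim, 3, padding=1),
            nn.BatchNorm2d(dim),
            nn.ReLU(inplace=True),
            nn.Conv2d(dim, dim, 3, padding=1),
            nn.BatchNorm2d(dim),
        )

    def forward(self, x):
        return x + self.block(x)

class ResNetGenerator(nn.Module):
    """Downsample twice, apply residual blocks, upsample back to the input size."""
    def __init__(self, in_ch=3, out_ch=3, ngf=64, n_blocks=6):
        super().__init__()
        layers = [nn.Conv2d(in_ch, ngf, 7, padding=3), nn.ReLU(inplace=True)]
        # downsampling: two stride-2 convs halve the resolution twice
        layers += [nn.Conv2d(ngf, ngf * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                   nn.Conv2d(ngf * 2, ngf * 4, 3, stride=2, padding=1), nn.ReLU(inplace=True)]
        # residual blocks at the bottleneck resolution
        layers += [ResBlock(ngf * 4) for _ in range(n_blocks)]
        # upsampling: two stride-2 transposed convs restore the resolution
        layers += [nn.ConvTranspose2d(ngf * 4, ngf * 2, 3, stride=2, padding=1, output_padding=1),
                   nn.ReLU(inplace=True),
                   nn.ConvTranspose2d(ngf * 2, ngf, 3, stride=2, padding=1, output_padding=1),
                   nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(ngf, out_ch, 7, padding=3), nn.Tanh()]
        self.model = nn.Sequential(*layers)

    def forward(self, x):
        return self.model(x)
```

Because the resolution is fully restored by the transposed convs, the output has the same spatial size as the input.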

For a more sophisticated and better-organized implementation, the pytorch-CycleGAN-and-pix2pix repo by Jun-Yan Zhu is highly recommended.

Comments
  • Minor changes

    Minor changes

    Nice work. Regarding not getting similar results, check section 3.4 of the paper. I made some changes accordingly.

    1. Efficient backward from discriminator while training generator.
    2. The Generator model does not have a batchNorm between encoder and decoder.
    3. Adjusted discriminator such that the receptive field is 70x70. This is what they use in the best performing result in the paper.
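    Point 3 above can be illustrated with a sketch of a discriminator whose receptive field is 70x70, following the paper's C64-C128-C256-C512 description (channel widths and norm placement here are taken from the paper, not from this PR's exact diff):

    ```python
    import torch
    import torch.nn as nn

    def patchgan_70(in_ch=6, ndf=64):
        """70x70 PatchGAN: 4x4 convs, three with stride 2 and two with stride 1.
        The input is the concatenated (input, target) image pair, hence in_ch=6."""
        return nn.Sequential(
            nn.Conv2d(in_ch, ndf, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf, ndf * 2, 4, stride=2, padding=1),
            nn.BatchNorm2d(ndf * 2), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf * 2, ndf * 4, 4, stride=2, padding=1),
            nn.BatchNorm2d(ndf * 4), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf * 4, ndf * 8, 4, stride=1, padding=1),
            nn.BatchNorm2d(ndf * 8), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf * 8, 1, 4, stride=1, padding=1),  # one logit per 70x70 input patch
        )
    ```

    Each element of the output grid scores one 70x70 patch of the input pair, which is what makes this the paper's best-performing discriminator variant.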

    Kind regards, Sagar M. Waghmare

    opened by sagarwaghmare69 5
  • Models' forward are incorrect

    Models' forward are incorrect

    Hey, I just realized that the models' forward sequence is wrong. E.g. the current code (input is (nc) x 256 x 256):

    e1 = self.conv1(input)
    # input is (ngf) x 128 x 128
    e2 = self.batch_norm2(self.conv2(self.leaky_relu(e1)))
    

    It should instead be (input is (nc) x 256 x 256):

    e1 = self.conv1(input)
    # input is (ngf) x 128 x 128
    e2 = self.conv2(self.leaky_relu(self.batch_norm2(e1)))
    

    Hence both the discriminator and generator models are incorrect.

    opened by sagarwaghmare69 3
  • Fixed saving and volatility

    Fixed saving and volatility

    Thanks for the repo! Two quick improvements that stood out to me:

    1. When you save/load models in pytorch, you should be saving and loading their state_dict, and not the model itself
    2. Rather than setting and resetting requires_grad on all of the parameters, you can use the volatile keyword to get the same effect
    opened by atgambardella 0
  • should I add set_requires_grad(net_d, True/False) for discriminator during training?

    should I add set_requires_grad(net_d, True/False) for discriminator during training?

    Hello, thank you for your great work. However, I think you should add set_requires_grad(net_d, True/False) for the discriminator during training. Is that right?

    modified code:

        # (1) Update D network
        ######################
        set_requires_grad(net_d, True)  # add it here
        optimizer_d.zero_grad()

        # train with fake
        fake_ab = torch.cat((real_a, fake_b), 1)
        pred_fake = net_d.forward(fake_ab.detach())
        loss_d_fake = criterionGAN(pred_fake, False)

        # train with real
        real_ab = torch.cat((real_a, real_b), 1)
        pred_real = net_d.forward(real_ab)
        loss_d_real = criterionGAN(pred_real, True)

        # Combined D loss
        loss_d = (loss_d_fake + loss_d_real) * 0.5
        loss_d.backward()
        optimizer_d.step()

        set_requires_grad(net_d, False)  # add it here
        ######################

    I am looking forward to hearing from you. Thank you in advance!
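    For reference, a minimal version of the set_requires_grad helper used above. The name matches the helper in pytorch-CycleGAN-and-pix2pix; this repo does not ship one, so this is an assumed implementation:

    ```python
    import torch.nn as nn

    def set_requires_grad(nets, requires_grad=False):
        """Enable or disable gradients for all parameters of one or more networks."""
        if not isinstance(nets, (list, tuple)):
            nets = [nets]
        for net in nets:
            if net is not None:
                for param in net.parameters():
                    param.requires_grad = requires_grad

    # example: freeze a small network's parameters
    net = nn.Linear(2, 2)
    set_requires_grad(net, False)
    ```

    Freezing D while updating G avoids accumulating unneeded gradients in the discriminator; it does not change the values G receives, since D's output is still computed normally.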

    opened by vince2003 0
  • Input and output sizes of images?

    Input and output sizes of images?

    @mrzhu-cool Why is the output smaller than the input in the training results? How can I modify the code to customize the output size, or make it the same size as the input image?

    opened by ArtScanner 0
  • Consider only L1 loss

    Consider only L1 loss

    I want to consider only the L1 loss. Is it correct to comment out the GAN term of the generator loss, i.e.

    #loss_g = loss_g_gan + loss_g_l1

    and use only loss_g = loss_g_l1?

    Do I need to change anything in discriminator part?

    Thanks in advance
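    A sketch of the L1-only generator update described above. The generator, optimizer, and data here are stand-ins (the names net_g, criterionL1, and lamb mirror the repo's train.py style but are assumptions):

    ```python
    import torch
    import torch.nn as nn

    # stand-ins for the repo's generator and a training pair
    net_g = nn.Conv2d(3, 3, 3, padding=1)
    optimizer_g = torch.optim.Adam(net_g.parameters(), lr=2e-4)
    criterionL1 = nn.L1Loss()
    lamb = 10
    real_a = torch.randn(1, 3, 16, 16)
    real_b = torch.randn(1, 3, 16, 16)

    optimizer_g.zero_grad()
    fake_b = net_g(real_a)
    # loss_g = loss_g_gan + loss_g_l1        # original combined objective
    loss_g = criterionL1(fake_b, real_b) * lamb  # L1-only objective
    loss_g.backward()
    optimizer_g.step()
    ```

    With only the L1 term, the discriminator no longer influences the generator, so its update step can be skipped entirely rather than changed.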

    opened by riktimmondal 3
  • PatchGAN part in the implementation

    PatchGAN part in the implementation

    Hi, I had a hard time finding the PatchGAN part of the code, although you added a comment before the discriminator. Can you please point out where exactly it is and how it is used? Thanks
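    In pix2pix-style code, PatchGAN is usually not a separate module: it is implicit in the discriminator ending with a conv layer (no global pooling or fully-connected layer), so the output is a spatial grid of logits, each covering only a local patch, and the GAN loss averages over that grid. A minimal illustrative sketch (not this repo's exact discriminator):

    ```python
    import torch
    import torch.nn as nn

    # a tiny patch discriminator: the final conv leaves a spatial grid of logits,
    # so each output element only "sees" a local patch of the input
    net_d = nn.Sequential(
        nn.Conv2d(6, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(64, 1, 4, stride=1, padding=1),  # no pooling / linear head
    )

    pair = torch.randn(1, 6, 256, 256)  # concatenated (input, target) pair
    pred = net_d(pair)                  # grid of per-patch logits, not one scalar
    # the GAN loss is averaged over all patches:
    loss = nn.BCEWithLogitsLoss()(pred, torch.ones_like(pred))
    ```

    So the "PatchGAN part" is simply the shape of `pred`: scoring every patch independently and averaging is equivalent to running one small discriminator convolutionally across the image.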

    opened by subham913 1