BEGAN in PyTorch

Overview

This project is still in progress. If you are looking for the working code, use BEGAN-tensorflow.

Requirements

  • Python 2.7
  • PyTorch 0.1.10+ with torchvision
  • tqdm
  • TensorBoard (optional, only needed with --use_tensorboard=True)

Usage

First, download the CelebA dataset with:

$ apt-get install p7zip-full # ubuntu
$ brew install p7zip # Mac
$ python download.py

or use your own dataset by placing images like this:

data
└── YOUR_DATASET_NAME
    ├── xxx.jpg (name doesn't matter)
    ├── yyy.jpg
    └── ...

To train a model:

$ python main.py --dataset=CelebA --num_gpu=1
$ python main.py --dataset=YOUR_DATASET_NAME --num_gpu=4 --use_tensorboard=True

To test a model (substitute your own load_path):

$ python main.py --dataset=CelebA --load_path=./logs/CelebA_0405_124806 --num_gpu=0 --is_train=False --split valid

Results

(in progress)

Author

Taehoon Kim / @carpedm20

Comments
  • Added autoencoded image output when run in test mode

    Running in test mode was not producing any output, so I added an initial implementation of Trainer.test(). It simply autoencodes a batch of inputs and saves both the inputs and their autoencoded outputs.

    This can be combined with '--split valid' to test autoencoding on validation data. To make these tests more repeatable, shuffling is disabled in test mode and the images are processed in sorted order.

    Sample output attached: validation inputs and their autoencoded reconstructions.
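
    A minimal sketch of what such a test-mode pass might look like in modern PyTorch (hypothetical names, not the PR's actual code; in BEGAN the discriminator D is itself an autoencoder, so D(x) returns a reconstruction of x):

        import torch
        from torchvision.utils import save_image

        def test(D, loader, out_dir):
            # autoencode one batch and save inputs alongside reconstructions
            for step, x in enumerate(loader):
                with torch.no_grad():   # inference only
                    ae_x = D(x)
                save_image(x, '{}/{}_input.png'.format(out_dir, step))
                save_image(ae_x, '{}/{}_autoencoded.png'.format(out_dir, step))
                break  # one batch is enough for a visual check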

    opened by dribnet 2
  • optimize D and G independently

    In the BEGAN paper, it says: "Implementation note: while the updates are made simultaneously, they are still adversarial. As such, it is important to optimize θ_D and θ_G independently with respect to their corresponding losses." Maybe this would give better results?
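
    For reference, independent updates in PyTorch amount to giving each network its own optimizer and detaching the generator output inside the discriminator's loss. A minimal sketch, assuming G, D, z, x, and k_t already exist (not the repo's actual trainer code):

        import torch.optim as optim

        def l1(a, b):
            # pixelwise L1 loss, mean-reduced (stand-in for the repo's criterion)
            return (a - b).abs().mean()

        d_optim = optim.Adam(D.parameters(), lr=1e-4)
        g_optim = optim.Adam(G.parameters(), lr=1e-4)

        fake = G(z)

        # discriminator step: detach() keeps these gradients out of theta_G
        d_loss = l1(D(x), x) - k_t * l1(D(fake.detach()), fake.detach())
        d_optim.zero_grad()
        d_loss.backward()
        d_optim.step()

        # generator step: theta_G is updated only against its own loss
        # (gradients that leak into D here are cleared by the next zero_grad)
        g_loss = l1(D(fake), fake)
        g_optim.zero_grad()
        g_loss.backward()
        g_optim.step()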

    opened by ggsonic 1
  • Add support for CelebA test/train split

    The current training routine uses all of the CelebA images, including the canonical CelebA validation and test data. This change adds support for CelebA splits and by default trains only on the CelebA training images.

    This was done by moving the downloaded data into an "images" subdirectory, and then creating subdirectories of splits with relative symlinks to the original files. This format required only light changes to the config and data_loader.
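
    A sketch of how such a split layout could be built with relative symlinks (paths and the helper are assumptions; the 162,770 count matches the canonical CelebA train split and the log output in the issues below):

        import os

        root = 'data/CelebA'
        names = sorted(os.listdir(os.path.join(root, 'images')))
        split_dir = os.path.join(root, 'splits', 'train')
        os.makedirs(split_dir)
        for name in names[:162770]:
            # resolves as data/CelebA/splits/train/<name> -> ../../images/<name>
            os.symlink(os.path.join('..', '..', 'images', name),
                       os.path.join(split_dir, name))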

    Note that honoring the standard splits in generative modeling is important for a few reasons. Perhaps most importantly, it allows fair comparisons across papers when visually inspecting image samples; it is unfair to compare models trained on 200k images against those exposed to only a 160k subset. It is also relevant for models with an encoder, like BEGAN: it allows one to evaluate a trained model by performing reconstructions on out-of-sample validation data.

    opened by dribnet 1
  • The Generator can't generate a face, why?

    Excuse me, I tried to run the BEGAN-pytorch code, but the generated images are a mess, like the attached sample: result_200K

    Could you tell me the reason? Must I use BEGAN-tensorflow to get good results? Thanks!

    opened by Ricelll 0
  • forward() takes at least 3 arguments (2 given)

    Hi, @dribnet @carpedm20 @scott-vsi

    I met this error:

    ctilab@ctilab:~/BEGAN-pytorch$ python main.py --dataset=CelebA --num_gpu=1 --use_tensorboard=True
    Found 162770 images in subfolders of: data/CelebA/splits/train
    [*] MODEL dir: logs/CelebA_1008_173315
    [*] PARAM path: logs/CelebA_1008_173315/params.json
      0%| | 0/500000 [00:00<?, ?it/s]
    /usr/local/lib/python2.7/dist-packages/torch/nn/modules/upsampling.py:135: UserWarning: nn.UpsamplingNearest2d is deprecated. Use nn.Upsample instead.
      warnings.warn("nn.UpsamplingNearest2d is deprecated. Use nn.Upsample instead.")

    Traceback (most recent call last):
      File "main.py", line 42, in <module>
        main(config)
      File "main.py", line 34, in main
        trainer.train()
      File "/home/ctilab/BEGAN-pytorch/trainer.py", line 160, in train
        d_loss_real = l1(AE_x, x)
      File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py", line 224, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/ctilab/BEGAN-pytorch/models.py", line 117, in forward
        return backend_fn(self.size_average)(input, target)
    TypeError: forward() takes at least 3 arguments (2 given)

    What am I doing wrong?
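
    One possible workaround (an assumption, not a verified fix for this repo: the traceback suggests the custom L1 wrapper in models.py calls a torch backend whose signature changed between versions) is to use the built-in loss module directly:

        import torch.nn as nn

        l1 = nn.L1Loss()            # built-in pixelwise L1, mean-reduced
        d_loss_real = l1(AE_x, x)   # same call site as in trainer.py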

    opened by bemoregt 0
  • numpy.core.multiarray failed to import

    Hi, @dribnet @carpedm20 @scott-vsi

    I met this error:

    ctilab@ctilab:~/BEGAN-pytorch$ python main.py --dataset=images --num_gpu=0
    RuntimeError: module compiled against API version 0xb but this version of numpy is 0xa
    Traceback (most recent call last):
      File "main.py", line 1, in <module>
        import torch
      File "/usr/local/lib/python2.7/dist-packages/torch/__init__.py", line 53, in <module>
        from torch._C import *
    ImportError: numpy.core.multiarray failed to import

    What am I doing wrong?
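
    This error generally means the installed numpy is older than the ABI torch was compiled against; upgrading numpy usually resolves it:

    $ pip install --upgrade numpy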

    opened by bemoregt 3
  • One suggestion may be useful for learning

    Hi! Referring to your code, I made the changes below in my own code.

    (attachments: 1, 2, 3)

    First, I changed the initialization. Then I changed how the loss is computed. In my code, both parts of g_loss are used to learn G's parameters, but in your code only the AE_G part may be used to learn G's parameters. I'm not sure about this, but maybe you can try it; it's not a big change. Sorry for my poor English!
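
    For context, the BEGAN paper defines both losses in terms of the discriminator's autoencoder (reconstruction) loss L, so the "parts" in question look roughly like this (a sketch in the paper's notation):

        # L(.) is D's pixelwise autoencoder loss; z_D, z_G are noise samples
        d_loss = L(x) - k_t * L(G(z_D))   # drives theta_D
        g_loss = L(G(z_G))                # drives theta_G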

    opened by Tyhye 4
  • explosion at 14000/500000

    Not sure exactly what occurred, but while training with your code, default parameters, and tensorboard enabled, training fell apart at step 14000/500000.

    (attached: two screenshots, beganoops and beganoops2)

    The network then began outputting completely black images for all three _D, _D, and _D_fake.

    Not much more insight from my end, I'm afraid. Trained on a Titan XP with pytorch 0.1.10 (py27_1cu80 [cuda80], soumith channel), torchvision 0.1.6, and Python 2.7.13 on Ubuntu 16.04.

    Loss_D went from 0.0436 to 1.45, and L_x went from 0.0436 to 1.522.
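
    For reference, BEGAN maintains equilibrium with a proportional controller on k_t, and the paper's convergence measure can be watched for exactly this kind of divergence. A sketch of the paper's update equations (lambda_k and gamma are the paper's hyperparameters, typically 0.001 and 0.5):

        # per training step, with L_real = L(x) and L_fake = L(G(z_G))
        k_t = k_t + lambda_k * (gamma * L_real - L_fake)
        k_t = min(max(k_t, 0.0), 1.0)   # clamp k_t to [0, 1]

        # convergence measure: spikes here signal instability like the one above
        M_global = L_real + abs(gamma * L_real - L_fake)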

    opened by frolf 7