Official code for "On the Frequency Bias of Generative Models", NeurIPS 2021

Overview

(Teaser figures: Generator Testbed and Discriminator Testbed)

This repository contains official code for the paper On the Frequency Bias of Generative Models.

You can find detailed usage instructions for analyzing standard GAN architectures and your own models below.

If you find our code or paper useful, please consider citing

@inproceedings{Schwarz2021NEURIPS,
  title = {On the Frequency Bias of Generative Models},
  author = {Schwarz, Katja and Liao, Yiyi and Geiger, Andreas},
  booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
  year = {2021}
}

Installation

Please note that this repo requires one GPU to run. First, make sure that you have all dependencies in place. The simplest way to do so is to use Anaconda.

You can create an Anaconda environment called fbias using

conda env create -f environment.yml
conda activate fbias

Generator Testbed

You can run a demo of our generator testbed via:

chmod +x ./scripts/demo_generator_testbed.sh
./scripts/demo_generator_testbed.sh

This will train the Generator of Progressive Growing GAN to regress a single image. Further, the training progression of the image regression, its spectrum, and the spectrum error is summarized in output/generator_testbed/baboon64/pggan/eval.

In general, to analyze the spectral properties of a generator architecture, you can train a model by running

python generator_testbed.py *EXPERIMENT_NAME* *PATH/TO/CONFIG*

This script should create a folder output/generator_testbed/*EXPERIMENT_NAME* where you can find the training progress. To evaluate the spectral properties of the trained model, run

python eval_generator.py *EXPERIMENT_NAME* --psnr --image-evolution --spectrum-evolution --spectrum-error-evolution

This will print the average PSNR of the regressed images and visualize image evolution, spectrum evolution, and spectrum error evolution in output/generator_testbed/*EXPERIMENT_NAME*/eval.
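The spectrum visualizations are based on reduced (azimuthally averaged) power spectra of the images. The snippet below is only a minimal sketch of how such a reduced spectrum can be computed for a single grayscale image; it is not the exact implementation used by eval_generator.py.

import numpy as np

def reduced_spectrum(img):
    # 2D power spectrum with the DC component shifted to the center.
    power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = power.shape
    cy, cx = h // 2, w // 2
    y, x = np.indices((h, w))
    radius = np.sqrt((y - cy) ** 2 + (x - cx) ** 2).astype(int)
    # Average the power over rings of (integer) spatial frequency.
    sums = np.bincount(radius.ravel(), weights=power.ravel())
    counts = np.bincount(radius.ravel())
    n = min(cy, cx) + 1  # keep frequencies up to the Nyquist radius
    return sums[:n] / counts[:n]

# Example: the spectrum of a 64x64 noise image has 33 frequency bins.
print(reduced_spectrum(np.random.default_rng(0).standard_normal((64, 64))).shape)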

Discriminator Testbed

You can run a demo of our discriminator testbed via:

chmod +x ./scripts/demo_discriminator_testbed.sh
./scripts/demo_discriminator_testbed.sh

This will train the Discriminator of Progressive Growing GAN to regress a single image. Further, the training progression of the image regression, its spectrum, and the spectrum error is summarized in output/discriminator_testbed/baboon64/pggan/eval.

In general, to analyze the spectral properties of a discriminator architecture, you can train a model by running

python discriminator_testbed.py *EXPERIMENT_NAME* *PATH/TO/CONFIG*

This script should create a folder output/discriminator_testbed/*EXPERIMENT_NAME* where you can find the training progress. To evaluate the spectral properties of the trained model, run

python eval_discriminator.py *EXPERIMENT_NAME* --psnr --image-evolution --spectrum-evolution --spectrum-error-evolution

This will print the average PSNR of the regressed images and visualize image evolution, spectrum evolution, and spectrum error evolution in output/discriminator_testbed/*EXPERIMENT_NAME*/eval.

Datasets

Toyset

You can generate a toy dataset with Gaussian peaks as spectrum by running

cd data
python toyset.py 64 100
cd ..

This creates a folder data/toyset/ and generates 100 images of resolution 64x64 pixels.
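As a rough illustration of the idea behind the toyset (the actual parameters and procedure in data/toyset.py may differ), an image with Gaussian peaks in its spectrum can be synthesized directly in frequency space:

import numpy as np

def toy_image(res=64, n_peaks=5, sigma=1.5, seed=0):
    rng = np.random.default_rng(seed)
    fy, fx = np.meshgrid(np.arange(res), np.arange(res), indexing="ij")
    magnitude = np.zeros((res, res))
    for _ in range(n_peaks):
        cy, cx = rng.integers(0, res, size=2)  # random peak position in frequency space
        magnitude += np.exp(-((fy - cy) ** 2 + (fx - cx) ** 2) / (2 * sigma ** 2))
    phase = rng.uniform(0, 2 * np.pi, size=(res, res))  # random phases
    img = np.real(np.fft.ifft2(magnitude * np.exp(1j * phase)))
    return (img - img.min()) / (img.max() - img.min())  # normalize to [0, 1]

print(toy_image().shape)  # (64, 64)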

CelebA-HQ

Download CelebA-HQ. Then, update data:root: *PATH/TO/CELEBA_HQ* in the config file.

Other datasets

The config setting data:root: *PATH/TO/DATA* needs to point to a folder with the training images. You can use any dataset which follows the folder structure

*PATH/TO/DATA*/xxx.png
*PATH/TO/DATA*/xxy.png
...

By default, the images are center-cropped and optionally resized to the resolution specified in the config file under data:resolution. Note that you can also use a subset of the images via data:subset.
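For reference, the sketch below shows the kind of preprocessing described above, i.e. a square center crop followed by a resize to data:resolution; the repository's actual dataloader may differ in its details.

from pathlib import Path
from PIL import Image

def load_image(path, resolution=64):
    img = Image.open(path).convert("RGB")
    side = min(img.size)  # square center crop
    left, top = (img.width - side) // 2, (img.height - side) // 2
    img = img.crop((left, top, left + side, top + side))
    return img.resize((resolution, resolution), Image.LANCZOS)

files = sorted(Path("*PATH/TO/DATA*").glob("*.png"))[:100]  # optional subset, cf. data:subset
images = [load_image(f) for f in files]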

Architectures

StyleGAN Support

In addition to Progressive Growing GAN, this repository supports analyzing the StyleGAN2 and StyleGAN3 architectures.

For this, you need to initialize the stylegan3 submodule by running

git pull --recurse-submodules
cd models/stylegan3/stylegan3
git submodule init
git submodule update
cd ../../../

Next, you need to install any additional requirements for this repo. You can do this by running

conda activate fbias
conda env update --file environment_sg3.yml --prune

You can now analyze the spectral properties of the StyleGAN architectures by running

# StyleGAN2
python generator_testbed.py baboon64/StyleGAN2 configs/generator_testbed/sg2.yaml
python discriminator_testbed.py baboon64/StyleGAN2 configs/discriminator_testbed/sg2.yaml
# StyleGAN3
python generator_testbed.py baboon64/StyleGAN3 configs/generator_testbed/sg3.yaml

Other architectures

To analyze any other network architecture, you can add the respective model file (or submodule) under models. You then need to write a wrapper class to integrate the architecture seamlessly into this code base. Examples of wrapper classes are given in

  • models/stylegan2_generator.py for the Generator
  • models/stylegan2_discriminator.py for the Discriminator
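As a starting point, a generator wrapper could look roughly like the skeleton below. The names and the toy network are purely illustrative; mirror the interface of models/stylegan2_generator.py for the signature that generator_testbed.py actually expects.

import torch
import torch.nn as nn

class MyGenerator(nn.Module):
    def __init__(self, z_dim=128, resolution=64):
        super().__init__()
        self.z_dim = z_dim
        self.resolution = resolution
        # Replace this toy network with the architecture you want to analyze.
        self.net = nn.Sequential(nn.Linear(z_dim, 3 * resolution * resolution), nn.Tanh())

    def forward(self, z):
        # Map latent codes (N, z_dim) to images (N, 3, H, W).
        return self.net(z).view(-1, 3, self.resolution, self.resolution)

g = MyGenerator()
print(g(torch.randn(4, g.z_dim)).shape)  # torch.Size([4, 3, 64, 64])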

Further Information

This repository builds on Lars Mescheder's awesome framework for GAN training. Further, we utilize code from the StyleGAN3 repo and GenForce.
