PyTorch implementation of our paper accepted by NeurIPS 2021 -- Revisiting Discriminator in GAN Compression: A Generator-discriminator Cooperative Compression Scheme

Overview

Revisiting Discriminator in GAN Compression: A Generator-discriminator Cooperative Compression Scheme (NeurIPS 2021) (Link)

[overview figure]

Prerequisites

  • Linux
  • Python 3
  • CPU or NVIDIA GPU + CUDA and cuDNN

Getting Started

Installation

  • Install the dependencies:

    pip install -r requirements.txt
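
    Optionally, install into a fresh virtual environment first so the pinned versions do not clash with an existing setup (a generic sketch, not from the repo's own instructions; the environment name is hypothetical):

    python -m venv gcc-env          # hypothetical environment name
    source gcc-env/bin/activate
    pip install -r requirements.txt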

Data preparation

Prepare the following datasets (one possible directory layout is sketched after this list):

  • cityscapes
  • horse2zebra
  • celeb (CelebA)
  • COCO, Set5, Set14, B100, Urban100 (super-resolution benchmarks for SRGAN)
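
A minimal sketch of the layout, assuming the path mentioned in the CelebA issue in the Comments below (./dataset/celeb/train) and the trainA/trainB/testA/testB split convention of the pytorch-CycleGAN-and-pix2pix codebase this repo builds on; verify the exact paths against the dataset options used in ./scripts:

    dataset/
      cityscapes/      # paired data for pix2pix
      horse2zebra/     # trainA/, trainB/, testA/, testB/ for CycleGAN
      celeb/
        train/         # the split the code expects (see the CelebA comment below)
      ...              # plus the SR benchmarks above for SRGAN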

Pretrained Model

We provide a list of pre-trained models at link.

Pre-Training For Pruning

  • Run the following script to pretrain a pix2pix model on the cityscapes dataset for generator pruning. All scripts for SAGAN, CycleGAN, pix2pix, and SRGAN can be found in ./scripts (analogous commands for the other models are sketched after the command below).

    bash scripts/pix2pix/pretrain_for_pruning.sh
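
    Assuming the naming convention is parallel across models (an assumption; check the actual filenames in ./scripts), the corresponding commands would look like:

    bash scripts/cyclegan/pretrain_for_pruning.sh
    bash scripts/sagan/pretrain_for_pruning.sh
    bash scripts/srgan/pretrain_for_pruning.sh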

Training

  • Train the lightweight generator using GCC:

    bash scripts/pix2pix/train.sh

Testing

  • Test the resulting models; FID or mIoU will be calculated depending on the task. Taking the pix2pix generator on the cityscapes dataset as an example:

    bash scripts/pix2pix/test.sh
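
    FID is computed against statistics of the real images, which the repository's get_real_stat.py appears to produce (see the memory-usage comment below). Its flags are not documented here, so inspect them before running; this assumes the script exposes an argparse-style help message:

    python get_real_stat.py --help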

Acknowledgements

Our code builds on DMAD, Self-Attention-GAN, pytorch-CycleGAN-and-pix2pix, and a-PyTorch-Tutorial-to-Super-Resolution.

Comments
  • Request for Pretrained Model of SRGAN

    Hi.

    Do you have a plan to release the SRGAN model and its pruned counterpart, trained for 130 epochs with pretrain_for_pruning.sh and train.sh?

    Thanks!

    opened by kminsoo 7
  • CelebA Dataset

    Hi, thanks for your nice repository. I have two questions:

    1. I could not find the dataset downloading and partitioning code for CelebA in the [datasets](https://github.com/SJLeo/GCC/tree/main/datasets) directory. I downloaded it manually; it has train, valid, and test splits in its PyTorch implementation, while the code here expects the directory ./dataset/celeb/train. I would appreciate it if you could elaborate on how the two partitioning schemes relate.
    2. I ran the training and testing bash commands for SAGAN, and it seems that the phase argument is train in both settings. Am I doing something wrong, or is this intended, given that the dataset will be set to train in both situations at this line of the code?

    Thank you in advance.

    opened by Alii-Ganjj 5
  • Will the model's efficiency be further enhanced with separable convolutions?

    Thank you for your great work.

    I am wondering whether the model's efficiency can be further improved, because the ResNet blocks in pix2pix in your code use separable convolutions from EfficientNet. Did you ever test this with your method?

    Looking forward to your reply.

    opened by kasim0226 1
  • reducing the memory usage of get_real_stat.py

    @SJLeo hi there,

    get_real_stat.py requires a lot of memory when using a large dataset. One solution could be to store the image tensors on disk and then modify the dataloader to restore the tensors from disk.

    This issue was discussed here.

    opened by seekingdeep 1
  • Question: why run inference on the val set when using darts_discriminator?

    In train.py, line 147:

        if opt.darts_discriminator and model.teacher_model is not None:
            val_data = next(val_dataloader)
            model.set_input(val_data)
            model.clipping_mask_alpha()
            model.optimizer_netD_arch()

    Why do you run inference on the val set when using darts_discriminator? In this situation the length of val must equal the length of train, otherwise it causes a bug.

    opened by HuaZheLei 1
  • Cyclegan_binarysearch_cfg not working.

    Thank you for your great work.

    Unfortunately, I tried pruning after pretraining CycleGAN, but in the cyclegan_binarysearch_cfg code the number of channels in one layer became 0 and an error occurred. May I know how to solve it? Also, after setting cfg_AtoB in the cyclegan_binarysearch_cfg code, cfg_AtoB is manually set again on the next line; is there a reason for this?

    Thanks

    opened by DJun0003 1
Owner

Shaojie Li