This repository contains the source code and pre-trained models for the paper "An Empirical Study on GANs with Margin Cosine Loss and Relativistic Discriminator".

Overview

This is a PyTorch implementation of the paper "An Empirical Study on GANs with Margin Cosine Loss and Relativistic Discriminator".

Requirements

  • Python 3.7.3
  • PyTorch 1.2.0
  • TensorFlow 2.0.0
  • torchtext 0.4.0
  • torchvision 0.4.0
  • mnist

Data preparation

Training

  • Run 1_train.sh to train models with our proposed RMCosGAN loss, along with the other loss functions, on the four datasets (a sketch of the loss follows below).
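For orientation, here is a minimal sketch of how a margin cosine loss can be combined with a relativistic average discriminator, in the spirit of the paper. It assumes a CosFace-style scaled cosine score with scale s and margin m paired RaGAN-style; the function names, default values, and the exact pairing are illustrative assumptions, not the repository's exact implementation (see the training scripts for that).

```python
import torch
import torch.nn.functional as F

def cosine_score(features, weight, s=10.0, m=0.2, real=False):
    # Hypothetical CosFace-style score: scaled cosine similarity between the
    # discriminator's feature embedding (B, d) and a learned class vector
    # `weight` of shape (1, d); the margin m is subtracted for real samples.
    cos = F.linear(F.normalize(features), F.normalize(weight)).squeeze(1)
    return s * ((cos - m) if real else cos)

def rmcosgan_d_loss(feat_real, feat_fake, weight, s=10.0, m=0.2):
    # Relativistic average pairing (RaGAN-style): each real score is compared
    # against the mean fake score, and vice versa, before the BCE loss.
    sr = cosine_score(feat_real, weight, s, m, real=True)
    sf = cosine_score(feat_fake, weight, s, m)
    loss_real = F.binary_cross_entropy_with_logits(sr - sf.mean(), torch.ones_like(sr))
    loss_fake = F.binary_cross_entropy_with_logits(sf - sr.mean(), torch.zeros_like(sf))
    return loss_real + loss_fake
```

The generator loss is the mirror image of the discriminator loss, with the target labels for the two relativistic terms swapped.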

Appendix

Network Architectures

DCGAN Architecture for CIFAR-10, MNIST and STL-10 datasets

| Operation | Filter Units | Non-Linearity | Normalization |
|-----------|--------------|---------------|---------------|
| **Generator G(z)** | | | |
| Linear | 512 | None | None |
| Trans. Conv2D | 256 | ReLU | Batch |
| Trans. Conv2D | 128 | ReLU | Batch |
| Trans. Conv2D | 64 | ReLU | Batch |
| Trans. Conv2D | 3 | Tanh | None |
| **Discriminator D(x)** | | | |
| Conv2D | 64 | Leaky-ReLU | Spectral |
| Conv2D | 64 | Leaky-ReLU | Spectral |
| Conv2D | 128 | Leaky-ReLU | Spectral |
| Conv2D | 128 | Leaky-ReLU | Spectral |
| Conv2D | 256 | Leaky-ReLU | Spectral |
| Conv2D | 256 | Leaky-ReLU | Spectral |
| Conv2D | 512 | Leaky-ReLU | Spectral |
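As a concrete reading of the table above, here is a minimal PyTorch sketch of the two networks for 32x32 inputs. The table only fixes the filter counts, non-linearities, and normalization; the kernel sizes, strides, latent dimension, and final feature pooling below are assumptions following common DCGAN/SNGAN conventions.

```python
import torch.nn as nn
from torch.nn.utils import spectral_norm

def sn_conv(cin, cout, k, s, p):
    # Conv2D wrapped with spectral normalization, per the table.
    return spectral_norm(nn.Conv2d(cin, cout, k, s, p))

class Generator(nn.Module):
    def __init__(self, z_dim=128):
        super().__init__()
        self.fc = nn.Linear(z_dim, 512 * 4 * 4)  # Linear, 512 maps, no norm
        self.net = nn.Sequential(
            nn.ConvTranspose2d(512, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.ReLU(True),  # 4 -> 8
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),  # 8 -> 16
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),    # 16 -> 32
            nn.ConvTranspose2d(64, 3, 3, 1, 1), nn.Tanh(),                               # RGB output
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 512, 4, 4))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            sn_conv(3, 64, 3, 1, 1),    nn.LeakyReLU(0.1),
            sn_conv(64, 64, 4, 2, 1),   nn.LeakyReLU(0.1),   # 32 -> 16
            sn_conv(64, 128, 3, 1, 1),  nn.LeakyReLU(0.1),
            sn_conv(128, 128, 4, 2, 1), nn.LeakyReLU(0.1),   # 16 -> 8
            sn_conv(128, 256, 3, 1, 1), nn.LeakyReLU(0.1),
            sn_conv(256, 256, 4, 2, 1), nn.LeakyReLU(0.1),   # 8 -> 4
            sn_conv(256, 512, 3, 1, 1), nn.LeakyReLU(0.1),
        )

    def forward(self, x):
        return self.net(x).sum(dim=[2, 3])  # pooled 512-d feature per image
```

The CAT-dataset variant in the next table follows the same pattern, only wider and with one more stride-2 stage on each side to handle the larger image resolution.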

DCGAN Architecture for CAT dataset

| Operation | Filter Units | Non-Linearity | Normalization |
|-----------|--------------|---------------|---------------|
| **Generator G(z)** | | | |
| Trans. Conv2D | 1024 | ReLU | Batch |
| Trans. Conv2D | 512 | ReLU | Batch |
| Trans. Conv2D | 256 | ReLU | Batch |
| Trans. Conv2D | 128 | ReLU | Batch |
| Trans. Conv2D | 3 | Tanh | None |
| **Discriminator D(x)** | | | |
| Conv2D | 128 | Leaky-ReLU | Spectral |
| Conv2D | 256 | Leaky-ReLU | Spectral |
| Conv2D | 512 | Leaky-ReLU | Spectral |
| Conv2D | 1024 | Leaky-ReLU | Spectral |

Experimental results

60 randomly-generated images with RMCosGAN (FID = 31.34) trained on the CIFAR-10 dataset

60 randomly-generated images with RMCosGAN (FID = 13.17) trained on the MNIST dataset

60 randomly-generated images with RMCosGAN (FID = 52.16) trained on the STL-10 dataset

60 randomly-generated images with RMCosGAN (FID = 9.48) trained on the CAT dataset
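All scores above are Fréchet Inception Distance (FID) values, where lower is better. For reference, below is a minimal numpy/scipy sketch of the standard FID formula over pre-extracted Inception features; the repository's own metric_fid.py computes the features with a TensorFlow Inception graph, and the function name here is illustrative.

```python
import numpy as np
from scipy import linalg

def fid(feats_real, feats_fake):
    # FID = ||mu_r - mu_f||^2 + Tr(C_r + C_f - 2 (C_r C_f)^(1/2)),
    # computed over Inception feature matrices of shape (N, 2048).
    mu_r, mu_f = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    c_r = np.cov(feats_real, rowvar=False)
    c_f = np.cov(feats_fake, rowvar=False)
    covmean = linalg.sqrtm(c_r @ c_f)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard tiny imaginary parts from sqrtm
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(c_r + c_f - 2.0 * covmean))
```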

Citation

Please cite our paper if you use RMCosGAN:

@article{RMCosGAN,
  title={An Empirical Study on GANs with Margin Cosine Loss and Relativistic Discriminator},
  author={Cuong Nguyen and Tien-Dung Cao and Tram Truong-Huu and Binh T. Nguyen},
  journal={},
  year={}
}

If you find this implementation useful, please cite or acknowledge this repository in your work.

Contact

  • Cuong Nguyen ([email protected])
  • Tien-Dung Cao ([email protected])
  • Tram Truong-Huu ([email protected])
  • Binh T. Nguyen ([email protected])
