A PyTorch implementation of the paper mixup: Beyond Empirical Risk Minimization.

Overview

This is an unofficial PyTorch implementation of mixup: Beyond Empirical Risk Minimization. The code is adapted from PyTorch CIFAR.

Results

I tested only on CIFAR 10 and CIFAR 100, using a PreAct ResNet-18. For mixup, alpha is set to its default value of 1, meaning the interpolation weight is sampled uniformly between zero and one. Each setting is trained for 200 epochs. The learning rate is 0.1 (epochs 1-100), 0.01 (epochs 101-150), and 0.001 (epochs 151-200). The batch size is 128.

Dataset     Setting     Acc.
CIFAR 10    no mixup    94.97%
CIFAR 10    mixup       95.53%
CIFAR 100   no mixup    76.53%
CIFAR 100   mixup       77.83%
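
For reference, the interpolation described above comes down to something like the following minimal sketch (illustrative names; this is not the repository's shuffle_minibatch function, which also handles the no-mixup case):

import numpy as np
import torch
import torch.nn.functional as F

def mixup_batch(inputs, targets, n_class, alpha=1.0):
    # inputs:  float tensor of shape (batch_size, 3, H, W)
    # targets: long tensor of shape (batch_size,) with class indices
    # With alpha = 1, Beta(1, 1) is the uniform distribution on [0, 1].
    lam = float(np.random.beta(alpha, alpha))
    perm = torch.randperm(inputs.size(0))
    mixed_inputs = lam * inputs + (1.0 - lam) * inputs[perm]
    one_hot = F.one_hot(targets, num_classes=n_class).float()
    mixed_targets = lam * one_hot + (1.0 - lam) * one_hot[perm]
    return mixed_inputs, mixed_targets

Because the mixed targets are soft labels, the training loss is a cross entropy computed against them, e.g. -(mixed_targets * log_softmax(outputs)).sum(1).mean(), rather than the usual hard-label criterion.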

[Figure: CIFAR 10 test accuracy evolution]

[Figure: CIFAR 100 test accuracy evolution]

Usage

# Train and test CIFAR 10 with mixup.
python main_cifar10.py --mixup --exp='cifar10_mixup'
# Train and test CIFAR 10 without mixup.
python main_cifar10.py --exp='cifar10_nomixup'
# Train and test CIFAR 100 with mixup.
python main_cifar100.py --mixup --exp='cifar100_mixup'
# Train and test CIFAR 100 without mixup.
python main_cifar100.py --exp='cifar100_nomixup'

Comments
  • weight decay

    Hi Harry,

    Thanks for your implementation of our work!

    Because mixup has a strong regularization effect, the best weight decay for mixup is usually smaller than for normal training. For example, you should see the mixup accuracy improve if the weight decay is set to 1e-4, which is what we used in the paper.

    We didn't search for the best weight decay though, so other choices may lead to further improvement.
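
    In PyTorch this would correspond to something like the sketch below (a minimal example: net is the model, lr matches the initial learning rate above, and momentum=0.9 is an assumed typical value):

    import torch.optim as optim

    # weight_decay=1e-4 as suggested above; mixup's strong regularization
    # means a smaller value than usual tends to work better.
    # momentum=0.9 is a typical CIFAR value (assumed, not verified here).
    optimizer = optim.SGD(net.parameters(), lr=0.1, momentum=0.9,
                          weight_decay=1e-4)
    # The step schedule described in the README (0.1 -> 0.01 -> 0.001).
    scheduler = optim.lr_scheduler.MultiStepLR(optimizer,
                                               milestones=[100, 150],
                                               gamma=0.1)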

    Best,

    opened by hongyi-zhang 1
  • There is a mistake in shuffle_minibatch function

    Hello. Thank you for your share.

    First of all, my English is not very fluent, so please bear with me.

    There is a mistake in the shuffle_minibatch function in main_cifar10.py and main_cifar100.py:

    def shuffle_minibatch(inputs, targets, n_class=88, mixup=True):
        """Shuffle a minibatch and do linear interpolation between images and labels.
        Args:
            inputs: a numpy array of images with size batch_size x H x W x 3.
            targets: a numpy array of labels with size batch_size x 1.
            mixup: a boolen as whether to do mixup or not. If mixup is True, we
                sample the weight from beta distribution using parameter alpha=1,
                beta=1. If mixup is False, we set the weight to be 1 and 0
                respectively for the randomly shuffled mini-batches.
        """
    

    According to the docstring, the inputs parameter has shape batch_size x H x W x 3, but the actual input shape is batch_size x 3 x H x W, isn't it?

    I think you should update the docstring; the quick check below confirms the layout.
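
    For reference, a standard torchvision CIFAR-10 DataLoader yields channels-first batches (trainloader here is assumed to be the usual loader with batch size 128):

    inputs, targets = next(iter(trainloader))
    print(inputs.shape)  # torch.Size([128, 3, 32, 32]), i.e. batch_size x 3 x H x W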

    Thank you for your work again.

    opened by tmdrn9 0
  • There is a typo in Readme.md file

    Hello.

    Thank you for your nice implementation and share.

    There is a typo in the Usage section of the README.md file:

    #Train and test CIFAR 10 with mixup.
    python main_cifar10.py --mixup --exp='cifar10_nomixup'
    

    I think you intended 'cifar10_mixup' for the exp name.

    Thank you for your work again.

    opened by Neonsign247 0
Owner
Harry Yang