PRIME: A Few Primitives Can Boost Robustness to Common Corruptions

Overview

This is the official repository of PRIME, the data augmentation method introduced in the paper "PRIME: A Few Primitives Can Boost Robustness to Common Corruptions". PRIME is a generic, plug-and-play data augmentation scheme that composes simple families of max-entropy image transformations to confer robustness to common corruptions. PRIME leads to significant improvements in corruption robustness on multiple benchmarks.
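
As a toy illustration of what such a primitive looks like, the sketch below applies a random depthwise filter to an image and blends the result with the original. This is a simplified stand-in, not the repository's implementation: the actual primitives are sampled from parameterized maximum-entropy distributions over spectral, spatial and color transformations.

import torch
import torch.nn.functional as F

def random_filter_primitive(img, kernel_size=3, strength=0.25):
    # Toy stand-in for one PRIME-style primitive: convolve the image
    # with a random depthwise filter and blend with the original.
    # img: float tensor of shape (C, H, W) with values in [0, 1].
    c = img.shape[0]
    kernel = torch.randn(c, 1, kernel_size, kernel_size)
    kernel = kernel / kernel.abs().sum(dim=(2, 3), keepdim=True)
    filtered = F.conv2d(img.unsqueeze(0), kernel,
                        padding=kernel_size // 2, groups=c).squeeze(0)
    return ((1 - strength) * img + strength * filtered).clamp(0.0, 1.0)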

Pre-trained models

We provide models trained with PRIME on the CIFAR-10/100 and ImageNet datasets. You can download them from here.
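
A minimal loading sketch, assuming the checkpoints are standard PyTorch state dicts (the file name and key layout below are assumptions; adapt them to the downloaded file):

import torch
from torchvision.models import resnet50

# Load a PRIME-trained ImageNet checkpoint into a standard ResNet-50.
# "prime_resnet50.pth" and the "state_dict" key are assumptions.
model = resnet50()
state = torch.load("prime_resnet50.pth", map_location="cpu")
model.load_state_dict(state.get("state_dict", state))
model.eval()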

Setup

This code has been tested with Python 3.8.5 and PyTorch 1.9.1. To install the required dependencies, run:

$ pip install -r requirements.txt

For corruption robustness evaluation, download and extract the CIFAR-10-C, CIFAR-100-C and ImageNet-C datasets from here.
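
Once extracted, each corruption dataset is a set of .npy files, one per corruption type, plus a shared labels.npy (for CIFAR-10-C, each file holds 50,000 images: severities 1-5 stacked in consecutive blocks of 10,000). A minimal evaluation sketch for CIFAR-10-C, assuming this layout:

import numpy as np
import torch

cc_dir = "CIFAR-10-C"  # path assumption
labels = torch.from_numpy(np.load(f"{cc_dir}/labels.npy"))

def corruption_accuracy(model, corruption, device="cuda"):
    # One .npy per corruption: shape (50000, 32, 32, 3), uint8.
    data = np.load(f"{cc_dir}/{corruption}.npy")
    images = torch.from_numpy(data).permute(0, 3, 1, 2).float() / 255.0
    correct = 0
    with torch.no_grad():
        for i in range(0, len(images), 256):
            # Apply the same normalization used at training time here.
            batch = images[i:i + 256].to(device)
            preds = model(batch).argmax(dim=1).cpu()
            correct += (preds == labels[i:i + 256]).sum().item()
    return correct / len(images)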

Usage

We provide a script train.py for PRIME training on CIFAR-10/100, ImageNet-100 and ImageNet. For example, to train a ResNet-50 network on ImageNet with PRIME, run:

$ python -u train.py --config=config/imagenet_cfg.py \
    --config.save_dir=<save_dir> \
    --config.data_dir=<data_dir> \
    --config.cc_dir=<common_corr_dir> \
    --config.use_prime=True

Detailed configuration options can be found in config.
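
For example, a CIFAR-10 run follows the same pattern (the config file name below is an assumption; check the config directory for the exact one):

$ python -u train.py --config=config/cifar10_cfg.py \
    --config.save_dir=<save_dir> \
    --config.data_dir=<data_dir> \
    --config.use_prime=True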

Results

Results on ImageNet/ImageNet-100 with a ResNet-50/ResNet-18 (†: without JSD loss)

Dataset        Method        Clean (↑)   CC Acc (↑)   mCE (↓)
ImageNet       Standard      76.1        38.1         76.1
ImageNet       AugMix        77.5        48.3         65.3
ImageNet       DeepAugment   76.7        52.6         60.4
ImageNet       PRIME†        77.0        55.0         57.5
ImageNet-100   Standard      88.0        49.7         100
ImageNet-100   AugMix        88.7        60.7         79.1
ImageNet-100   DeepAugment   86.3        67.7         68.1
ImageNet-100   PRIME         85.9        71.6         61.0

Results on CIFAR-10/100 with a ResNet-18

Dataset     Method     Clean (↑)   CC Acc (↑)   mCE (↓)
CIFAR-10    Standard   95.0        74.0         24.0
CIFAR-10    AugMix     95.2        88.6         11.4
CIFAR-10    PRIME      93.1        89.0         11.0
CIFAR-100   Standard   76.7        51.9         48.1
CIFAR-100   AugMix     78.2        64.9         35.1
CIFAR-100   PRIME      77.6        68.3         31.7
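
In these tables, CC Acc is the average accuracy over all corruptions and severities, and mCE is the mean corruption error of Hendrycks & Dietterich (2019): per-corruption errors are summed over severities, normalized by a reference model (AlexNet on ImageNet; the standard-trained model on ImageNet-100, which is why its mCE is 100), and averaged. A minimal computation sketch:

# mCE following Hendrycks & Dietterich (2019): for each corruption, sum
# the top-1 errors over the five severities, normalize by the same sum
# for a reference model, and average the ratios over all corruptions.
def mce(model_err, ref_err):
    # model_err, ref_err: dicts mapping corruption name -> list of five
    # top-1 error rates in [0, 1], one per severity level.
    ces = [100.0 * sum(model_err[c]) / sum(ref_err[c]) for c in model_err]
    return sum(ces) / len(ces)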

Citing this work

@article{PRIME2021,
    title = {PRIME: A Few Primitives Can Boost Robustness to Common Corruptions}, 
    author = {Apostolos Modas and Rahul Rade and Guillermo {Ortiz-Jim\'enez} and Seyed-Mohsen {Moosavi-Dezfooli} and Pascal Frossard},
    year = {2021},
    journal = {arXiv preprint arXiv:2112.13547}
}