(L2ID@CVPR2021) Boosting Co-teaching with Compression Regularization for Label Noise

Overview

Nested-Co-teaching

(L2ID@CVPR2021) PyTorch implementation of the paper "Boosting Co-teaching with Compression Regularization for Label Noise"

[PDF]

If our project is helpful for your research, please consider citing:

@inproceedings{chen2021boosting, 
	  title={Boosting Co-teaching with Compression Regularization for Label Noise}, 
	  author={Chen, Yingyi and Shen, Xi and Hu, Shell Xu and Suykens, Johan AK}, 
	  booktitle={CVPR Learning from Limited and Imperfect Data (L2ID) workshop}, 
	  year={2021} 
	}

Our model can be trained on a single GeForce GTX 1080Ti GPU (12 GB). This code has been tested with PyTorch 1.7.1.

Table of Contents

1. Toy Results

The nested regularization allows us to learn an ordered representation, which is useful for combating noisy labels. In this toy example, we aim to learn a projection from X to Y given noisy pairs. With nested regularization, the most informative reconstruction is concentrated in the first few channels (a minimal sketch of the regularizer follows the figure).

[GIF] Baseline (same MLP) | [GIF] Nested200, 1st channel
[GIF] Nested200, first 10 channels | [GIF] Nested200, first 100 channels
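For intuition, here is a minimal, illustrative PyTorch module for nested dropout in the spirit of Rippel et al. (2014): a channel index k is sampled per example and every channel after k is zeroed, so the leading channels are forced to carry the most information. The geometric parameter and the per-example sampling are assumptions made for this sketch; the code under nested/ is the actual implementation.

import torch
import torch.nn as nn

class NestedDropout(nn.Module):
    # Illustrative nested dropout: zero all channels after a sampled index k.
    def __init__(self, p=0.01):
        super().__init__()
        self.p = p  # parameter of the geometric distribution over k

    def forward(self, x):  # x: (batch, channels, ...)
        if not self.training:
            return x
        c = x.size(1)
        # sample k per example from a (truncated) geometric distribution
        k = torch.distributions.Geometric(self.p).sample((x.size(0),)).long()
        k = k.clamp(max=c - 1)
        # mask[i, j] = 1 if j <= k[i] else 0
        idx = torch.arange(c, device=x.device).view(1, c)
        mask = (idx <= k.view(-1, 1)).to(x.dtype)
        return x * mask.view(x.size(0), c, *([1] * (x.dim() - 2)))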

2. Results on Clothing1M and ANIMAL-10N

Clothing1M [Xiao et al., 2015]

  • We report the average accuracy and standard deviation over three runs (%) on the test set of Clothing1M [Xiao et al., 2015]. Results marked with "*" use either a balanced subset or a balanced loss.
Methods | acc@1 | result_ref/download
CE | 67.2 | [Wei et al., 2020]
F-correction [Patrini et al., 2017] | 68.9 | [Wei et al., 2020]
Decoupling [Malach and Shalev-Shwartz, 2017] | 68.5 | [Wei et al., 2020]
Co-teaching [Han et al., 2018] | 69.2 | [Wei et al., 2020]
Co-teaching+ [Yu et al., 2019] | 59.3 | [Wei et al., 2020]
JoCoR [Wei et al., 2020] | 70.3 | --
JO [Tanaka et al., 2018] | 72.2 | --
Dropout* [Srivastava et al., 2014] | 72.8 | --
PENCIL* [Yi and Wu, 2019] | 73.5 | --
MLNT [Li et al., 2019] | 73.5 | --
PLC* [Zhang et al., 2021] | 74.0 | --
DivideMix* [Li et al., 2020] | 74.8 | --
Nested* (Ours) | 73.1 ± 0.3 | model
Nested + Co-teaching* (Ours) | 74.9 ± 0.2 | model

ANIMAL-10N [Song et al., 2019]

  • We provide test set accuracy (%) on ANIMAL-10N [Song et al., 2019], reporting the average accuracy and standard deviation over three runs.
Methods | acc@1 | result_ref/download
CE | 79.4 ± 0.1 | [Song et al., 2019]
Dropout [Srivastava et al., 2014] | 81.3 ± 0.3 | --
SELFIE [Song et al., 2019] | 81.8 ± 0.1 | --
PLC [Zhang et al., 2021] | 83.4 ± 0.4 | --
Nested (Ours) | 81.3 ± 0.6 | model
Nested + Co-teaching (Ours) | 84.1 ± 0.1 | model

3. Datasets

Clothing1M

To download the Clothing1M dataset [Xiao et al., 2015], please refer to here. Once it is downloaded, put it into ./data/. The directory structure should be:

./data/Clothing1M
├── noisy_train
├── clean_val
└── clean_test

After unzipping, generate two random Clothing1M noisy subsets for training:

cd data/
# generate two random subsets for training
python3 clothing1M_rand_subset.py --name noisy_rand_subtrain1 --data-dir ./Clothing1M/ --seed 123

python3 clothing1M_rand_subset.py --name noisy_rand_subtrain2 --data-dir ./Clothing1M/ --seed 321

Please refer to data/gen_data.sh for more details.
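For reference, the kind of seeded, per-class random draw these commands perform might look like the following minimal sketch. The sampling ratio and file layout here are assumptions made for illustration; data/clothing1M_rand_subset.py is the authoritative logic.

import random, shutil
from pathlib import Path

def make_random_subset(src_dir, dst_dir, ratio=0.1, seed=123):
    # Copy a seeded random fraction of each class folder (illustrative only).
    random.seed(seed)
    for class_dir in Path(src_dir).iterdir():
        if not class_dir.is_dir():
            continue
        images = sorted(class_dir.glob("*.jpg"))
        if not images:
            continue
        subset = random.sample(images, k=max(1, int(len(images) * ratio)))
        out = Path(dst_dir) / class_dir.name
        out.mkdir(parents=True, exist_ok=True)
        for img in subset:
            shutil.copy(img, out / img.name)

# e.g. make_random_subset("./Clothing1M/noisy_train",
#                         "./Clothing1M/noisy_rand_subtrain1", seed=123)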

ANIMAL-10N

To download the ANIMAL-10N dataset [Song et al., 2019], please refer to here. It includes one training set and one test set. Once it is downloaded, put it into ./data/. The directory structure should be:

./data/Animal10N/
├── train
└── test
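This layout matches the one-subfolder-per-class convention of torchvision's ImageFolder, so a quick sanity check after arranging the data could look like the following (a hypothetical check, on the assumption that the repo's loader follows that convention):

from torchvision import datasets

# assumes one subfolder per class under train/ and test/
train_set = datasets.ImageFolder("./data/Animal10N/train")
test_set = datasets.ImageFolder("./data/Animal10N/test")
print(len(train_set), "train images,", len(test_set), "test images")
print("classes:", train_set.classes)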

4. Train

4.1. Stage One : Training Nested Dropout Networks

We first train two Nested Dropout networks separately to provide reliable base networks for the subsequent stage. You can run the training for this stage as follows:

  • For training networks on Clothing1M (ResNet-18); you can also train baseline/dropout networks for comparison. More details are provided in nested/run_clothing1m.sh.
cd nested/ 
# train one Nested network
python3 train_resnet.py --train-dir ../data/Clothing1M/noisy_rand_subtrain1/ --val-dir ../data/Clothing1M/clean_val/ --dataset Clothing1M --arch resnet18 --lrSchedule 5 --lr 0.02 --nbEpoch 30 --batchsize 448 --nested 100 --pretrained --freeze-bn --out-dir ./checkpoints/Cloth1M_nested100_lr2e-2_bs448_freezeBN_imgnet_model1 --gpu 0
  • For training networks on ANIMAL-10N (VGG-19+BN); you can also train baseline/dropout networks for comparison. More details are provided in nested/run_animal10n.sh.
cd nested/ 
python3 train_vgg.py --train-dir ../data/Animal10N/train/ --val-dir ../data/Animal10N/test/ --dataset Animal10N --arch vgg19-bn --lr-gamma 0.2 --batchsize 128 --warmUpIter 6000 --nested1 100 --nested2 100 --alter-train --out-dir ./checkpoints_animal10n/Animal10N_alter_nested100_100_vgg19bn_lr0.1_warm6000_bs128_model1 --gpu 0

4.2. Stage Two : Fine-tuning with Co-teaching

In this stage, the two trained networks are further fine-tuned with Co-teaching (a generic sketch of one co-teaching step is given after the commands below). You can run the training for this stage as follows:

  • For fine-tuning with Co-teaching on Clothing1M (ResNet-18) :
cd co_teaching_resnet/ 
python3 main.py --train-dir ../data/Clothing1M/noisy_rand_subtrain1/ --val-dir ../data/Clothing1M/clean_val/ --dataset Clothing1M --lrSchedule 5 --nGradual 0 --lr 0.002 --nbEpoch 30 --warmUpIter 0 --batchsize 448 --freeze-bn --forgetRate 0.3 --out-dir ./finetune_ckpt/Cloth1M_nested100_lr2e-3_bs448_freezeBN_fgr0.3_pre_nested100_100 --resumePthList ../nested/checkpoints/Cloth1M_nested100_lr2e-2_bs448_imgnet_freezeBN_model1_Acc0.735_K12 ../nested/checkpoints/Cloth1M_nested100_lr2e-2_bs448_imgnet_freezeBN_model2_Acc0.733_K15 --nested 100 --gpu 0

The two Nested ResNet-18 networks trained in stage one can be downloaded here: ckpt1, ckpt2. We also provide commands for training Co-teaching from scratch for comparisons in co_teaching_resnet/run_clothing1m.sh.

  • For fine-tuning with Co-teaching on ANIMAL-10N (VGG-19+BN) :
cd co_teaching_vgg/ 
python3 main.py --train-dir ../data/Animal10N/train/ --val-dir ../data/Animal10N/test/ --dataset Animal10N --arch vgg19-bn --lrSchedule 5 --nGradual 0 --lr 0.004 --nbEpoch 30 --warmUpIter 0 --batchsize 128 --freeze-bn --forgetRate 0.2 --out-dir ./finetune_ckpt/Animal10N_alter_nested100_lr4e-3_bs128_freezeBN_fgr0.2_pre_nested100_100_nested100_100 --resumePthList ../nested/checkpoints_animal10n/new_code_nested/Animal10N_alter_nested100_100_vgg19bn_lr0.1_warm6000_bs128_model1_Acc0.803_K14 ../nested/checkpoints_animal10n/new_code_nested/Animal10N_alter_nested100_100_vgg19bn_lr0.1_warm6000_bs128_model2_Acc0.811_K14 --nested1 100 --nested2 100 --alter-train --gpu 0

The two Nested VGG-19+BN networks trained in stage one can be downloaded here: ckpt1, ckpt2. We also provide commands for training Co-teaching from scratch for comparisons in co_teaching_vgg/run_animal10n.sh.
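For intuition, one co-teaching update can be sketched as follows: each network ranks the batch by its own per-sample loss, keeps the small-loss fraction (1 - forget rate), and its peer is updated on that selection, so errors are less likely to be mutually reinforced. This is a generic, illustrative sketch, not the exact loop in main.py.

import torch
import torch.nn.functional as F

def co_teaching_step(net1, net2, opt1, opt2, x, y, forget_rate=0.3):
    # One co-teaching update (illustrative sketch).
    n_keep = int((1.0 - forget_rate) * x.size(0))

    # each network ranks the batch by its own loss (no gradients needed here)
    with torch.no_grad():
        loss1 = F.cross_entropy(net1(x), y, reduction="none")
        loss2 = F.cross_entropy(net2(x), y, reduction="none")
    idx1 = torch.argsort(loss1)[:n_keep]  # small-loss samples picked by net1
    idx2 = torch.argsort(loss2)[:n_keep]  # small-loss samples picked by net2

    # cross-update: each network learns from its peer's small-loss selection
    opt1.zero_grad()
    F.cross_entropy(net1(x[idx2]), y[idx2]).backward()
    opt1.step()

    opt2.zero_grad()
    F.cross_entropy(net2(x[idx1]), y[idx1]).backward()
    opt2.step()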

5. Evaluation

To evaluate a model's ability to combat label noise, we compute classification accuracy on a provided clean test set.

5.1. Stage One : Nested Dropout Networks

Evaluation of the networks from stage one can be run as follows:

cd nested/ 
# for networks trained on Clothing1M
python3 test.py --test-dir ../data/Clothing1M/clean_test/ --dataset Clothing1M --arch resnet18 --resumePthList ./checkpoints/Cloth1M_nested100_lr2e-2_bs448_imgnet_freezeBN_model1_Acc0.735_K12 --KList 12 --gpu 0

More details can be found in nested/run_test.sh. Note that "_K12" in the model's name denotes the index of the optimal K; the optimal number of channels for the model is actually 13 (number of optimal channels = channel index + 1).
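Conceptually, evaluating at a given K keeps only the first k_idx + 1 channels of the nested feature. A sketch of that truncation (the repo's test.py handles this via --KList; the helper below is hypothetical):

import torch

def truncate_channels(feat: torch.Tensor, k_idx: int) -> torch.Tensor:
    # Zero every channel after index k_idx (keep the k_idx + 1 leading channels).
    out = feat.clone()
    out[:, k_idx + 1:] = 0.0
    return out

# e.g. for a checkpoint named "..._K12", truncate_channels(feat, 12)
# keeps the first 13 channels.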

5.2. Stage Two : Fine-tuning Co-teaching Networks

Evaluation of the networks from stage two can be run as follows.

  • Networks trained on Clothing1M:
cd co_teaching_resnet/ 
python3 test.py --test-dir ../data/Clothing1M/clean_test/ --dataset Clothing1M --arch resnet18 --resumePthList ./finetune_ckpt/Cloth1M_nested100_lr2e-3_bs448_freezeBN_fgr0.3_pre_nested100_100_model2_Acc0.749_K24 --KList 24 --gpu 0

More details can be found in co_teaching_resnet/run_test.sh.

  • Networks trained on ANIMAL-10N:
cd co_teaching_vgg/ 
python3 test.py --test-dir ../data/Animal10N/test/ --dataset Animal10N --resumePthList ./finetune_ckpt/Animal10N_nested100_lr4e-3_bs128_freezeBN_fgr0.2_pre_nested100_100_nested100_100_model1_Acc0.842_K12 --KList 12 --gpu 0

More details can be found in co_teaching_vgg/run_test.sh.
