Sample Prior Guided Robust Model Learning to Suppress Noisy Labels

Overview

PGDF

This repo is the official implementation of our paper "Sample Prior Guided Robust Model Learning to Suppress Noisy Labels".

Citation

If you use this code/data for your research, please cite our paper "Sample Prior Guided Robust Model Learning to Suppress Noisy Labels".

@misc{chen2021sample,
      title={Sample Prior Guided Robust Model Learning to Suppress Noisy Labels}, 
      author={Wenkai Chen and Chuang Zhu and Yi Chen},
      year={2021},
      eprint={2112.01197},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Training

Take CIFAR-10 with 50% symmetric noise as an example:

First, please modify the data_path in presets.json to indicate the location of your dataset.
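The README only mentions that presets.json contains a data_path entry; the rest of its schema is not shown here. As a purely hypothetical illustration of editing such an entry programmatically (the path and any other keys are placeholders):

```python
import json

# Hypothetical preset entry: only the data_path key is mentioned in the
# README; the rest of the presets.json schema is not shown there.
preset = {"data_path": "/data/cifar-10"}

# Write it back out so the training scripts can read the updated path
with open("presets.json", "w") as f:
    json.dump(preset, f, indent=2)
```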

Then, run

python train_cifar_getPrior.py --preset c10.50sym

to get the prior knowledge. Related files will be saved in checkpoints/c10/50sym/saved/.

Next, run

python train_cifar.py --preset c10.50sym

for the subsequent training process.

Here, c10 means CIFAR-10 and 50sym means 50% symmetric noise.
Similarly, to run an experiment on CIFAR-100 with 20% symmetric noise, use:

python train_cifar_getPrior.py --preset c100.20sym
python train_cifar.py --preset c100.20sym

Additional Info

The (basic) semi-supervised learning part of our code is borrowed from the official DM-AugDesc implementation.

Since this paper has not yet been submitted, we are only releasing part of the experimental code. We will release all of the experimental code after the paper is accepted by a conference.

Comments
  • Requirements

    An installation file with the requirements would be handy, so that all the dependencies could be installed with pip.

    I believe there is a simple command to generate a requirements.txt from an environment.

    Thanks!

    opened by aldakata 3
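For anyone else looking for this: pip itself can generate and consume such a file. These are standard pip commands, independent of this repo:

```shell
# Snapshot the current environment's installed packages into requirements.txt
python -m pip freeze > requirements.txt

# Recreate the environment on another machine
python -m pip install -r requirements.txt
```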
  • Details about the mini WebVision 1.0 dataset

    Thanks for your brilliant work. We are trying to replicate your results, so I was wondering how many samples the mini WebVision 1.0 dataset used in this work contains. Is it 61K? I am confused because some works, such as AUM (https://arxiv.org/abs/2001.10528), state that they also use a subset of the first 50 classes of WebVision 1.0, yet their training set contains more than 100K samples. Looking forward to your kind help!

    opened by Christophe-Jia 2
  • GMM after epoch 200

    After reading the paper:

    Specifically, as shown in Figure 2, at each epoch, we get the clean probability w_i^t of each sample from the training loss by using a Gaussian Mixture Model (GMM)

    But in the code, this GMM is only applied after epoch 200. Is this an error, or am I missing something? https://github.com/bupt-ai-cz/PGDF/blob/f7b6de71c0959c8a268a74da126a4e1858aeb290/train_cifar.py#L471

    Thanks in advance!

    opened by aldakata 1
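As background for the question above: the quoted step (fitting a two-component GMM to per-sample training losses and taking the posterior of the low-mean component as the clean probability) can be sketched as follows. This is an illustrative scikit-learn sketch on simulated losses, not the repository's actual code:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Simulated per-sample training losses (hypothetical data): clean samples
# tend to have low loss, noisy-label samples tend to have high loss.
losses = np.concatenate([rng.normal(0.2, 0.05, 500),   # clean
                         rng.normal(1.5, 0.30, 500)])  # noisy
losses = losses.reshape(-1, 1)

# Fit a two-component GMM to the loss distribution
gmm = GaussianMixture(n_components=2, max_iter=100, tol=1e-3).fit(losses)

# The component with the smaller mean models the clean samples; its
# posterior probability serves as the per-sample clean probability w_i^t.
clean_component = gmm.means_.argmin()
clean_prob = gmm.predict_proba(losses)[:, clean_component]
```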
Owner
CVSM Group - email: czhu@bupt.edu.cn
Codes of our papers are released in this GitHub account.