
Robust Contrastive Learning against Noisy Views

This repository provides a PyTorch implementation of the Robust InfoNCE (RINCE) loss proposed in the paper Robust Contrastive Learning against Noisy Views.

Requirements:

  • PyTorch >= 1.5.0

Pseudo Code

The implementation requires only a small modification to existing InfoNCE code.

# bsz : batch size (number of positive pairs)
# pos : exponentiated similarity of the positive pair, shape=[bsz]
# neg : sum of exponentiated similarities over the negative pairs, shape=[bsz]
# q, lam : hyperparameters of RINCE

info_nce_loss = -log(pos / (pos + neg))
rince_loss = -pos**q / q + (lam * (pos + neg))**q / q
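The pseudo code above can be turned into a small runnable sketch. The helper below is illustrative rather than the repo's actual implementation: it assumes cosine-similarity logits scaled by a temperature, and for simplicity treats only the non-matching examples from the other view as negatives (the function name and argument names are made up for this example).

```python
import torch
import torch.nn.functional as F

def rince_loss(z1, z2, q=0.5, lam=0.01, temperature=0.5):
    """RINCE loss for a batch of positive pairs (illustrative sketch).

    z1, z2 : [bsz, dim] embeddings of the two views (L2-normalized below).
    q, lam : RINCE hyperparameters; as q -> 0 with lam = 1, the loss
             approaches the standard InfoNCE loss.
    """
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / temperature        # [bsz, bsz] similarity logits
    pos = torch.exp(sim.diag())            # exponentiated positive-pair similarity
    neg = torch.exp(sim).sum(dim=1) - pos  # sum over the remaining (negative) pairs
    # the two-line modification from the pseudo code:
    loss = -pos**q / q + (lam * (pos + neg))**q / q
    return loss.mean()
```

Because the -pos**q / q term can dominate, the resulting loss value may be negative; unlike InfoNCE, RINCE is not bounded below by zero.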

ImageNet Experiments

The code can be found in ImageNet/moco-v3 and ImageNet/simclr.

Citation

If you find this repo useful for your research, please consider citing the paper:

@article{chuang2022robust,
  title={Robust Contrastive Learning against Noisy Views},
  author={Chuang, Ching-Yao and Hjelm, R Devon and Wang, Xin and Vineet, Vibhav and Joshi, Neel and Torralba, Antonio and Jegelka, Stefanie and Song, Yale},
  journal={arXiv preprint arXiv:2201.04309},
  year={2022}
}

For any questions, please contact Ching-Yao Chuang ([email protected]).
