DirectCLR

PyTorch implementation of DirectCLR from the paper "Understanding Dimensional Collapse in Contrastive Self-supervised Learning".

Overview

DirectCLR is a simple contrastive learning model for visual representation learning. Unlike SimCLR, it does not require a trainable projector, yet it prevents dimensional collapse and outperforms SimCLR with a linear projector.
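As a sketch of the idea (illustrative code only, not the repo's exact API): DirectCLR takes the backbone representation of each augmented view, keeps only its first d0 dimensions (the --dim flag below), normalizes that subvector, and applies a standard InfoNCE loss to it. A minimal PyTorch version, simplified to use only cross-view pairs as negatives, might look like:

import torch
import torch.nn.functional as F

def directclr_loss(r1, r2, d0=360, temperature=0.1):
    # r1, r2: backbone representations of two augmented views, shape (N, D)
    z1 = F.normalize(r1[:, :d0], dim=1)  # keep only the first d0 dimensions
    z2 = F.normalize(r2[:, :d0], dim=1)
    logits = z1 @ z2.t() / temperature   # (N, N) cross-view similarity matrix
    labels = torch.arange(z1.size(0), device=z1.device)
    # symmetric InfoNCE: diagonal entries are the positive pairs
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))

The remaining dimensions are never passed through the loss directly, yet the paper shows the full representation still avoids dimensional collapse. If you find this repository useful, please cite: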

@article{Jing2021UnderstandingDC,
  title={Understanding Dimensional Collapse in Contrastive Self-supervised Learning},
  author={Li Jing and Pascal Vincent and Yann LeCun and Yuandong Tian},
  journal={arXiv preprint arXiv:2110.09348},
  year={2021}
}

DirectCLR Training

Install PyTorch and download ImageNet by following the instructions in the requirements section of the PyTorch ImageNet training example. The code was developed with PyTorch 1.7.1 and torchvision 0.8.2, but it should also work with other recent versions.
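For example, a matching environment can be set up with the versions noted above:

pip install torch==1.7.1 torchvision==0.8.2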

Our best model is obtained by running the following command:

python main.py --data /path/to/imagenet/ --mode directclr --dim 360

The --mode flag can be one of the following (a usage example follows the list):

simclr: standard SimCLR with a two-layer nonlinear projector;

single: SimCLR with a single-layer linear projector;

baseline: SimCLR without a projector;

directclr: DirectCLR, which applies the contrastive loss directly to the first --dim dimensions of the representation, with no trainable projector.
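For example, to train the standard SimCLR baseline instead of DirectCLR:

python main.py --data /path/to/imagenet/ --mode simclr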

Training time is approximately 7 hours on 32 V100 GPUs.

Evaluation: Linear Classification

Train a linear probe on the learned representations: the ResNet weights are frozen, and the probe is trained on the entire ImageNet training set.

python linear_probe.py /path/to/imagenet/ /path/to/checkpoint/resnet50.pth

Linear probing takes approximately 20 hours on 8 V100 GPUs.
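For intuition, linear probing amounts to training a single linear layer on top of the frozen encoder. A minimal PyTorch sketch (illustrative only; the exact checkpoint keys depend on how linear_probe.py saves the backbone):

import torch
import torchvision

backbone = torchvision.models.resnet50()
state = torch.load("/path/to/checkpoint/resnet50.pth", map_location="cpu")
backbone.load_state_dict(state, strict=False)  # exact keys depend on the checkpoint format
backbone.fc = torch.nn.Identity()              # expose the 2048-d pooled representation
for p in backbone.parameters():
    p.requires_grad = False                    # freeze the pretrained encoder
backbone.eval()

probe = torch.nn.Linear(2048, 1000)            # linear classifier over frozen features
optimizer = torch.optim.SGD(probe.parameters(), lr=0.1, momentum=0.9)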

License

This project is released under the CC-BY-NC 4.0 license. See LICENSE for details.

Comments
  • Question about the "real" benefit of DirectCLR

    Hi, I've recently been studying self-supervised learning models.

    While reading your interesting paper, I had a question about which model to choose in practice. According to Table 1 in your paper, SimCLR with a 2-layer nonlinear projector seems much better than DirectCLR, and free of dimensional collapse.

    Is there any reason to use DirectCLR instead of SimCLR? Perhaps training time?

    Thank you!

    opened by ZeroAct 2
  • Questions regarding the paper

    Hi! I have some questions regarding the singular value spectra in the paper:

    • Are the main settings for Figure 2 the same as Figure 7(b)? Both look like SimCLR with a 2-layer MLP projector, but one shows dimensional collapse and the other does not. Do you compute both from the embedding vectors (after the projector)?
    • And regarding your conclusions:
      • Does SimCLR (with a projector) exhibit dimensional collapse in the representation space or in the embedding space? Or in neither?
      • With DirectCLR, is there still dimensional collapse in the embedding space, but not in the representation space?
    opened by wangyi111 2
  • The cls loss's meaning?

    Hello,

    After reading the paper, I gained a new perspective on contrastive learning. But after reading the code, I found the cls loss at https://github.com/facebookresearch/directclr/blob/a86f64c9b0815ff19cfea9929cb7dc6bc293b5d4/directclr/main.py#L199. If labels are used for training, can it still be called self-supervised learning?

    Thank you!

    opened by XiaoQiSAMA 0
  • Are the representations or the checkpoints available?

    Hello,

    I read your article with deep interest, and I am still stunned by the way you fixed dimensional collapse in DirectCLR.

    I was wondering if you could release the checkpoints of your experiments, and/or the representations and embeddings if you have them at your disposal. I would also like to know your exact source for the ImageNet dataset, as several variants exist.

    Thank you very much for your great article.

    opened by NathanGodey 3