A Broad Study on the Transferability of Visual Representations with Contrastive Learning


This repository contains code for the paper: A Broad Study on the Transferability of Visual Representations with Contrastive Learning (arXiv:2103.13517)

Prerequisites

  • PyTorch 1.7
  • pytorch-lightning 1.1.5

Install the required dependencies with:

pip install -r environments/requirements.txt
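
If environments/requirements.txt is unavailable, a minimal sketch of the environment can be pinned directly; the exact patch versions below are assumptions consistent with the stated prerequisites, with torchvision pinned to match:

# assumed patch releases of PyTorch 1.7 and pytorch-lightning 1.1.5
pip install torch==1.7.1 torchvision==0.8.2 pytorch-lightning==1.1.5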

How to Run

Download Datasets

The data should be located in the ~/datasets/cdfsl folder. To download all the datasets, run:

bash data_loader/download.sh 
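
If the script expects the target folder to already exist, you can create it first (this is an assumption; check data_loader/download.sh for the actual output location):

mkdir -p ~/datasets/cdfsl
bash data_loader/download.sh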

Training

python main.py --system ${system}  --dataset ${train_dataset} --gpus -1 --model resnet50 

where ${system} is one of base_finetune (CE), moco (SelfSupCon), moco_mit (SupCon), base_plus_moco (CE+SelfSupCon), or supervised_mean2 (SupCon+SelfSupCon).

For more details on the CLI arguments, see configs.py.

You can also launch training via bash scripts/run_linear_bn.sh -m train.
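
As a concrete sketch, the following trains the CE+SelfSupCon system; the dataset key imagenet is an assumption, so check configs.py for the exact dataset names:

# hypothetical dataset name; --gpus -1 uses all available GPUs
python main.py --system base_plus_moco --dataset imagenet --gpus -1 --model resnet50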

Evaluation

Linear evaluation

python main.py --system linear_eval \
  --train_aug true --val_aug false \
  --dataset ${val_data}_train --val_dataset ${val_data}_test \
  --ckpt ${ckpt} --load_base --batch_size ${bs} \
  --lr ${lr} --optim_wd ${wd}  --linear_bn --linear_bn_affine false \
  --scheduler step  --step_lr_milestones ${_milestones}

You can also use the evaluation script: bash scripts/run_linear_bn.sh -m tune tunes the hyper-parameters, and bash scripts/run_linear_bn.sh -m test then runs linear evaluation with the optimal hyper-parameters.
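
As a filled-in sketch, all values below (dataset key, checkpoint path, and hyper-parameters) are assumptions for illustration rather than tuned settings:

# hypothetical values; tune lr, weight decay, and milestones per dataset
python main.py --system linear_eval \
  --train_aug true --val_aug false \
  --dataset cifar10_train --val_dataset cifar10_test \
  --ckpt checkpoints/base_plus_moco.ckpt --load_base --batch_size 256 \
  --lr 0.1 --optim_wd 1e-4 --linear_bn --linear_bn_affine false \
  --scheduler step --step_lr_milestones 30,60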

Few-shot

python main.py --system few_shot \
    --val_dataset ${val_data} \
    --load_base --test --model ${model} \
    --ckpt ${ckpt} --num_workers 4

You can also run few-shot evaluation via bash scripts/run_fewshot.sh.
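
For example, with a hypothetical checkpoint path and dataset key (both are assumptions; eurosat is one of the datasets fetched by the download script):

# hypothetical paths; evaluates few-shot transfer of a pretrained backbone
python main.py --system few_shot \
    --val_dataset eurosat \
    --load_base --test --model resnet50 \
    --ckpt checkpoints/moco.ckpt --num_workers 4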

Full-network finetuning

python main.py --system linear_transfer \
    --dataset ${val_data}_train --val_dataset ${val_data}_test \
    --ckpt ${ckpt} --load_base \
    --batch_size ${bs} --lr ${lr} --optim_wd ${wd} \
    --scheduler step  --step_lr_milestones ${_milestones} \
    --linear_bn --linear_bn_affine false \
    --max_epochs ${max_epochs}

You can also use the evaluation script: bash scripts/run_transfer_bn.sh -m tune tunes the hyper-parameters, and bash scripts/run_transfer_bn.sh -m test then runs full-network finetuning with the optimal hyper-parameters.
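
A filled-in sketch; the dataset key, checkpoint path, and hyper-parameters below are illustrative assumptions (full-network finetuning typically wants a smaller learning rate than linear evaluation):

# hypothetical values; the whole backbone is updated, so the lr is modest
python main.py --system linear_transfer \
    --dataset cifar10_train --val_dataset cifar10_test \
    --ckpt checkpoints/base_plus_moco.ckpt --load_base \
    --batch_size 64 --lr 0.01 --optim_wd 1e-4 \
    --scheduler step --step_lr_milestones 30,60 \
    --linear_bn --linear_bn_affine false \
    --max_epochs 90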

Pretrained models

  • ImageNet pretrained models can be found here

  • mini-ImageNet pretrained models can be found here

You can also convert our pretrained checkpoints into torchvision ResNet-style checkpoints with python utils/convert_to_torchvision_resnet.py -i [input ckpt] -o [output path].
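
For example, with hypothetical paths:

# convert a downloaded checkpoint into a torchvision-compatible state dict
python utils/convert_to_torchvision_resnet.py -i pretrained/imagenet_moco.ckpt -o converted/resnet50_moco.pth

The output should then load into torchvision.models.resnet50() via load_state_dict (possibly with strict=False if the classifier head differs; that detail is an assumption).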

Citation

If you find this repo useful for your research, please consider citing the paper:

@misc{islam2021broad,
      title={A Broad Study on the Transferability of Visual Representations with Contrastive Learning}, 
      author={Ashraful Islam and Chun-Fu Chen and Rameswar Panda and Leonid Karlinsky and Richard Radke and Rogerio Feris},
      year={2021},
      eprint={2103.13517},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Acknowledgement

Comments
  • eurosat.zip cannot be found on Google Drive

    eurosat.zip cannot be found on Google Drive at the URL: https://drive.google.com/uc?id=1FYZvuBePf2tuEsEaBCsACtIHi6eFpSwe

    Can you please check the URL? Thank you.

  • How to get CKA scores between different stages in Figure 4?

    Thanks for your amazing study! I have some questions about the CKA scores shown in Figure 4. Take ResNet-50 as an example, which has five stages.

    1. Does stage 5 include the average pooling layer to output the feature of size 1x2048?
    2. Given an input sample, for the feature after each in-between stage (1-4), do you flatten the original feature map (1 x c x h x w) to a vector (1 x chw), OR do you apply an extra average pooling step to obtain a vector (1 x c)? I've tried flattening the feature map after each stage but obtained a very high-dimensional vector (about 1M entries).

    (c: feature dimension; h: height; w: width) Looking forward to your reply, thanks.
