PyTorch implementation of Self-supervised Contrastive Regularization for DG (SelfReg)

Overview

SelfReg

The official PyTorch implementation of Self-supervised Contrastive Regularization for Domain Generalization (SelfReg, https://arxiv.org/abs/2104.09841).

Description

An overview of our proposed SelfReg. We propose to use self-supervised (in-batch) contrastive losses to regularize the model to learn domain-invariant representations. These losses regularize the model to map the representations of same-class samples close together in the embedding space. We compute two dissimilarities in the embedding space: (i) an individualized and (ii) a heterogeneous self-supervised dissimilarity loss. We further use stochastic weight averaging (SWA) and inter-domain curriculum learning (IDCL) to optimize gradients in conflicting directions.
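
A minimal sketch of the two dissimilarity losses, assuming MSE as the dissimilarity measure and a random in-batch same-class pairing; the actual implementation (CDPL projection, feature mixup, loss weighting) differs in its details:

```python
import torch
import torch.nn.functional as F

def selfreg_dissimilarity_losses(feat, proj, labels):
    """Sketch of the (i) individualized and (ii) heterogeneous losses.

    `feat` are backbone features, `proj` their projected embeddings, and
    `labels` the class labels of the batch; all names are illustrative.
    """
    idx = torch.arange(labels.size(0))
    pair = idx.clone()
    for c in labels.unique():
        mask = labels == c
        # Draw a same-class partner by shuffling indices within the class.
        pair[mask] = idx[mask][torch.randperm(int(mask.sum()))]
    # (i) individualized dissimilarity on backbone features
    l_ind = F.mse_loss(feat, feat[pair].detach())
    # (ii) heterogeneous dissimilarity on projected embeddings
    l_hdl = F.mse_loss(proj, proj[pair].detach())
    return l_ind, l_hdl
```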

t-SNE visualizations for (a) the baseline (no DG techniques), (b) RSC, and (c) ours. For better understanding, we also provide sample house images from all target domains. Note that each point is color-coded according to its class. (Data: PACS)

Computational Efficiency of IDCL (Inter-domain Curriculum Learning)

| Backbone  | Training Strategy                    | Training Time (s) |
|-----------|--------------------------------------|-------------------|
| ResNet-18 | Baseline (classic training strategy) | 1556.8            |
| ResNet-18 | IDCL strategy                        | 1283.5            |

We used one V100 GPU for model training. The training time in the table above is the time it took to train all domains independently. The IDCL training time of 1283.5 seconds is 82.4% of the baseline training time on PACS.
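
As a rough sketch of the idea behind IDCL: source domains are introduced stage by stage instead of being mixed from the start. The easy-to-hard ordering and per-stage epoch count below are illustrative assumptions, not the repo's exact schedule:

```python
# Assumed easy-to-hard ordering of PACS source domains, for illustration.
domain_order = ["photo", "art_painting", "cartoon"]

def idcl_stages(domain_order, epochs_per_stage=10):
    """Yield the list of active source domains for each training epoch."""
    for stage in range(len(domain_order)):
        active = domain_order[: stage + 1]
        for _ in range(epochs_per_stage):
            yield active  # draw this epoch's batches only from `active`
```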

Dependencies

  • python >= 3.6
  • pytorch >= 1.7.0
  • torchvision >= 0.8.1
  • jupyter notebook
  • gdown

How to Use

  1. cd codes/ and run sh download.sh to download the PACS dataset.
  2. Open train.ipynb and select Run All.
  3. Make sure the training runs correctly in the last cell.
  4. When training is complete, check the results stored in codes/resnet18/{save_name}/.

Test the trained SelfReg ResNet-18 model

To test a trained ResNet-18, you can download the pretrained weights (SelfReg model) with this link.

These weights are wrapped in torch.optim.swa_utils.AveragedModel() (the SWA implementation of PyTorch).
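
A minimal loading sketch, assuming the standard torchvision ResNet-18 with a 7-class PACS head; the filename is a placeholder and the repo's actual model definition may differ:

```python
import torch
import torchvision
from torch.optim.swa_utils import AveragedModel

# Rebuild the backbone; the 7-class head for PACS is an assumption.
model = torchvision.models.resnet18()
model.fc = torch.nn.Linear(model.fc.in_features, 7)

# Wrap in AveragedModel so the state_dict keys match the SWA checkpoint.
swa_model = AveragedModel(model)

# "selfreg_resnet18_pacs.pth" is a placeholder for the downloaded file.
state = torch.load("selfreg_resnet18_pacs.pth", map_location="cpu")
swa_model.load_state_dict(state)
swa_model.eval()
```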

| Backbone  | Target Domain | Acc. (%) |
|-----------|---------------|----------|
| ResNet-18 | Photo         | 96.83    |
| ResNet-18 | Art Painting  | 83.15    |
| ResNet-18 | Cartoon       | 79.61    |
| ResNet-18 | Sketch        | 78.90    |

Benchmark of SelfReg in DomainBed

[Figure: SelfReg results on the DomainBed benchmark.]


Comments
  • A question about lr_decay_epoch

    Hi, thanks for your great work and code release. I have a question about training.

    I noticed that in train.ipynb, lr_decay_epoch is set to 100 but the max epoch is 30. The paper says, 'note that such a decaying learning rate is not used when it is combined with the stochastic weight averaging technique'.

    Does that mean: if we use SWA (stochastic weight averaging), we use a constant lr (0.004) during training, and if we do not use SWA, we need to decay the lr to 0.004 * 0.1 = 0.0004 after epoch 24? Am I right? Another question: do the results of these two settings differ a lot? Thanks for your help.
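
    In code form, my understanding of the schedule (a sketch based only on the settings above):

    ```python
    def lr_at_epoch(epoch, base_lr=0.004, use_swa=True, lr_decay_epoch=24):
        # With SWA the learning rate stays constant; without SWA it is
        # decayed by a factor of 0.1 once lr_decay_epoch is reached.
        if use_swa:
            return base_lr
        return base_lr * (0.1 if epoch >= lr_decay_epoch else 1.0)
    ```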

    question 
    opened by SakurajimaMaiii 4
  • About a code problem

    Hello, while reading your code I ran into a problem. In SelfReg-main/codes/utils.py, lines 224-226:

    ```python
    if is_self_reg:
        output, feat = model.extract_features(x)
        proj = model.projection(feat)
    elif is_positive_pair:
        output, feat = model.extract_features(x1)
        output2, feat2 = model.extract_features(x2)
    ```

    I want to know what the elif branch does: what is is_positive_pair, and what are x1 and x2? Looking forward to your response.

    question 
    opened by ToxicDoubleH 4
  • A question about the domainbed_code

    Hi, I have a question about https://github.com/dnap512/SelfReg/blob/05d9777139309f2495bd1e9394d73f4e56402744/domainbed_code/algorithms.py#L75: the optimizer optimizes only the parameters of self.featurizer and self.classifier, so self.cdpl seems not to be trained.
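
    For example, something like this would also train the projection network (a sketch; I am assuming the usual DomainBed hparams keys):

    ```python
    self.optimizer = torch.optim.Adam(
        list(self.featurizer.parameters())
        + list(self.classifier.parameters())
        + list(self.cdpl.parameters()),  # include cdpl so it is updated
        lr=self.hparams["lr"],
        weight_decay=self.hparams["weight_decay"],
    )
    ```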

    question 
    opened by BX-xb 1
  • Regarding the backbone network

    Thanks for the wonderful work.

    One thing I want to ask about is the backbone network of your work. The paper says that you used ResNet-18 as the backbone network. However, in DomainBed, all experiments are conducted with ResNet-50. In your paper you used the results from the DomainBed paper for the other baselines (which use ResNet-50), but your method is evaluated with ResNet-18. Can you clarify the experiment setting regarding this issue?

    Thanks in advance.

    question 
    opened by leebebeto 1