Test-Time Classifier Adjustment Module for Model-Agnostic Domain Generalization

This codebase is the official implementation of Test-Time Classifier Adjustment Module for Model-Agnostic Domain Generalization (NeurIPS 2021, Spotlight). It is mainly based on DomainBed, with the following modifications:

  • support for various backbone networks, including Big Transfer (BiT), Vision Transformers (ViT, DeiT, HViT), and MLP-Mixer.
  • support for test-time adaptation methods (T3A and Tent); a minimal sketch of the T3A update rule follows this list.
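
For intuition, here is a minimal, self-contained sketch of the T3A update rule (the class name T3ASketch and all details below are illustrative simplifications, not the repo's actual implementation): the pretrained classifier's weights serve as initial per-class templates, each incoming test feature is added to the support set of its pseudo-label, and templates are re-estimated as the mean of each class's filter_k lowest-entropy supports.

import torch
import torch.nn.functional as F

class T3ASketch:
    # Simplified sketch of T3A: templates start from the pretrained classifier
    # weights; each test feature joins the support set of its pseudo-label, and
    # templates are re-estimated from the filter_k lowest-entropy supports.
    def __init__(self, classifier_weight, filter_k=20):
        w = F.normalize(classifier_weight, dim=1)  # (num_classes, feat_dim)
        self.supports = w.clone()                  # stored normalized features
        self.labels = torch.arange(w.size(0))      # pseudo-label per support
        self.ent = torch.zeros(w.size(0))          # entropy per support (zero keeps the initial templates)
        self.num_classes = w.size(0)
        self.filter_k = filter_k

    @torch.no_grad()
    def predict(self, z):
        # z: (batch, feat_dim) features from the frozen backbone.
        z = F.normalize(z, dim=1)
        logits = z @ self.templates().t()
        p = logits.softmax(dim=1)
        ent = -(p * p.clamp_min(1e-12).log()).sum(dim=1)
        # Grow the support sets with the new pseudo-labeled features.
        self.supports = torch.cat([self.supports, z])
        self.labels = torch.cat([self.labels, p.argmax(dim=1)])
        self.ent = torch.cat([self.ent, ent])
        return logits

    def templates(self):
        # One template per class: mean of its filter_k lowest-entropy supports.
        temps = []
        for c in range(self.num_classes):
            idx = (self.labels == c).nonzero(as_tuple=True)[0]
            keep = idx[self.ent[idx].argsort()[: self.filter_k]]
            temps.append(self.supports[keep].mean(dim=0))
        return F.normalize(torch.stack(temps), dim=1)

Because the update is just a filtered running average over features, it needs no gradients or backward passes, which is what makes the adjustment cheap enough to run during inference.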

Installation

CUDA/Python

git clone git@github.com:matsuolab/Domainbed_contrib.git
cd Domainbed_contrib/docker
docker build -t {image_name} .
docker run -it -h `hostname` --runtime=nvidia -v /path/to/Domainbed_contrib:/path/to/anywhere --shm-size=40gb --name {container_name} {image_name}

Python libraries

We use pipenv for package management.

cd /path/to/Domainbed_contrib
pip install pipenv
pipenv install
pipenv shell
pip install torch-scatter -f https://pytorch-geometric.com/whl/torch-1.8.0+cu102.html
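
After setup, a two-line check (plain PyTorch, independent of the repo) confirms that the expected torch version is installed and the GPU is visible inside the container:

import torch
print(torch.__version__)          # the torch-scatter wheel above targets 1.8.0
print(torch.cuda.is_available())  # should print True inside the container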

Quick start

(1) Download the datasets

python -m domainbed.scripts.download --data_dir=/my/datasets/path --dataset pacs

Note: change --dataset pacs to download other datasets (e.g., vlcs, office_home, terra_incognita).
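
To fetch all four datasets used in the paper in one go, a small wrapper such as the following (an illustrative convenience script, not part of the repo) avoids retyping the command:

import subprocess

for name in ["pacs", "vlcs", "office_home", "terra_incognita"]:
    subprocess.run(
        ["python", "-m", "domainbed.scripts.download",
         "--data_dir=/my/datasets/path", "--dataset", name],
        check=True,  # stop early if any download fails
    )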

(2) Train a model on source domains

python -m domainbed.scripts.train \
       --data_dir /my/datasets/path \
       --output_dir /my/pretrain/path \
       --algorithm ERM \
       --dataset PACS \
       --hparams "{\"backbone\": \"resnet50\"}"

This script will produce a new directory, /my/pretrain/path, which includes the full training log.

Note: change --dataset PACS to train on other datasets (e.g., VLCS, OfficeHome, TerraIncognita).

Note: change --hparams "{\"backbone\": \"resnet50\"}" to use other backbones (e.g., resnet18, ViT-B16, HViT).
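
Since --hparams takes a JSON string, the shell escaping above can be error-prone; one way around it (an illustrative wrapper, not part of the repo) is to build the command in Python and let json.dumps handle the quoting:

import json
import subprocess

hparams = {"backbone": "ViT-B16"}  # any backbone from the list below
subprocess.run(
    ["python", "-m", "domainbed.scripts.train",
     "--data_dir", "/my/datasets/path",
     "--output_dir", "/my/pretrain/path",
     "--algorithm", "ERM",
     "--dataset", "PACS",
     "--hparams", json.dumps(hparams)],  # -> {"backbone": "ViT-B16"}
    check=True,
)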

(3) Evaluate the model with test-time adaptation (Table 1, Table 2, Figure 2)

python -m domainbed.scripts.unsupervised_adaptation \
       --input_dir=/my/pretrain/path \
       --adapt_algorithm=T3A

This script will produce a new file in /my/pretrain/path named results_{adapt_algorithm}.jsonl.

Note: change --adapt_algorithm=T3A to use other test-time adaptation methods (T3A, Tent, or TentClf).
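
Each line of the output file is a standalone JSON record, so a few lines of Python suffice to inspect it (the exact field names depend on what the script logs; printing the keys of the first record is a safe way to find out):

import json

with open("/my/pretrain/path/results_T3A.jsonl") as f:
    records = [json.loads(line) for line in f if line.strip()]

print(len(records), "records")
print(sorted(records[0].keys()))  # discover which metrics were logged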

(4) Evaluate the model with classifier fine-tuning (Figure 1)

python -m domainbed.scripts.supervised_adaptation \
       --input_dir=/my/pretrain/path \
       --ft_mode=clf

This script will produce a new file in /my/pretrain/path named results_{ft_mode}.jsonl.

Available backbones

  • resnet18
  • resnet50
  • BiT-M-R50x3
  • BiT-M-R101x3
  • BiT-M-R152x2
  • ViT-B16
  • ViT-L16
  • DeiT
  • Hybrid ViT (HViT)
  • MLP-Mixer (Mixer-L16)

Reproducing results

Table 1 and Figure 2 (Tuned ERM and CORAL)

You can use scripts/hparam_search.sh. Specifically, for each dataset and base algorithm, just type the following command.

sh scripts/hparam_search.sh resnet50 PACS ERM

Note that this automatically launches 240 jobs and takes a long time to finish.

Table 2 and Figure 1 (ERM with various backbones)

You can use scripts/launch.sh. Specifically, for each backbone, just type the following three commands.

sh scripts/launch.sh pretrain resnet50 10 3 local
sh scripts/launch.sh sup resnet50 10 3 local
sh scripts/launch.sh unsup resnet50 10 3 local
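
To sweep several backbones, the three stages can be wrapped in a loop (an illustrative driver script, not part of the repo; backbone names are from the list above):

import subprocess

for backbone in ["resnet50", "ViT-B16", "HViT"]:
    for mode in ["pretrain", "sup", "unsup"]:  # pretrain first, then the two evaluation modes
        subprocess.run(
            ["sh", "scripts/launch.sh", mode, backbone, "10", "3", "local"],
            check=True,
        )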

Other results

For Table 1, we used the scores reported by In Search of Lost Domain Generalization; the full reported scores in LaTeX format are available here. Note: we only used the scores for VLCS, PACS, OfficeHome, and TerraIncognita, and we used the results obtained with IIDAccuracySelectionMethod.

License

This source code is released under the MIT license, included here.

Comments
  • Question about a comparison in Table 1

    Hi there, thanks for the nice paper and source code. I have a question about Table 1. I found that the strong baseline Tent is only compared with Tent-BN and Tent-C, which are not the versions proposed in the Tent paper.

    I noticed that you do consider Tent-Full in Table 3. Is there any reason you didn't consider it in Table 1?

    opened by mandal4 0
  • DG results

    Hi, the paper is interesting! Just a clarification: in your paper, it is said that the default option in "In Search of Lost Domain Generalization" does not use BN layers. However, it seems that the default option is that BN layers are used but kept in "eval" mode, i.e., the weights and biases of the BN layers can be optimized but the running mean and running variance are fixed. Therefore, the results in Table 1 copied from "In Search of Lost Domain Generalization" were obtained with BN layers in "eval" mode, not without BN layers. Please correct me if my understanding is wrong :)

    opened by DevinCheung 0
  • Collecting results for T3A after adaptation

    Thanks for your work. I am wondering what command I should run to get the results of T3A in a text format similar to how it's done in the DomainBed codebase. In the DomainBed code, I can run the following command to obtain the results.

    python -m domainbed.scripts.collect_results \
           --input_dir=/my/sweep/output/path

    Here is the result after running T3A (screenshot omitted); the result depends heavily on the parameter filter_k. I am wondering if there is a script that automatically picks out the numbers to report. Thanks in advance.

    opened by korawat-tanwisuth 1