Code for the ICCV 2021 paper 'Generalized Source-free Domain Adaptation'

Overview

G-SFDA

Code (based on PyTorch 1.3) for our ICCV 2021 paper 'Generalized Source-free Domain Adaptation'. [project] [paper].

Dataset preparation

Download the VisDA and Office-Home datasets, then set the paths of the data lists in the code.
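Data lists of this kind typically hold one "image_path label" pair per line. A minimal sketch of generating such a list (the function name and exact format are assumptions here, not the repo's code):

```python
import os

def make_image_list(root, out_file, class_names):
    """Write one 'image_path label' line per image, the list format
    commonly consumed by domain-adaptation training scripts."""
    lines = []
    for label, cls in enumerate(sorted(class_names)):
        cls_dir = os.path.join(root, cls)
        for fname in sorted(os.listdir(cls_dir)):
            if fname.lower().endswith((".jpg", ".jpeg", ".png")):
                lines.append(f"{os.path.join(cls_dir, fname)} {label}")
    with open(out_file, "w") as f:
        f.write("\n".join(lines) + "\n")
    return lines
```

Point the paths inside such a list at your local copy of the dataset before launching training.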

Training

First, train the model on the source data with both source and target attention; then adapt the model to the target domain without access to the source data. An embedding layer automatically produces the domain attention.
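As a rough sketch of how an embedding layer can produce per-domain attention masks over feature channels (the sigmoid gating and dimensions below are illustrative assumptions of this sketch, not the repo's exact implementation):

```python
import torch
import torch.nn as nn

class DomainAttention(nn.Module):
    """One learned attention vector per domain, looked up by domain ID
    and used to gate feature channels."""
    def __init__(self, num_domains, feat_dim):
        super().__init__()
        self.embed = nn.Embedding(num_domains, feat_dim)

    def forward(self, features, domain_id):
        idx = torch.full((features.size(0),), domain_id, dtype=torch.long)
        mask = torch.sigmoid(self.embed(idx))  # channel-wise gate in (0, 1)
        return features * mask

feats = torch.randn(4, 256)
att = DomainAttention(num_domains=2, feat_dim=256)
src_out = att(feats, domain_id=0)  # source attention path (A_s)
tar_out = att(feats, domain_id=1)  # target attention path (A_t)
```

Gating the same features with different per-domain masks is what gives the network separate source and target pathways through a shared backbone.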

sh visda.sh (for VisDA)
sh office-home.sh (for Office-Home)

We provide the training log files and the source and target models on VisDA in this link. You can directly start the source-free adaptation from our source model to reproduce the results.

Domain Classifier

The file 'domain_classifier.ipynb' contains the code for training the domain classifier and for evaluating the model with the estimated domain ID (on VisDA).
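As a rough illustration of what such a classifier might look like (the layer sizes, feature dimension, and training loop below are assumptions of this sketch, not the notebook's actual code), a binary domain classifier on extracted features could be:

```python
import torch
import torch.nn as nn

# Small MLP that predicts source (0) vs. target (1) from frozen features.
clf = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.SGD(clf.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

feats = torch.randn(32, 256)            # stand-in for extracted features
domains = torch.randint(0, 2, (32,))    # stand-in domain labels

for _ in range(5):
    opt.zero_grad()
    loss = loss_fn(clf(feats), domains)
    loss.backward()
    opt.step()

pred_domain = clf(feats).argmax(dim=1)  # estimated domain IDs
```

At evaluation time, the estimated domain ID selects which domain attention to apply for each sample.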

Comments
  • Question about VisDA2017

    Good work!

We know VisDA2017 has three parts: a train set, a validation set, and a test set.

In the code (train_src_visda.py and train_tar_visda.py):

    For training the source model, G-SFDA uses VisDA's train set (90% train, 10% test).
    For training the target model, G-SFDA uses VisDA's validation set (shuffle=True, batch size batchSize).
    For testing the target model, G-SFDA also uses VisDA's validation set (shuffle=False, batch size batchSize*3).
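The split usage described here corresponds roughly to dataloaders like the following (the dataset objects and batch size are stand-ins, not the repo's code):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

batch_size = 64  # stand-in value
# dummy tensors in place of the actual VisDA image-list datasets
train_set = TensorDataset(torch.randn(100, 3), torch.randint(0, 12, (100,)))
val_set = TensorDataset(torch.randn(100, 3), torch.randint(0, 12, (100,)))

# source training: VisDA train split
src_loader = DataLoader(train_set, batch_size=batch_size, shuffle=True)
# target adaptation: VisDA validation split, shuffled
tar_loader = DataLoader(val_set, batch_size=batch_size, shuffle=True)
# evaluation: the same validation split, unshuffled, larger batches
test_loader = DataLoader(val_set, batch_size=batch_size * 3, shuffle=False)
```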

    one question: why not use VisDA's test dataset?

    opened by miss-rain 13
  • About the differences in results between tables 1-4

Thanks for your great contribution to the SFDA task. I am really impressed by the method in your G-SFDA paper. However, we found some differences in the results between Tables 1-4. Specifically, for the VisDA-C dataset, the average result in Table 1 is 85.4, but under the same conditions the average in Table 3 is 85.0. The same discrepancy appears for the Office-Home dataset: the average in Table 2 is 71.3, but in Table 4 it is 70.8. Could you explain the reasons for these differences?

    opened by sanqingqu 8
  • About training office home dataset

Thanks for your ingenious method; I am really impressed by it 👍 However, I'd like to ask a few questions about training. I first downloaded the VisDA dataset, but its structure seemed a bit complicated, so I used the Office-Home dataset first instead. Is it right to modify the Art, Clipart, ....txt files to fit my environment? Also, after the ResNet baseline model is downloaded at the start of the training run, there is no output for about an hour. May I ask how long it takes to train on the Office-Home dataset? Thanks for reading this issue; I really appreciate it.

    opened by mincheoree 6
  • Concern about the Source-Free assumption

Hi, I have some doubts about the source-free assumption made in the paper. According to Algorithm 1, the pretrained source model also employs A_t. In my understanding this is not a source-pretraining stage; in fact, adaptation is already happening, since the model is exposed to both source and target. Besides, this means source and target are available at the same time, i.e. the source-free assumption drops.

    I would be happy if you could clarify this. Best, P.

    opened by pmorerio 3
  • Are test data the same as data used for adaptation?

    Hi. Thanks for sharing your research.

I was wondering if the test data from the target domain are the same as the data used for adaptation on the target domain, because your source code seems to give the same path to both datasets (https://github.com/Albert0147/G-SFDA/blob/main/train_tar_visda.py#L392-L395). In my opinion, test data should not be accessed before evaluation, even if they are unlabeled. Am I missing something, or is this intended behavior?

    Best regards.

    opened by dltkddn0525 2
  • About At

    Good work! I have two questions:

    1. The generation of the sparse domain attention (SDA) vector in the code differs from the paper. Why? In the paper there is an embedding layer, but in the code the SDA vector is initialized directly and regularized by its norm.

    2. Why can the target domain attention be learned using only the source data? It seems that A_t is learned by feeding source data through a different path with a differently initialized mask. Why does this work?

    opened by xyy-ict 2
  • How to train At?

    Thanks for your interesting work. I'm impressed with your method, but a little confused. In the paper, "As and At are both trained on the source domain and are fixed during the adaptation to the target domain." How do you train At without target domain data?

    opened by ddghost 1
Owner
Shiqi Yang
PhD candidate @ LAMP group, Computer Vision Center, UAB.