Code for our NeurIPS 2021 paper 'Exploiting the Intrinsic Neighborhood Structure for Source-free Domain Adaptation'

Overview

Exploiting the Intrinsic Neighborhood Structure for Source-free Domain Adaptation (NeurIPS 2021)

Code for our NeurIPS 2021 paper 'Exploiting the Intrinsic Neighborhood Structure for Source-free Domain Adaptation'. [project] [paper] (The code is based on our G-SFDA (ICCV 2021).)

Note: In the code we do not explicitly compute the self-regularization loss (you will find the corresponding comment in the code). Instead, we simply do not remove the self feature during nearest-neighbor retrieval, so the occurrence frequency of the self feature acts as a dynamic weight on that term.
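
A minimal PyTorch sketch of this weighting effect, with hypothetical tensor names (`fea_bank`, `score_bank`) and a standard memory-bank setup; it is an illustration, not the repository's exact code:

```python
import torch
import torch.nn.functional as F

# Assumed shapes: fea_bank (N, D) holds L2-normalized target features,
# score_bank (N, C) holds the corresponding softmax predictions.
def neighborhood_loss(features, scores, fea_bank, score_bank, k=5):
    features = F.normalize(features, dim=1)
    sim = features @ fea_bank.t()            # (B, N) cosine similarities
    # The query's own entry in the bank is NOT masked out, so a sample can
    # retrieve itself; how often that happens acts as a dynamic weight on an
    # implicit self-regularization term (a prediction dotted with itself).
    _, nn_idx = sim.topk(k=k, dim=1)         # (B, k) nearest-neighbor indices
    nn_scores = score_bank[nn_idx]           # (B, k, C) neighbors' predictions
    # Encourage consistent predictions across the retrieved neighborhood.
    return -(scores.unsqueeze(1) * nn_scores).sum(-1).mean()
```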

Dataset preparation

Download the VisDA and Office-Home datasets (use our provided image list files) and set the path to the data lists in the code. The code is expected to reproduce the results with PyTorch 1.3 and CUDA 10.0.
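
If you need to regenerate a data list, here is a small sketch that assumes the common "image_path label_index" one-line-per-image format; the paths and file names below are placeholders, so adjust them to your setup:

```python
import os

root = "/data/office-home/Art"        # placeholder path to one domain
classes = sorted(os.listdir(root))    # class folders define the label indices

# Write one "image_path label_index" pair per line, the format typically
# consumed by ImageList-style loaders (assumed to match the provided lists).
with open("Art_list.txt", "w") as f:
    for label, cls in enumerate(classes):
        for name in sorted(os.listdir(os.path.join(root, cls))):
            f.write(f"{os.path.join(root, cls, name)} {label}\n")
```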

Checkpoint

You can find all the training log files and the weights (before and after adaptation) for VisDA and Office-Home at this link. If you want to reproduce the results quickly, please use the provided source model.

VisDA

First train the model on the source domain, then perform target adaptation without source data:

python train_src.py

python train_tar.py

Office-Home

Code for Office-Home is in the 'office-home' folder.

sh train_src_oh.sh

sh train_tar_oh.sh

PointDA-10

Code in the 'pointDA-10' folder is based on PointDAN. Run src.sh for source pretraining and tar.sh for source-free domain adaptation.

Comments
  • Reproducing office31 results

    Good morning, which hyperparameters should I use to reproduce your Office31 results? I tried the same ones as for Office-Home, but I get results much lower than the ones in the paper.

    opened by andreamaracani 7
  • Where is code for self-regularization and L_div?

    I have another question. In the paper you have losses for self-regularization and diversity, yet the code only uses the entropy of the predictions as L_div. Why did you implement it like this? Thanks.

    opened by chenslcool 5
  • paper and code implementation are different

    Thank you for the nice work! I notice that the paper uses a dot product to measure the similarity between predictions, while the code uses a KL divergence.

    paper: (screenshot of the equation from the paper)

    code: https://github.com/Albert0147/NRC_SFDA/blob/1c2616039635710fa704d24a5cf7cfc9b47172bc/office-home/train_tar.py#L315-L317 I wonder why you do this? Thanks! (Both formulations are sketched after this comment list.)

    opened by chenslcool 2
  • Requesting requirements.txt or environment.yml

    Hi Shiqi,

    While trying to reproduce this work, I found some conflicting package versions. Can you please share the required package versions and dependencies used for implementing this work?

    Best, Anindya

    opened by mondalanindya 1
  • About reproduction on VISDA-C

    First of all, thanks for your excellent work. I tried to reproduce the VISDA-C results with your provided source model and code, and I got the following log:

    Task: TV, Iter:866/12990; Accuracy on target = 82.84%
    T: 95.86 82.47 83.18 65.42 94.33 96.53 87.63 80.9 89.38 87.2 88.81 42.34
    Task: TV, Iter:1732/12990; Accuracy on target = 84.01%
    T: 96.46 88.2 82.49 63.41 95.31 96.19 85.46 80.65 90.37 89.35 89.59 50.61
    Task: TV, Iter:2598/12990; Accuracy on target = 84.38%
    T: 96.65 88.6 83.54 62.27 95.25 96.34 87.01 79.97 89.76 91.32 88.62 53.24
    Task: TV, Iter:3464/12990; Accuracy on target = 84.73%
    T: 96.43 88.49 84.09 63.28 95.57 96.77 86.27 79.38 91.69 91.49 90.63 52.72
    Task: TV, Iter:4330/12990; Accuracy on target = 84.87%
    T: 96.63 89.09 83.45 57.55 95.61 96.53 86.18 80.65 92.66 91.54 91.64 56.87
    Task: TV, Iter:5196/12990; Accuracy on target = 85.13%
    T: 96.74 90.39 83.5 59.74 96.25 96.0 85.68 78.92 93.36 93.07 88.81 59.08
    Task: TV, Iter:6062/12990; Accuracy on target = 84.71%
    T: 97.04 90.59 83.71 57.33 96.03 96.82 81.88 79.82 92.28 92.33 90.49 58.22
    Task: TV, Iter:6928/12990; Accuracy on target = 84.81%
    T: 96.9 91.97 83.37 55.54 95.84 95.95 82.47 80.35 92.88 92.59 91.29 58.6
    Task: TV, Iter:7794/12990; Accuracy on target = 84.18%
    T: 96.98 91.42 84.18 48.25 95.65 96.53 82.3 79.72 93.56 93.12 90.53 57.91
    Task: TV, Iter:8660/12990; Accuracy on target = 83.79%
    T: 96.52 92.75 84.41 44.68 95.74 95.95 81.35 79.55 92.77 93.34 91.67 56.8
    Task: TV, Iter:9526/12990; Accuracy on target = 83.87%
    T: 97.2 90.5 84.52 41.13 96.35 96.29 86.37 79.88 93.54 91.93 92.33 56.43
    Task: TV, Iter:10392/12990; Accuracy on target = 83.60%
    T: 97.09 91.83 84.33 39.73 96.1 95.9 85.21 79.32 93.3 93.42 91.9 55.05
    Task: TV, Iter:11258/12990; Accuracy on target = 83.73%
    T: 97.04 90.96 86.76 40.48 96.5 95.76 87.01 79.18 93.6 93.12 90.93 53.48
    Task: TV, Iter:12124/12990; Accuracy on target = 83.71%
    T: 97.42 91.91 85.31 40.59 96.76 95.9 85.04 80.12 93.34 92.15 91.45 54.56
    Task: TV, Iter:12990/12990; Accuracy on target = 83.59%
    T: 96.98 91.31 84.54 39.24 96.4 95.76 85.84 80.05 93.8 93.56 92.23 53.42
    

    The training progress is totally different from the one shown in the provided log. Can you kindly give me some suggestions for reproducing the results?

    opened by tiangarin 1
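
As a rough aid to the two issues above about L_div and the similarity measure, the sketch below contrasts a dot-product neighborhood term with a KL-based one and adds an entropy-style diversity term. Tensor names, shapes, and the exact KL direction are assumptions for illustration, not the repository's implementation:

```python
import torch
import torch.nn.functional as F

# Assumed shapes: p (B, C) current softmax predictions,
# nn_scores (B, K, C) stored predictions of the retrieved neighbors.
def neighbor_dot(p, nn_scores):
    # Dot-product formulation (the form described in the paper, per the issue above).
    return -(p.unsqueeze(1) * nn_scores).sum(-1).mean()

def neighbor_kl(p, nn_scores):
    # KL formulation (what the issue reports the code uses); the exact
    # direction chosen here is an assumption.
    log_p = torch.log(p + 1e-8).unsqueeze(1).expand_as(nn_scores)
    return F.kl_div(log_p, nn_scores, reduction="batchmean")

def diversity_loss(p):
    # L_div: discourage collapse by pushing the batch-mean prediction toward
    # uniform, i.e. maximizing its entropy.
    p_mean = p.mean(dim=0)
    return (p_mean * torch.log(p_mean + 1e-8)).sum()
```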