Code for the CVPR 2021 paper "Visualizing Adapted Knowledge in Domain Transfer": visualization for domain adaptation. #explainable-ai

Overview

Visualizing Adapted Knowledge in Domain Transfer

@inproceedings{hou2021visualizing,
  title={Visualizing Adapted Knowledge in Domain Transfer},
  author={Hou, Yunzhong and Zheng, Liang},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2021}
}

Under construction

This repo is dedicated to visualizing the learned knowledge in domain adaptation. To understand the adaptation process, we portray the knowledge difference between the source and target models with image translation, using the source-free image translation (SFIT) method proposed in our CVPR 2021 paper Visualizing Adapted Knowledge in Domain Transfer.

Specifically, we feed the generated source-style image to the source model and the original target image to the target model, forming two branches. By updating the generated image, we force the two branches to produce similar outputs. Once this requirement is met, the image difference compensates for, and can therefore represent, the knowledge difference between the two models.
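The two-branch objective described above can be sketched in a few lines of numpy. This is a minimal illustration, not the repo's actual implementation: the model callables, function names, and the choice of KL divergence as the output-matching loss are placeholders for illustration.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two categorical distributions."""
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

def sfit_output_loss(source_model, target_model, generated_image, target_image):
    """Discrepancy between the two branches: the source model sees the
    generated source-style image, the target model sees the original
    target image. Minimizing this w.r.t. the generated image makes the
    image difference stand in for the knowledge difference between models."""
    p_src = softmax(source_model(generated_image))
    p_tgt = softmax(target_model(target_image))
    return kl_divergence(p_tgt, p_src).mean()
```

In the paper, this discrepancy is minimized with respect to the generated image only; both models stay frozen, so any remaining image change reflects what the target model learned during adaptation.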

Content

Dependencies

This code uses the following libraries:

  • python 3.7+
  • pytorch 1.6+ & torchvision
  • numpy
  • matplotlib
  • pillow
  • scikit-learn

Data Preparation

By default, all datasets are in ~/Data/. We use digits (automatically downloaded), Office-31, and VisDA datasets.

Your ~/Data/ folder should look like this:

Data
├── digits/
│   └── ...
├── office31/ 
│   └── ...
└── visda/
    └── ...
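Before training, you can sanity-check that the expected dataset folders exist. A small helper, assuming the layout above (the folder names follow the tree; the function itself is not part of the repo):

```python
from pathlib import Path

# Dataset folder names expected under the data root, per the tree above.
EXPECTED_DATASETS = ("digits", "office31", "visda")

def missing_datasets(data_root="~/Data"):
    """Return the expected dataset folders that are absent under data_root."""
    root = Path(data_root).expanduser()
    return [name for name in EXPECTED_DATASETS if not (root / name).is_dir()]
```

Note that the digits datasets are downloaded automatically on first run, so only office31 and visda need to be prepared by hand.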

Run the Code

Train source and target models

Once data preparation is finished, you can train the source and target models with unsupervised domain adaptation (UDA) methods:

python train_DA.py -d digits --source svhn --target mnist

Currently, we support MMD (--da_setting mmd), ADDA (--da_setting adda), and SHOT (--da_setting shot).
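Of the three supported settings, MMD is the simplest: it aligns source and target feature distributions by minimizing their maximum mean discrepancy. As a reference for what --da_setting mmd computes, here is a small numpy sketch of the biased RBF-kernel MMD estimate (the repo's actual implementation may differ in kernel choice and bandwidth):

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    """RBF kernel matrix k(a_i, b_j) = exp(-gamma * ||a_i - b_j||^2)."""
    sq = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def mmd2(source_feats, target_feats, gamma=1.0):
    """Biased estimate of the squared maximum mean discrepancy
    between source and target feature distributions."""
    k_ss = rbf_kernel(source_feats, source_feats, gamma).mean()
    k_tt = rbf_kernel(target_feats, target_feats, gamma).mean()
    k_st = rbf_kernel(source_feats, target_feats, gamma).mean()
    return k_ss + k_tt - 2.0 * k_st
```

This quantity is zero when the two feature sets coincide and grows as their distributions drift apart, which is what the adaptation loss pushes down during training.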

Visualization

Based on the trained source and target models, we visualize their knowledge difference via SFIT:

python train_SFIT.py -d digits --source svhn --target mnist
