A collection of implementations of deep domain adaptation algorithms

Overview

Deep Transfer Learning on PyTorch

MIT License

This is a PyTorch library for deep transfer learning. We divide the code into two aspects: Single-source Unsupervised Domain Adaptation (SUDA) and Multi-source Unsupervised Domain Adaptation (MUDA). There are many SUDA methods, but I find there are only a few deep-learning-based MUDA methods. Moreover, MUDA with deep learning might be a more promising direction for domain adaptation.

Here I have implemented some deep transfer methods as follows:

  • UDA
    • DDC: Deep Domain Confusion: Maximizing for Domain Invariance
    • DAN: Learning Transferable Features with Deep Adaptation Networks (ICML 2015)
    • Deep CORAL: Correlation Alignment for Deep Domain Adaptation (ECCV 2016) (see the loss sketch after this list)
    • RevGrad: Unsupervised Domain Adaptation by Backpropagation (ICML 2015)
    • MRAN: Multi-representation adaptation network for cross-domain image classification (Neural Networks 2019)
    • DSAN: Deep Subdomain Adaptation Network for Image Classification (IEEE Transactions on Neural Networks and Learning Systems 2020)
  • MUDA
    • Aligning Domain-specific Distribution and Classifier for Cross-domain Classification from Multiple Sources (AAAI2019)
  • Application
    • Cross-domain Fraud Detection: Modeling Users’ Behavior Sequences with Hierarchical Explainable Network for Cross-domain Fraud Detection (WWW2020)
    • Learning to Expand Audience via Meta Hybrid Experts and Critics for Recommendation and Advertising (KDD2021)
  • Survey
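
To give a flavor of what these methods optimize, below is a minimal sketch of the CORAL loss from Deep CORAL, which penalizes the squared Frobenius distance between the source and target feature covariances. The function name and batch shapes are illustrative, not taken from this repository.

    import torch

    def coral_loss(source, target):
        # source, target: (batch, d) feature matrices from the two domains.
        d = source.size(1)
        # Center the features, then form the covariance matrices.
        s = source - source.mean(dim=0, keepdim=True)
        t = target - target.mean(dim=0, keepdim=True)
        cov_s = s.t() @ s / (source.size(0) - 1)
        cov_t = t.t() @ t / (target.size(0) - 1)
        # The 1/(4*d^2) normalization follows the Deep CORAL paper.
        return ((cov_s - cov_t) ** 2).sum() / (4 * d * d)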

Results on Office31 (UDA)

| Method | A→W | D→W | W→D | A→D | D→A | W→A | Average |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ResNet | 68.4±0.5 | 96.7±0.5 | 99.3±0.1 | 68.9±0.2 | 62.5±0.3 | 60.7±0.3 | 76.1 |
| DDC | 75.8±0.2 | 95.0±0.2 | 98.2±0.1 | 77.5±0.3 | 67.4±0.4 | 64.0±0.5 | 79.7 |
| DDC* | 78.3±0.4 | 97.1±0.1 | 100.0±0.0 | 81.7±0.9 | 65.2±0.6 | 65.1±0.4 | 81.2 |
| DAN | 83.8±0.4 | 96.8±0.2 | 99.5±0.1 | 78.4±0.2 | 66.7±0.3 | 62.7±0.2 | 81.3 |
| DAN* | 82.6±0.7 | 97.7±0.1 | 100.0±0.0 | 83.1±0.9 | 66.8±0.3 | 66.6±0.4 | 82.8 |
| DCORAL* | 79.0±0.5 | 98.0±0.2 | 100.0±0.0 | 82.7±0.1 | 65.3±0.3 | 64.5±0.3 | 81.6 |
| RevGrad | 82.0±0.4 | 96.9±0.2 | 99.1±0.1 | 79.7±0.4 | 68.2±0.4 | 67.4±0.5 | 82.2 |
| RevGrad* | 82.6±0.9 | 97.8±0.2 | 100.0±0.0 | 83.3±0.9 | 66.8±0.1 | 66.1±0.5 | 82.8 |
| MRAN | 91.4±0.1 | 96.9±0.3 | 99.8±0.2 | 86.4±0.6 | 68.3±0.5 | 70.9±0.6 | 85.6 |
| DSAN | 93.6±0.2 | 98.4±0.1 | 100.0±0.0 | 90.2±0.7 | 73.5±0.5 | 74.8±0.4 | 88.4 |

Note that the results without '*' come from the original papers, while the results with '*' were obtained by running this code myself.

Results on Office31 (MUDA)

| Standards | Method | A,W→D | A,D→W | D,W→A | Average |
| --- | --- | --- | --- | --- | --- |
| Single Best | ResNet | 99.3 | 96.7 | 62.5 | 86.2 |
| Single Best | DAN | 99.5 | 96.8 | 66.7 | 87.7 |
| Single Best | DCORAL | 99.7 | 98.0 | 65.3 | 87.7 |
| Single Best | RevGrad | 99.1 | 96.9 | 68.2 | 88.1 |
| Source Combine | DAN | 99.6 | 97.8 | 67.6 | 88.3 |
| Source Combine | DCORAL | 99.3 | 98.0 | 67.1 | 88.1 |
| Source Combine | RevGrad | 99.7 | 98.1 | 67.6 | 88.5 |
| Multi-Source | MFSAN | 99.5 | 98.5 | 72.7 | 90.2 |

Results on OfficeHome (MUDA)

| Standards | Method | C,P,R→A | A,P,R→C | A,C,R→P | A,C,P→R | Average |
| --- | --- | --- | --- | --- | --- | --- |
| Single Best | ResNet | 65.3 | 49.6 | 79.7 | 75.4 | 67.5 |
| Single Best | DAN | 64.1 | 50.8 | 78.2 | 75.0 | 67.0 |
| Single Best | DCORAL | 68.2 | 56.5 | 80.3 | 75.9 | 70.2 |
| Single Best | RevGrad | 67.9 | 55.9 | 80.4 | 75.8 | 70.0 |
| Source Combine | DAN | 68.5 | 59.4 | 79.0 | 82.5 | 72.4 |
| Source Combine | DCORAL | 68.1 | 58.6 | 79.5 | 82.7 | 72.2 |
| Source Combine | RevGrad | 68.4 | 59.1 | 79.5 | 82.7 | 72.4 |
| Multi-Source | MFSAN | 72.1 | 62.0 | 80.3 | 81.8 | 74.1 |

Note that (1) Source combine: all source domains are combined into a single source, i.e. a traditional single-source vs. target setting. (2) Single best: among the multiple source domains, we report the best single-source transfer result. (3) Multi-source: the results of MUDA methods.

Note

If you find that your accuracy is 100%, the problem might be the dataset folder. Please note that the folder structure required for the data provider to work is:

-dataset
    -amazon
    -webcam
    -dslr
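
For reference, a data provider that follows this layout can be built directly on torchvision's ImageFolder, which treats each sub-folder of a domain directory as one class (the paths and transform below are illustrative):

    import torch
    from torchvision import datasets, transforms

    # Each domain folder (amazon/webcam/dslr) must contain one sub-folder
    # per class; ImageFolder derives the labels from the sub-folder names.
    transform = transforms.Compose([
        transforms.Resize([224, 224]),
        transforms.ToTensor(),
    ])
    source = datasets.ImageFolder('./dataset/amazon', transform=transform)
    loader = torch.utils.data.DataLoader(source, batch_size=32, shuffle=True)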

Contact

If you have any problems with this library, please create an Issue or send us an email at:

Reference

If you use this repository, please cite the following papers:

@inproceedings{zhu2019aligning,
  title={Aligning domain-specific distribution and classifier for cross-domain classification from multiple sources},
  author={Zhu, Yongchun and Zhuang, Fuzhen and Wang, Deqing},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={33},
  pages={5989--5996},
  year={2019}
}
@article{zhu2020deep,
  title={Deep Subdomain Adaptation Network for Image Classification},
  author={Zhu, Yongchun and Zhuang, Fuzhen and Wang, Jindong and Ke, Guolin and Chen, Jingwu and Bian, Jiang and Xiong, Hui and He, Qing},
  journal={IEEE Transactions on Neural Networks and Learning Systems},
  year={2020},
  publisher={IEEE}
}
Comments
  • Questions about RevGrad

    Hi @easezyc , thanks for your great implementations. While trying RevGrad on PyTorch 1.0, I ran into some questions. Would you help me?

    1. In the original paper, the optimizer is set with momentum=0.9. However, at line 62, the optimizer is re-created every iteration, which means the momentum buffer is reset every time. https://github.com/easezyc/deep-transfer-learning/blob/cc97b7d248b7e7d9b187a3bae99eb560c458f89c/UDA/pytorch1.0/RevGrad/RevGrad.py#L62

    2. optimizer_critic never seems to call optimizer_critic.step(). https://github.com/easezyc/deep-transfer-learning/blob/cc97b7d248b7e7d9b187a3bae99eb560c458f89c/UDA/pytorch1.0/RevGrad/RevGrad.py#L63

    3. I tried to fix these issues, but I cannot reproduce the reported results. My modifications are below.

    # define optimizer_fea once, before the training loop
        optimizer_fea = torch.optim.SGD([
            {'params': model.sharedNet.parameters(), 'base_lr': lr / 10},
            {'params': model.cls_fn.parameters(), 'base_lr': lr},
            {'params': model.domain_fn.parameters(), 'base_lr': lr},
            ], lr=lr, momentum=momentum, weight_decay=l2_decay)

    . . .

    for i in range(1, iteration + 1):
        # anneal each group's learning rate during training
        for param_group in optimizer_fea.param_groups:
            param_group['lr'] = param_group['base_lr'] / math.pow((1 + 10 * (i - 1) / iteration), 0.75)


    Did I miss anything? Thanks for your help!

    opened by YangMengHsuan 7
  • TSNE visualization

    Hello, I was going through the DSAN paper and codebase. I was wondering if you could make the t-SNE visualization code available as well. It would be helpful for me. I will cite your paper.

    opened by ashiqimran 7
  • DAN error report

    Hi, thanks for your code.

    There's an error when I run the DAN code.

    The log can be seen here: [image]

    The only thing I changed in the code is the root_path, from /data/zhuyc/OFFICE31/ to ./dataset/Original_images/ (apart from adding print logs with "tt" and "ttt"). I checked the sizes of the source and target data and found nothing wrong; both are [32, 3, 224, 224]. Is there a mistake in the code, or did I run it the wrong way?

    Thank you in advance and have a good day :>

    opened by VoiceBeer 7
  • A question about data usage during DFSAN training

    Hello, and thank you very much for your code.

    I have a few questions:

    1. During training, if you want to use all of the data, the usual pattern is an outer epoch loop with an inner loop over the dataloader. But in MFSAN the input only comes from a single next() call on the iterator, so in each epoch the source and target data are just one batch each? Why not transfer over all the source and target images in every epoch? Is that a time-cost consideration, or a sampling trick?

    2. In a similar vein, take MFSAN2 as an example: according to your code, in the first epoch the first batch of source1 is computed against the first batch of the target, and then the first batch of source2 is computed against the second batch of the target. If the target batches are not the same, could that have some effect?

    Thank you very much for your answer, and I wish you many more top-conference papers :>

    opened by VoiceBeer 6
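
    For readers puzzled by the same point: the usual way such per-batch training loops avoid exhausting a DataLoader is to re-create its iterator on StopIteration, so every image is still visited across iterations. A minimal sketch of that pattern (the names are illustrative, not the repository's):

        source_iter = iter(source_loader)
        target_iter = iter(target_loader)
        for i in range(1, iteration + 1):
            try:
                source_data, source_label = next(source_iter)
            except StopIteration:
                source_iter = iter(source_loader)  # restart after a full pass
                source_data, source_label = next(source_iter)
            try:
                target_data, _ = next(target_iter)
            except StopIteration:
                target_iter = iter(target_loader)
                target_data, _ = next(target_iter)
            # ... one optimization step on this batch pair ...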
  • A supplement to the dataset of DAN

    If you are in a situation where the soft loss is 0 and accuracy is 100%, you have probably met the same problem as me. You need to check your dataset and ensure that the class sub-folders, such as bike and back_pack, sit under the category folder rather than bare image files. The correct dataset layout is shown in the figure. I hope this works and can help you. [example image]

    help wanted good first issue 
    opened by MiaoHaoSunny 5
  • Problem about the t-SNE

    Hello, excellent work, and thank you for open-sourcing it! I am trying to reproduce your DSAN work, but the t-SNE results I get are not ideal. For the A→W task, how many source-domain and target-domain samples did you select for the t-SNE visualization? I selected 480 samples from each domain, but the resulting figure does not look good. If possible, could you share the t-SNE code? email: [email protected]

    Looking forward to your reply! Thanks!

    opened by mr6737 3
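
    Since the repository does not ship the visualization script, a minimal t-SNE sketch with scikit-learn along the following lines is a reasonable starting point (the feature-extraction step and variable names are assumptions, not the authors' code):

        import matplotlib.pyplot as plt
        from sklearn.manifold import TSNE

        # features: (n_source + n_target, d) numpy array of bottleneck
        # features from the trained model; domain: 0 = source, 1 = target.
        emb = TSNE(n_components=2, perplexity=30, init='pca').fit_transform(features)
        plt.scatter(emb[domain == 0, 0], emb[domain == 0, 1], s=5, label='source')
        plt.scatter(emb[domain == 1, 0], emb[domain == 1, 1], s=5, label='target')
        plt.legend()
        plt.savefig('tsne.png')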
  • Question about RevGrad

    Hi @easezyc ,

    You provided two versions of the RevGrad implementation, one for PyTorch 0.3 and one for PyTorch 1.0.

    In PyTorch 0.3 the code looks like this:

        class RevGrad(nn.Module):

            def __init__(self, num_classes=31):
                super(RevGrad, self).__init__()
                self.sharedNet = resnet50(False)
                self.cls_fc = nn.Linear(2048, num_classes)
                self.domain_fc = nn.Linear(2048, 2)

            def forward(self, data):
                data = self.sharedNet(data)
                clabel_pred = self.cls_fc(data)
                dlabel_pred = self.domain_fc(data)
                return clabel_pred, dlabel_pred
    

    and in PyTorch 1.0 the code looks like this:

    class RevGrad(nn.Module):
    
        def __init__(self, num_classes=31):
            super(RevGrad, self).__init__()
            self.sharedNet = resnet50(True)
            self.cls_fn = nn.Linear(2048, num_classes)
            self.domain_fn = AdversarialNetwork(in_feature=2048)
                 
        def forward(self, data):
            data = self.sharedNet(data)
            clabel_pred = self.cls_fn(data)
            dlabel_pred = self.domain_fn(AdversarialLayer(high_value=1.0)(data))
            #print(dlabel_pred)
            return clabel_pred, dlabel_pred
    
    
    
    class AdversarialNetwork(nn.Module):
        def __init__(self, in_feature):
            super(AdversarialNetwork, self).__init__()
            self.ad_layer1 = nn.Linear(in_feature,1024)
            self.ad_layer2 = nn.Linear(1024,1024)
            self.ad_layer3 = nn.Linear(1024, 1)
            self.ad_layer1.weight.data.normal_(0, 0.01)
            self.ad_layer2.weight.data.normal_(0, 0.01)
            self.ad_layer3.weight.data.normal_(0, 0.3)
            self.ad_layer1.bias.data.fill_(0.0)
            self.ad_layer2.bias.data.fill_(0.0)
            self.ad_layer3.bias.data.fill_(0.0)
            self.relu1 = nn.ReLU()
            self.relu2 = nn.ReLU()
            self.dropout1 = nn.Dropout(0.5)
            self.dropout2 = nn.Dropout(0.5)
            self.sigmoid = nn.Sigmoid()
    
        def forward(self, x):
            x = self.ad_layer1(x)
            x = self.relu1(x)
            x = self.dropout1(x)
            x = self.ad_layer2(x)
            x = self.relu2(x)
            x = self.dropout2(x)
            x = self.ad_layer3(x)
            x = self.sigmoid(x)
            return x
    
        def output_num(self):
            return 1
    
    class AdversarialLayer(torch.autograd.Function):
      def __init__(self, high_value=1.0):
        self.iter_num = 0
        self.alpha = 10
        self.low = 0.0
        self.high = high_value
        self.max_iter = 2000.0
        
      def forward(self, input):
        self.iter_num += 1
        output = input * 1.0
        return output
    
      def backward(self, gradOutput):
        self.coeff = np.float(2.0 * (self.high - self.low) / (1.0 + np.exp(-self.alpha*self.iter_num / self.max_iter)) - (self.high - self.low) + self.low)
        return -self.coeff * gradOutput
    

    My question is: which implementation is correct? If both are correct, could you please explain the difference a bit? Thanks in advance.

    opened by deepai-lab 3
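
    A side note for anyone hitting this on newer PyTorch: instance-style autograd Functions like the AdversarialLayer above stopped working after PyTorch 1.2, and the gradient-reversal layer is now usually written with static methods. A minimal sketch (the constant coeff shown here is an assumption; the repository anneals it over iterations):

        import torch

        class GradReverse(torch.autograd.Function):
            @staticmethod
            def forward(ctx, x, coeff):
                ctx.coeff = coeff
                return x.view_as(x)  # identity on the forward pass

            @staticmethod
            def backward(ctx, grad_output):
                # Reverse (and scale) the gradient flowing back to the features.
                return -ctx.coeff * grad_output, None

        def grad_reverse(x, coeff=1.0):
            return GradReverse.apply(x, coeff)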
  • How to understand the MMD Loss?

    You have done a great job, but I have a question about the MMD loss.

    You use a Gaussian kernel for the computation; what do the kernel_mul and kernel_num arguments in your code mean? [image] Thank you very much.

    opened by taylover-pei 3
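
    For others with the same question: in multi-kernel MMD implementations of this style, kernel_num is the number of Gaussian kernels that are summed, and kernel_mul is the factor by which the bandwidth grows from one kernel to the next around a base bandwidth estimated from the data. A minimal sketch of that kernel family (illustrative, not a verbatim copy of the repository's function):

        import torch

        def gaussian_kernel(x, y, kernel_mul=2.0, kernel_num=5):
            # Sum of kernel_num Gaussian kernels whose bandwidths form a
            # geometric family, e.g. base/mul^2, base/mul, base, base*mul, base*mul^2.
            total = torch.cat([x, y], dim=0)
            dist = ((total.unsqueeze(0) - total.unsqueeze(1)) ** 2).sum(2)
            base = dist.mean().detach()  # simple bandwidth heuristic
            bandwidths = [base * kernel_mul ** (i - kernel_num // 2)
                          for i in range(kernel_num)]
            return sum(torch.exp(-dist / bw) for bw in bandwidths)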
  • Domain-specific Classifier Alignment Loss

    In your paper, you mention a discrepancy loss for aligning the classifiers, but I did not find it in your code. Did I miss it? Could you please explain?

    Thank you in advance.

    opened by deep0learning 3
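
    For reference, the classifier-alignment term described in the MFSAN paper is the mean absolute difference between the softmax outputs of the domain-specific classifiers on the same target batch; a minimal sketch (tensor names are illustrative):

        import torch
        import torch.nn.functional as F

        def classifier_discrepancy(pred1, pred2):
            # pred1, pred2: raw logits of two domain-specific classifiers
            # evaluated on the same target batch.
            return torch.mean(torch.abs(F.softmax(pred1, dim=1)
                                        - F.softmax(pred2, dim=1)))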
  • MRAN running error: AttributeError: 'SGD' object has no attribute 'param_group'

    Traceback (most recent call last):
      File "MRAN.py", line 131
        optimizer.param_group[0]['lr'] = args.lr[0] / math.pow((1 + 10 * (epoch - 1) / args.epochs), 0.75)
    AttributeError: 'SGD' object has no attribute 'param_group'

    opened by zhen1202 2
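
    The attribute is param_groups (plural); a one-line fix along these lines should resolve the error:

        # torch.optim.SGD exposes its parameter groups as 'param_groups',
        # not 'param_group' (math is assumed to be imported in MRAN.py).
        optimizer.param_groups[0]['lr'] = args.lr[0] / math.pow(
            (1 + 10 * (epoch - 1) / args.epochs), 0.75)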
  • Reproducing t-SNE results

    Hi,

    First of all, thanks for the great work!

    For DSAN, I am wondering if you could also release the source code for reproducing the t-SNE results. I tried a PyTorch t-SNE implementation, but the generated result differs from the one reported in the paper.

    UPDATE: I saw a closed issue asking for the t-SNE implementation in which you asked for an email address. Would you please also send me a copy of the code? My email address is: [email protected]

    Thanks for your valuable time and I am looking forward to hearing from you soon!

    opened by daddyke 2
  • MFSAN tSNE

    Dear Authors

    Thank you very much for your excellent work and for making the code publicly available. As you have sent the t-SNE visualization code to others, I am also requesting the code for MFSAN t-SNE. I would really appreciate it. My email is [email protected]

    Thanks in advance

    opened by rkushol 0