Overview

CDAN

Code release for "Conditional Adversarial Domain Adaptation" (NIPS 2018)

New version: https://github.com/thuml/Transfer-Learning-Library

Dataset

Digits

The processed SVHN dataset is here; we converted the original .mat files into images. The other transformed images are in data/svhn2mnist and data/usps2mnist. The Dataset_train.txt files are the image lists for the source and target domains, and the Dataset_test.txt files are the lists for testing.
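
The list files follow the common one-sample-per-line "image_path label" convention. A minimal sketch of reading such a list (the file name is a hypothetical example):

    def read_image_list(list_path):
        # Each line is "image_path label"; returns (path, int_label) pairs.
        samples = []
        with open(list_path) as f:
            for line in f:
                parts = line.split()
                if len(parts) == 2:
                    samples.append((parts[0], int(parts[1])))
        return samples

    pairs = read_image_list("data/usps2mnist/usps_train.txt")  # hypothetical list file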

Office-31

The Office-31 dataset can be found here.

Office-Home

The Office-Home dataset can be found here.

VisDA-2017

The VisDA-2017 dataset can be found here, under the classification track.

ImageCLEF

We release the ImageCLEF dataset we used here.

Training

Training instructions for Caffe and PyTorch are in the README.md files in the caffe and pytorch directories, respectively.
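
For example, the PyTorch training script is invoked roughly as follows; the flag names appear in the issue reports further down this page, while the concrete paths and values here are placeholders:

    python train_image.py --gpu_id 0 --net ResNet50 --dset office --test_interval 500 --s_dset_path ../data/office/amazon_list.txt --t_dset_path ../data/office/webcam_list.txt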

The TensorFlow version is under development.

Citation

If you use this code for your research, please consider citing:

@inproceedings{long2018conditional,
  title={Conditional adversarial domain adaptation},
  author={Long, Mingsheng and Cao, Zhangjie and Wang, Jianmin and Jordan, Michael I},
  booktitle={Advances in Neural Information Processing Systems},
  pages={1645--1655},
  year={2018}
}

Contact

If you have any problem with our code, feel free to contact us or describe your problem in Issues.

Comments
  • question about training process

    Thanks for your impressive work! When I run your PyTorch code for "Amazon to Webcam" on my workstation, the printed line "iter: 00499, precision: 0.74214" means the test precision is only 0.74214, which is far from the precision you mentioned in "Conditional Adversarial Domain Adaptation". After training for a long time, the precision does not improve. Could you tell me how to train and test it?

    opened by ThoamsDong 8
  • Alexnet’s based model performance in pytorch

    Hi, thanks for your excellent work. I am re-implementing your result for my project, and I have an issue: when the base CNN is AlexNet, do you get the same performance on the Office-31 dataset in PyTorch as with your Caffe model? I tried to run your PyTorch code for the task Amazon to Webcam (with AlexNet) but got around 71% accuracy after more than 25,000 iterations, which is not as reported in the paper. It is worth noting that your ResNet version works perfectly fine on my machine.

    opened by tung-qle 5
  • DANN

    Hello, there was an error when I ran the loss function of DANN: the dimensions did not match. Did you use this code when you ran DANN? I can't find DANN's code online right now.

    opened by ECHOJANE 4
  • SVHN -> MNIST Accuracy

    Hi, I am trying to run the code for SVHN to MNIST. However, with CDAN I only get ~84% accuracy (88.5% is reported in the paper). Is there anything I should tune, or is the variance of the accuracy very high? Also, there is svhn_balanced.txt; how is it generated from svhn.txt? The class distributions are not uniform in either version. Does the balanced version make a difference? Thank you!

    opened by jongchyisu 3
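
    On the svhn_balanced.txt question above: the repository does not document how that file was generated, but a class-balanced subset of a "path label" list can be produced along these lines (a sketch, not the authors' script; per_class is illustrative):

        import random
        from collections import defaultdict

        def balance_list(in_path, out_path, per_class, seed=0):
            # Group "image_path label" lines by label, then draw the same
            # number of samples from every class.
            by_class = defaultdict(list)
            with open(in_path) as f:
                for line in f:
                    parts = line.split()
                    if len(parts) == 2:
                        by_class[parts[1]].append(line)
            random.seed(seed)
            with open(out_path, "w") as f:
                for label in sorted(by_class):
                    lines = by_class[label]
                    f.writelines(random.sample(lines, min(per_class, len(lines))))

        balance_list("svhn.txt", "svhn_balanced.txt", per_class=4000)
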
  • question about pre-process of image for alexnet

    I notice that the PyTorch version of your code implements pre-processing in pre_process.py, always calling transforms.ToTensor(), which scales images to the range [0, 1]. Since your AlexNet is based on this repo, where all images are read as RGB images in the range [0, 255] before being fed into AlexNet: given that AlexNet was pretrained on [0, 255]-range images and is then fine-tuned (used as the backbone) on [0, 1]-range images, will that hurt the performance of the algorithm?

    opened by meeio 3
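
    On the [0, 1] vs. [0, 255] question above: one common remedy is to rescale after ToTensor() so a Caffe-style AlexNet sees the range it was pretrained on. A sketch, assuming the usual Caffe-style per-channel means (the exact values this repo's converted model expects are not documented here):

        import torchvision.transforms as transforms

        caffe_style = transforms.Compose([
            transforms.Resize(256),
            transforms.CenterCrop(224),
            transforms.ToTensor(),                   # scales pixels to [0, 1]
            transforms.Lambda(lambda x: x * 255.0),  # back to [0, 255]
            transforms.Normalize(mean=[123.68, 116.779, 103.939],  # assumed means
                                 std=[1.0, 1.0, 1.0]),
        ])
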
  • path

    May I ask which statement I should change to modify the path for reading data? Also, the test function in the train_svhnmnist.py file shows an error in args. How can I solve this?

    opened by ECHOJANE 2
  • Quick question about visda target-set

    Thanks for releasing your code; impressive work and a good paper. I see that in the paper you report CDAN's result as 70% on the VisDA dataset. Is that on VisDA's validation set or its test set? These two can be seen as two targets, and the labels for the test set are not released, right? So is the 70% on the validation set, or obtained after submitting your predictions on the test set to their CodaLab site?

    Thanks a lot

    opened by kowshikthopalli 2
  • AlexNet Pretrained model

    Hi, I got the following error.

    FileNotFoundError: [Errno 2] No such file or directory: './alexnet.pth.tar'

    Can you please provide the alexnet.pth.tar file? Thanks in advanced.

    opened by anonymous1computervision 2
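
    If you only need ImageNet-pretrained AlexNet weights as a stand-in, torchvision ships them; note this is the PyTorch-style model, not the Caffe-converted checkpoint this repo expects, and the saved dict layout below is a guess:

        import torch
        from torchvision import models

        # Stand-in checkpoint: torchvision's ImageNet-pretrained AlexNet,
        # saved under the file name the training script looks for.
        alexnet = models.alexnet(pretrained=True)
        torch.save({"state_dict": alexnet.state_dict()}, "./alexnet.pth.tar")
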
  • About the transfer loss in pytorch implementation

    Hi, I'm confused about how you implement the entropy conditioning loss presented in Equation (9) of the paper. Could you describe the tricks you used in the implementation, since it seems inconsistent with the paper? Specifically, at line 34 in the screenshot below, I don't understand why 1 is added to the entropy.

    [screenshot of the entropy conditioning code]

    Look forward to your reply. Thanks.

    opened by daiquanyu 2
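
    On the "+1" asked about above: Equation (9) of the paper defines the entropy-aware weight as w(H(g)) = 1 + e^{-H(g)}, so every example gets a weight in (1, 2] and confident predictions are up-weighted rather than uncertain ones being zeroed out. A minimal sketch:

        import torch
        import torch.nn.functional as F

        def entropy(p, eps=1e-5):
            # Shannon entropy of each row of a softmax output.
            return -(p * torch.log(p + eps)).sum(dim=1)

        logits = torch.randn(8, 31)        # hypothetical batch of 8, 31 classes
        p = F.softmax(logits, dim=1)
        w = 1.0 + torch.exp(-entropy(p))   # Eq. (9): w = 1 + exp(-H)
        w = w / w.sum()                    # normalize to weight per-example losses
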
  • Confusion about the data

    I am using the provided Image-CLEF data. It is a little confusing what "b_list.txt, c_list.txt, i_list.txt, list, p_list.txt" mean. From the paper I gather that you use "c, i, p" for Caltech-256, ImageNet ILSVRC 2012, and Pascal VOC 2012, respectively, and I guess "b" is Bing; I just want to check that I understand correctly. Another question: what is the difference between the files in the "list" folder and the other four list files? Your README seems not to use the files in the list folder. Is this the same setting as in your paper?

    opened by Yikai-Wang 1
  • Question about test dataset.

    Thanks for the impressive work and for sharing the code.

    I notice in your code that the target dataset and the test dataset are the same:

        config["data"] = {"source": {"list_path": args.s_dset_path, "batch_size": 36},
                          "target": {"list_path": args.t_dset_path, "batch_size": 36},
                          "test": {"list_path": args.t_dset_path, "batch_size": 4}}

    Is this a common way to handle test data in domain adaptation?

    opened by meeio 1
  • Access for ImageCLEF dataset

    Hello, I have a question about access. I want to use the ImageCLEF dataset, but when I click the link I can't access it because I don't have the access right. Could I be given access to the ImageCLEF dataset?

    opened by JoonHyeokJ 0
  • how to set parameters when training office-home with version of pytorch

    C:\Users\Alarak>python D:\桌面\CDAN-master\pytorch\train_image.py --gpu_id id --net ResNet50 --dset office-home --test_interval 2000 --s_dset_path D:\桌面\OfficeHomeDataset_10072016/Art.txt --t_dset_path D:\桌面\OfficeHomeDataset_10072016/Clipart.txt
    CDAN
    Traceback (most recent call last):
      File "D:\桌面\CDAN-master\pytorch\train_image.py", line 278, in <module>
        train(config)
      File "D:\桌面\CDAN-master\pytorch\train_image.py", line 84, in train
        dsets["source"] = ImageList(open(data_config["source"]["list_path"]).readlines(),
    FileNotFoundError: [Errno 2] No such file or directory: 'D:\桌面\OfficeHomeDataset_10072016/Art.txt'

    I followed the command format given by your group: after downloading the Office-Home dataset, I replaced ../data/office-home in the command with my actual path, D:\桌面\OfficeHomeDataset_10072016, and added the path to train_image.py. But running it produced the error above. Following the error message, I manually created Art.txt and Clipart.txt in the corresponding directory, after which even more errors appeared. How should I set the relevant parameters correctly?

    opened by Xarlley 1
  • Learning rate setting

    According to the paper, the classifier's learning rate is set to 10 times that of the feature extractor, but it is set to the same value in the code. Is that on purpose, after many experiments, or just an error?

    opened by zyc573823770 1
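
    For reference, the 10x classifier learning rate described in the paper is usually realized with per-parameter-group learning rates. A minimal sketch with toy stand-in modules (the real repo wires this up through its own parameter lists):

        import torch.nn as nn
        import torch.optim as optim

        # Toy stand-ins for the backbone and the new classifier head.
        feature_layers = nn.Sequential(nn.Linear(512, 256), nn.ReLU())
        classifier = nn.Linear(256, 31)

        optimizer = optim.SGD(
            [
                {"params": feature_layers.parameters(), "lr": 1e-3},
                {"params": classifier.parameters(), "lr": 1e-2},  # 10x the backbone lr
            ],
            momentum=0.9, weight_decay=5e-4, nesterov=True,
        )
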
  • The parameters in the ad_net

    Hi, I found that the parameters are set to 1024 and 500 for the two scenarios. How can I customize them for my own cases?

    ad_net = network.AdversarialNetwork(base_network.output_num() * class_num, 1024)

    opened by LotusWhu 0
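
    On the two numbers above: the first argument is the discriminator's input width, which is base_network.output_num() * class_num because CDAN feeds the domain discriminator the flattened outer product of the feature f and the classifier prediction g; the second argument (1024 for Office, 500 for digits) is just the hidden-layer width and can be tuned. A sketch of the conditioning with assumed sizes:

        import torch
        import torch.nn as nn

        feature_dim, num_classes, hidden = 2048, 31, 1024  # hidden width is tunable

        f = torch.randn(8, feature_dim)                    # backbone features
        g = torch.softmax(torch.randn(8, num_classes), 1)  # classifier predictions
        op = torch.bmm(g.unsqueeze(2), f.unsqueeze(1))     # outer product per example
        d_in = op.view(op.size(0), -1)                     # feature_dim * num_classes

        discriminator = nn.Sequential(
            nn.Linear(feature_dim * num_classes, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )
        domain_prob = discriminator(d_in)                  # shape: (8, 1)
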
  • Different results with multi-GPUs

    I tested the results using one GPU and multiple GPUs on the same server on Office-31. The results are different.

    For CDAN+E on the A->W task: one GPU: around 75%; multiple GPUs (>=2): 92%.

    I am still investigating the reason.

    opened by Marsrocky 1
Owner
THUML @ Tsinghua University
Machine Learning Group, School of Software, Tsinghua University