Code of TVT: Transferable Vision Transformer for Unsupervised Domain Adaptation

Overview

Datasets:

  • Digit: MNIST, SVHN, USPS

  • Object: Office, Office-Home, VisDA-2017

Training:

The ViT code is largely borrowed from ViT-pytorch.
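
An illustrative training invocation, pieced together from flags that appear elsewhere on this page (save_model uses --output_dir, --dataset, and --name; an issue below mentions --beta, --gamma, and --theta). The values are placeholders; script.txt in the repo has the exact commands:

```
python train.py --name run1 --dataset office --output_dir output --beta 0.1 --gamma 0.1 --theta 0.1
```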

Citation:

@article{yang2021tvt,
  title={TVT: Transferable Vision Transformer for Unsupervised Domain Adaptation},
  author={Yang, Jinyu and Liu, Jingjing and Xu, Ning and Huang, Junzhou},
  journal={arXiv preprint arXiv:2108.05988},
  year={2021}
}
Comments
  • how to load the .bin file for Attention map visualization

    Hi, sorry to bother you. I'm having trouble loading the model files I trained and saved in the output folder, _checkpoint.bin and _checkpoint_adv.bin. I want to load the trained model to run some visualization experiments, but I'm confused by the .bin format. I can load ViT-B_16.npz normally for attention-map visualization. Can you share how you loaded the .bin file for attention-map visualization?

    Here is the code that saves the model in your train.py:

    ```python
    def save_model(args, model, is_adv=False):
        model_to_save = model.module if hasattr(model, 'module') else model
        if not is_adv:
            model_checkpoint = os.path.join(args.output_dir, args.dataset, "%s_checkpoint.bin" % args.name)
        else:
            model_checkpoint = os.path.join(args.output_dir, args.dataset, "%s_checkpoint_adv.bin" % args.name)
        torch.save(model_to_save.state_dict(), model_checkpoint)
        logger.info("Saved model checkpoint to [DIR: %s]", os.path.join(args.output_dir, args.dataset))
    ```

    I'm a newbie, so I'm very sorry to bother you with this question.
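
    For reference: save_model stores a plain state_dict, so the .bin can be restored with torch.load followed by load_state_dict. A minimal sketch, assuming the VisionTransformer class and CONFIGS dict that this repo borrows from ViT-pytorch (the constructor arguments and paths below are illustrative, not the exact ones from your run):

    ```python
    # Minimal loading sketch; module layout and arguments are assumptions, not repo code.
    import torch
    from models.modeling import VisionTransformer, CONFIGS  # assumed module layout

    config = CONFIGS["ViT-B_16"]
    # img_size/num_classes are placeholders; use the values from your training run.
    model = VisionTransformer(config, img_size=224, num_classes=31, vis=True)
    state_dict = torch.load("output/office/run_checkpoint.bin", map_location="cpu")
    model.load_state_dict(state_dict)
    model.eval()  # ready for attention-map visualization
    ```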

    opened by shaojiezhanglalala 3
  • Some questions about TAM and DCM

    Hi, thank you very much for publishing the code. I've found your paper really interesting and was trying to figure out whether it can be adapted to my own model (I'm using basically the same configuration as yours). The paper highlights two main components, DCM and TAM, and I was wondering whether these two modules can be extracted from the code.

    1. Looking at the code, I gather that DCM is just a classification head performing a classification task on the domain, as in DANN (I also see a GRL). Am I correct? (See the sketch after this list.)

    2. I am having a harder time finding what is called TAM. Is it something built inside the blocks? I see that the main change is that the adversarial net is used inside each block to generate a loss_ad. Is this what you call the patch-level discriminator in the paper? Does anything else need to be added to create the so-called TAM?
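
    A minimal DANN-style sketch of the pattern in point 1: a gradient reversal layer feeding a binary domain classification head (an illustration of the general technique, not code extracted from TVT):

    ```python
    # DANN-style domain discriminator sketch (illustrative, not the TVT code).
    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        """Identity in the forward pass; gradients are negated and scaled on the way back."""
        @staticmethod
        def forward(ctx, x, alpha):
            ctx.alpha = alpha
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            return grad_output.neg() * ctx.alpha, None

    class DomainDiscriminator(nn.Module):
        """Binary source/target classifier applied to backbone features."""
        def __init__(self, dim=768, hidden=1024):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(dim, hidden), nn.ReLU(),
                nn.Linear(hidden, 1), nn.Sigmoid(),
            )

        def forward(self, feat, alpha=1.0):
            return self.net(GradReverse.apply(feat, alpha))
    ```

    Trained with a BCE loss against 0/1 domain labels, the reversed gradients push the backbone toward domain-confused features while the head learns to discriminate.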

    Thank you very much again for sharing the code

    opened by LanariLorenzo 3
  • No module named 'models.modeling_resnet'

    I tried to run your code but got the error in the title, and I found there is no file named 'modeling_resnet' (the code does `from .modeling_resnet import ResNetV2`). Is a file missing? Thanks.
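
    Since the README says the ViT code is largely borrowed from ViT-pytorch, the missing modeling_resnet.py can most likely be copied from that repository. Alternatively, if you only need the pure-ViT path, a guarded import is a possible workaround (a sketch, not an official fix):

    ```python
    # Workaround sketch: make the hybrid ResNet backbone optional.
    try:
        from .modeling_resnet import ResNetV2
    except ImportError:
        ResNetV2 = None  # hybrid ViT-ResNet variants will be unavailable
    ```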

    opened by ShiyeLi 2
  • About the feature visualization in your paper

    Hello, I'm very happy to have read your article. I have some questions; if you have time, could you help me with them?

    How did you realize the feature visualization in your article? What format is the data?
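
    For reference, figures of this kind are commonly produced by running t-SNE on pooled features from the source and target domains; a minimal sketch under that assumption (file names and shapes below are hypothetical):

    ```python
    # Hypothetical t-SNE feature-visualization sketch (not taken from the repo).
    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.manifold import TSNE

    feats = np.load("features.npy")    # assumed: [N, dim] pooled features
    domains = np.load("domains.npy")   # assumed: 0 = source, 1 = target
    emb = TSNE(n_components=2, init="pca", random_state=0).fit_transform(feats)
    plt.scatter(emb[:, 0], emb[:, 1], c=domains, s=4, cmap="coolwarm")
    plt.savefig("tsne.png", dpi=200)
    ```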

    opened by fanbowen232218094636-spec 1
  • The performance difference when the target domain is Clipart on the Office-Home dataset.

    Hi!

    Recently, when I was running your method, I encountered a problem: with the baseline, the performance of all tasks whose target domain is Clipart drops greatly. Can you help me solve this problem? It's quite important to me.

    opened by mzmzdcr 9
  • Some problems in the code/modeling.py

    Hello, I encountered something I didn't understand while reading the code. If you have time, could you point it out for me? At line 99 of code/modeling.py there is:

    ```python
    if posi_emb is not None:
        eps = 1e-10
        batch_size = key_layer.size(0)
        patch = key_layer
        # patch-level adversarial output and loss ([CLS] token excluded)
        ad_out, loss_ad = lossZoo.adv_local(patch[:, :, 1:], ad_net, is_source)
        # binary entropy of the discriminator output
        entropy = -ad_out * torch.log2(ad_out + eps) - (1.0 - ad_out) * torch.log2(1.0 - ad_out + eps)
        # prepend a weight of 1 for the [CLS] token
        entropy = torch.cat((torch.ones(batch_size, self.num_attention_heads, 1).to(hidden_states.device).float(), entropy), 2)
        trans_ability = entropy if self.vis else None  # [B, 12, 197]
        entropy = entropy.view(batch_size, self.num_attention_heads, 1, -1)
        # re-weight the [CLS] query's attention row by the entropy
        attention_probs = torch.cat((attention_probs[:, :, 0, :].unsqueeze(2) * entropy, attention_probs[:, :, 1:, :]), 2)
    ```

    What I don't understand is: is this where the adversarial discrimination loss is applied? Which part of ViT did you improve?

    opened by fanbowen232218094636-spec 2
  • The results about vanilla ViT with adversarial adaptation.

    Hi, thanks for releasing the open-source project of transformer-based UDA! May I ask you some questions?

    I tried to reproduce the Baseline result, which denotes vanilla ViT with adversarial adaptation. Differently, I fixed the bs as 32, the input size as 224 * 224. Accordingly, I got the results on the setting of office31, W@A, D@A. image [1] Safe Self-Refinement for transformer-based Domain Adaptation (https://arxiv.org/pdf/2204.07683.pdf) Obviously, [1] uses the Timm library and gets a very strong baseline. Have you thought about switching to Timm to improve performance? It's hard to compare because everyone implements baseline in different ways and the results vary greatly.:sweat_smile:

    Additionally, I had a problem with the training baseline mentioned above: model performance rises and then deteriorates rapidly (training-curve screenshot omitted). The training parameters are as follows: --beta 0.1 --gamma 0. --theta 0. Have you ever encountered such a problem? Can you give me some advice?

    opened by mzmzdcr 9
  • code can not achieve result in paper

    I ran some scripts in 'script.txt' but got results far from those reported in the paper. Here are some of them:

      • Office-Home: Pr→Cl 52.37, Cl→Pr 78.64, Cl→Ar 72.39
      • Office-31: A→D 94.98, A→W 94.21, D→W 76.78
      • VisDA-2017: Train→Val 80.78

    PS: apex didn't work because of an installation error, but I think it has no effect on the results (it only affects GPU memory and gradient accumulation). The error was "fused_weight_gradient_mlp_cuda module not found. gradient accumulation fusion with weight gradient computation disabled." and I didn't find a resolution for it.

    opened by ShiyeLi 39