[CVPR2021] Domain Consensus Clustering for Universal Domain Adaptation

Overview

[Paper]

Prerequisites

To install requirements:

pip install -r requirements.txt

  • Python 3.6
  • PyTorch 1.4.0
  • GPU memory: 10 GB

Getting Started

Download the dataset: Office-31, OfficeHome, VisDA, DomainNet.

Data Folder structure:

Your dataset DIR:
|-Office/domain_adaptation_images
| |-amazon
| |-webcam
| |-dslr
|-OfficeHome
| |-Art
| |-Product
| |-...
|-VisDA
| |-train
| |-validation
|-DomainNet
| |-clipart
| |-painting
| |-...

You need to modify the data path in the config files, i.e., config.root.
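A minimal sketch to sanity-check that config.root points at a directory with the layout shown above; the helper name and the checked sub-folder names come from this README's tree, not from the repository's code:

```python
import os

# Expected sub-folders, per the data-folder tree in this README.
EXPECTED = {
    "Office/domain_adaptation_images": ["amazon", "webcam", "dslr"],
    "OfficeHome": ["Art", "Product"],
    "VisDA": ["train", "validation"],
    "DomainNet": ["clipart", "painting"],
}

def missing_dirs(root, expected=EXPECTED):
    """Return the expected sub-directories that are absent under root."""
    missing = []
    for parent, children in expected.items():
        for child in children:
            if not os.path.isdir(os.path.join(root, parent, child)):
                missing.append(os.path.join(parent, child))
    return missing
```

Running missing_dirs(config.root) before training should return an empty list; anything it returns is a folder to create or rename before launching an experiment.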

Training

Train on one transfer of Office:

CUDA_VISIBLE_DEVICES=0 python office_run.py note=EXP_NAME setting=uda/osda/pda source=amazon target=dslr

To train on six transfers of Office:

CUDA_VISIBLE_DEVICES=0 python office_run.py note=EXP_NAME setting=uda/osda/pda transfer_all=1

Train on OfficeHome:

CUDA_VISIBLE_DEVICES=0 python officehome_run.py note=EXP_NAME setting=uda/osda/pda source=Art target=Product

or

CUDA_VISIBLE_DEVICES=0 python officehome_run.py note=EXP_NAME setting=uda/osda/pda transfer_all=1 

The final results (including the best and the last) will be saved in ./snapshot/EXP_NAME/result.txt.

Note that transfer_all will consume more shared memory.

Citation

If you find it helpful, please consider citing:

@inproceedings{li2021DCC,
  title={Domain Consensus Clustering for Universal Domain Adaptation},
  author={Li, Guangrui and Kang, Guoliang and Zhu, Yi and Wei, Yunchao and Yang, Yi},
  booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2021}
}

Comments
  • negative loss of dcc

    Hello,

    1. When I run the PDA code on my own dataset, the DCC loss becomes negative. I've read Appendix A, which describes the DCC loss in terms of inter-class and intra-class domain discrepancy. If a negative loss means the inter-class domain discrepancy is large, how can I fix it? Is there a hyperparameter I can set?
    2. In your answer to issue #6, you replied that num_pclass (A) and num_samples (B) are hyperparameters for class-wise sampling in CDD: each batch is composed of num_pclass * num_samples samples, i.e., it contains samples from B classes with A samples per class. Are num_pclass and num_samples set according to the source data or the target data?

    Thanks for reading. I'm looking forward to your reply.

    opened by DongXiaoheng 7
  • Code is not complete

    Hello! I wanted to run your method but I have this error:

    $ CUDA_VISIBLE_DEVICES=0 python office_run.py note=replicating_r0 setting=osda source=amazon target=dslr
    Traceback (most recent call last):
      File "office_run.py", line 8, in <module>
        from init_config import *
      File "/home/francesco/projects/Domain-Consensus-Clustering/init_config.py", line 4, in <module>
        from utils.flatwhite import *
      File "/home/francesco/projects/Domain-Consensus-Clustering/utils/__init__.py", line 1, in <module>
        from .cdd import *
      File "/home/francesco/projects/Domain-Consensus-Clustering/utils/cdd.py", line 2, in <module>
        from .dist  import to_cuda
    ModuleNotFoundError: No module named 'utils.dist'
    

    Indeed, there is no utils/dist.py file in the repo. Could you upload it?

    opened by FrancescoCappio 6
  • Calculation mistake in average of Office-Home OSDA table

    Hi @Solacex, I found a calculation mistake in Table 5, Row 4. The average for Office-Home comes out as 63.01, whereas it's written as 64.2 in the paper. Kindly rectify this.

    I also wanted to ask: should we use the corrected average when we cite your paper in our new work?

    Thanks!

    opened by suvaansh 2
  • about OSDA office-31 class split details

    According to the paper, the Office-31 OSDA class split setting is 10/0/11, and the classes are separated according to their alphabetical order. But it seems there is no released config file for this part, so I am quite confused whether the 11 target-private classes refer to class20-30 or class10-20. Many thanks.

    opened by changwxx 2
  • I think the use of 't_label' is forbidden for unsupervised learning tasks when training.

    Why do you use the labels of the target domain (i.e., t_label) to calculate the loss during training? I think the use of t_label is forbidden for unsupervised learning tasks. PS: in ./trainer/dcc1_trainer.py: class Trainer(BaseTrainer): def iter(self, i_iter): ... en_loss = self.memory.forward(t_feat, t_label, self.config.t, False)

    opened by chihuadelishu 2
  • About OSDA setting on office-31

    Hi, I read your code and paper. In the paper, you show HM (%) results on Office under the OSDA scenario (ResNet-50). However, in the code, the class setting of Office is inconsistent with previous methods such as STA and ROS. In those methods, the Office-31 setting is 10 known classes and 11 unknown classes, but in the code the unknown classes are the remaining 21 classes. Could you please explain this? Thanks!

    opened by zzzzzzzzzzzx 2
  • Ambiguity in the acc.

    Hi @Solacex

    Can you please tell me what "Acc." refers to in Table 1? It would be great if you could explain exactly how it is calculated. Is it OS or OS* as defined in prior works, or something else?

    Thanks!

    opened by suvaansh 1
  • Which result do you report in your paper, the result of the last step or the best step?

    Sorry to bother you. I have the same question as the previous issue: which result do you report in your paper, the result of the last step or the best step? Thank you for your reply.
    Best wishes.

    opened by chihuadelishu 1
  • final results

    Thanks a lot for open-sourcing such a wonderful work!

    I have a question about the final results: which result do you report in your paper, the result of the last step or the best step?

    Look forward to your reply. Thanks in advance.

    opened by zhaoxin94 1
  • Couldn't find requirements.txt file

    Thanks for sharing your code! In the Prerequisites section, you suggest running pip install -r requirements.txt to install the required toolkits. However, I could not find a requirements.txt file in any folder. Where can I find it?

    Many Thanks

    opened by Chloe1997 1
  • Where does the loader in Trainer come from ?

    Where do self.s_loader (line 19), self.loader (line 37), and self.t_loader (line 58) in Trainer (dcc1_trainer.py) come from? In BaseTrainer, you set self.test_loader, self.src_loader, and self.tgt_loader. Do these loaders correspond to the loaders in Trainer?

    opened by XiHuYan 0
  • Huge gap between result and paper, CSDA

    Hi Team,

    Thanks for sharing the great work!

    I've noticed that CSDA is not provided in offcehome_run.py.

    https://github.com/Solacex/Domain-Consensus-Clustering/blob/5a626e1a7b294ed8fa71205bbc549a93b566e864/offcehome_run.py#L28

    After I added:

    elif config.setting=='cda':
        config.cls_share = 65
        config.cls_src = 0
        config.cls_total = 65

    I noticed there is a huge gap between my result and the one you reported in supplementary material Table C. Here is the result I got (each accuracy was printed twice and every H-Score was 0.0; one line per transfer below):

    Ar->Pr [best]: 0.6168955022899004  [last]: 0.3347919933497906
    Ar->Cl [best]: 0.4183573762957866  [last]: 0.3801841050529709
    Ar->Re [best]: 0.6801177620887756  [last]: 0.6801177620887756
    Pr->Ar [best]: 0.4206200426587692  [last]: 0.3828089099090833
    Pr->Cl [best]: 0.3435283417598559  [last]: 0.29526575485674234
    Pr->Re [best]: 0.6622800114636238  [last]: 0.6590736272243353
    Cl->Ar [best]: 0.44268233615618485 [last]: 0.32432382525159764
    Cl->Pr [best]: 0.5993400445924355  [last]: 0.5993400445924355
    Cl->Re [best]: 0.5868140732009823  [last]: 0.5521672795311763
    Re->Ar [best]: 0.5438899568067147  [last]: 0.48911217634494486
    Re->Pr [best]: 0.7569770986070999  [last]: 0.7569770986070999
    Re->Cl [best]: 0.44347345402034427 [last]: 0.44347345402034427

    And here is the result reported in the supplementary material (screenshot omitted).

    Except for the CSDA experiments, the other experiments go well, though they are a little unstable.

    opened by gavinatthu 2
  • NOW the version is updated

    Most of my packages are newer than yours, so maybe there won't be a big problem. My video memory is only 8 GB. I hope I can finish this project, and I hope you can help me then. Thanks!

    opened by XDUWZDX 0
  • How to set num_pclass and num_sample params?

    Looking at the config files for the Office and OfficeHome experiments, I noticed that one of the differences is in the values of the num_pclass and num_samples parameters for data loading. How are these parameters used? How should I choose their values when testing the code on a different dataset?

    opened by FrancescoCappio 3
  • Could you provide the training code on the VisDA dataset?

    On the one hand, I wonder if you could offer the training code for the VisDA dataset. On the other hand, there are still some mistakes in this project, such as the missing ./utils/dist.py. Looking forward to your reply!

    opened by Hongbin98 5
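Several comments above ask how num_pclass and num_samples drive the class-wise sampling in CDD: each batch holds num_pclass classes with num_samples instances per class. A minimal sketch of such a sampler, assuming a random selection policy (the function name and the with-replacement fallback are my assumptions, not the repository's actual sampler):

```python
import random
from collections import defaultdict

def classwise_batch(labels, num_pclass, num_samples, rng=random):
    """Draw indices for one batch: pick num_pclass classes at random,
    then num_samples indices per class (falling back to sampling with
    replacement when a class has too few samples)."""
    by_class = defaultdict(list)
    for idx, label in enumerate(labels):
        by_class[label].append(idx)
    classes = rng.sample(sorted(by_class), num_pclass)
    batch = []
    for c in classes:
        pool = by_class[c]
        if len(pool) >= num_samples:
            batch.extend(rng.sample(pool, num_samples))
        else:
            batch.extend(rng.choices(pool, k=num_samples))
    return batch  # always num_pclass * num_samples indices
```

Under this reading, each batch size is exactly num_pclass * num_samples, which is consistent with the maintainer's description quoted in the first comment.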
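On the Office-31 OSDA split questions above: assuming the 10/0/11 partition is taken in alphabetical order, as the paper states, the class partition would look like the sketch below (an interpretation for illustration, not the released config):

```python
def osda_office31_split(class_names, n_shared=10, n_src_private=0,
                        n_tgt_private=11):
    """Partition alphabetically ordered classes into shared /
    source-private / target-private sets (10/0/11 for Office-31 OSDA)."""
    ordered = sorted(class_names)
    shared = ordered[:n_shared]
    src_private = ordered[n_shared:n_shared + n_src_private]
    tgt_private = ordered[n_shared + n_src_private:
                          n_shared + n_src_private + n_tgt_private]
    return shared, src_private, tgt_private
```

With zero source-private classes, the 11 target-private classes would be the 11 immediately following the first 10 in alphabetical order; whether the released code instead uses the remaining 21 classes is exactly the discrepancy raised in the comments.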
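On the all-zero H-Scores in the CSDA report above: in OSDA/UniDA evaluation, the H-Score is usually the harmonic mean of the accuracy on shared classes and the accuracy on private (unknown) classes, so it collapses to zero whenever either term is zero, e.g., under a closed-set split with no private classes. A minimal sketch, not the repository's exact implementation:

```python
def h_score(acc_known, acc_unknown):
    """Harmonic mean of known-class and unknown-class accuracy
    (the HM metric commonly reported in OSDA evaluation)."""
    if acc_known + acc_unknown == 0:
        return 0.0
    return 2 * acc_known * acc_unknown / (acc_known + acc_unknown)
```

For example, h_score(0.6, 0.0) is 0.0, matching the pattern in the pasted log whenever there are no target-private classes to score.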