Joint Noise-Tolerant Learning and Meta Camera Shift Adaptation for Unsupervised Person Re-Identification (CVPR'21)
Introduction
Code for our CVPR 2021 paper "MetaCam+DSCE".
Prerequisites
- CUDA >= 10.0
- At least two 1080-Ti GPUs
- Other necessary packages listed in requirements.txt
Training Data
We use Market-1501, DukeMTMC-reID, and MSMT-17. You can download these datasets from Zhong's repo.
Unzip all datasets and ensure the file structure is as follows:
MetaCam_DSCE/data
│
└───market1501 OR dukemtmc OR msmt17
    │
    └───Market-1501-v15.09.15 OR DukeMTMC-reID OR MSMT17_V1
        │
        └───bounding_box_train
        └───bounding_box_test
        └───query
        └───list_train.txt (only for MSMT-17)
        └───list_query.txt (only for MSMT-17)
        └───list_gallery.txt (only for MSMT-17)
        └───list_val.txt (only for MSMT-17)
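A quick sanity check of the layout above can save a failed training run. The sketch below walks the expected tree for a chosen dataset and reports anything missing; the helper name `missing_entries` and the hard-coded directory names are our assumptions taken directly from the tree, not part of the repo's code.

```python
import os

# Expected inner directory name and MSMT-17-only list files per dataset
# (names copied from the file-structure tree above).
EXPECTED = {
    "market1501": ("Market-1501-v15.09.15", []),
    "dukemtmc":   ("DukeMTMC-reID", []),
    "msmt17":     ("MSMT17_V1", ["list_train.txt", "list_query.txt",
                                 "list_gallery.txt", "list_val.txt"]),
}
COMMON_DIRS = ["bounding_box_train", "bounding_box_test", "query"]

def missing_entries(data_root, dataset):
    """Return a list of expected paths that are missing under data_root."""
    inner, extra_files = EXPECTED[dataset]
    base = os.path.join(data_root, dataset, inner)
    missing = [os.path.join(base, d) for d in COMMON_DIRS
               if not os.path.isdir(os.path.join(base, d))]
    missing += [os.path.join(base, f) for f in extra_files
                if not os.path.isfile(os.path.join(base, f))]
    return missing

if __name__ == "__main__":
    for name in EXPECTED:
        problems = missing_entries("MetaCam_DSCE/data", name)
        if problems:
            print(f"[{name}] missing: {problems}")
```

An empty return value means the dataset is laid out as the training scripts expect.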
Usage
See run.sh for details.
Acknowledgments
This repo borrows partially from MWNet (meta-learning), ECN (exemplar memory) and SpCL (faiss-based acceleration). If you find our code useful, please cite their papers.
@inproceedings{shu2019meta,
  title={Meta-Weight-Net: Learning an Explicit Mapping For Sample Weighting},
  author={Shu, Jun and Xie, Qi and Yi, Lixuan and Zhao, Qian and Zhou, Sanping and Xu, Zongben and Meng, Deyu},
  booktitle={NeurIPS},
  year={2019}
}

@inproceedings{zhong2019invariance,
  title={Invariance Matters: Exemplar Memory for Domain Adaptive Person Re-identification},
  author={Zhong, Zhun and Zheng, Liang and Luo, Zhiming and Li, Shaozi and Yang, Yi},
  booktitle={CVPR},
  year={2019}
}

@inproceedings{ge2020selfpaced,
  title={Self-paced Contrastive Learning with Hybrid Memory for Domain Adaptive Object Re-ID},
  author={Ge, Yixiao and Zhu, Feng and Chen, Dapeng and Zhao, Rui and Li, Hongsheng},
  booktitle={NeurIPS},
  year={2020}
}
Citation
@inproceedings{yang2021meta,
  title={Joint Noise-Tolerant Learning and Meta Camera Shift Adaptation for Unsupervised Person Re-Identification},
  author={Yang, Fengxiang and Zhong, Zhun and Luo, Zhiming and Cai, Yuanzheng and Li, Shaozi and Sebe, Nicu},
  booktitle={CVPR},
  year={2021}
}
Resources
- Pre-trained MMT-500 models to reproduce Tab. 3 of our paper. BaiduNetDisk, Passwd: nsbv. Google Drive.
- Pedestrian images used to plot Fig. 3 in our paper. BaiduNetDisk, Passwd: ydrf. Google Drive.
Please download 'marCam' and 'dukeCam', put them under 'MetaCam_DSCE/data', and uncomment the corresponding code (e.g., L#87-89 and L#163-168 of train_usl_knn_merge.py).
Contact Us
Email: [email protected]