Overview

AICITY2021_Track2_DMT

The 1st place solution of track2 (Vehicle Re-Identification) in the NVIDIA AI City Challenge at CVPR 2021 Workshop.

Introduction

Detailed information about the NVIDIA AI City Challenge 2021 can be found here.

The code is modified from AICITY2020_DMT_VehicleReID, TransReID, and reid-strong-baseline.

Get Started

  1. cd to folder where you want to download this repo

  2. Run git clone https://github.com/michuanhaohao/AICITY2021_Track2_DMT.git

  3. Install dependencies: pip install -r requirements.txt

    We use CUDA 11.0 / Python 3.7 / torch 1.6.0 / torchvision 0.7.0 for training and testing.

  4. Prepare datasets: download the Original dataset, Cropped_dataset, and SPGAN_dataset, and organize them as follows:

├── AIC21/
│   ├── AIC21_Track2_ReID/
│   	├── image_train/
│   	├── image_test/
│   	├── image_query/
│   	├── train_label.xml
│   	├── ...
│   	├── training_part_seg/
│   	    ├── cropped_patch/
│   	├── cropped_aic_test
│   	    ├── image_test/
│   	    ├── image_query/		
│   ├── AIC21_Track2_ReID_Simulation/
│   	├── sys_image_train/
│   	├── sys_image_train_tr/
  5. Put pre-trained models into ./pretrained/:
    • resnet101_ibn_a-59ea0ac6.pth, densenet169_ibn_a-9f32c161.pth, resnext101_ibn_a-6ace051d.pth and se_resnet101_ibn_a-fabed4e2.pth can be downloaded from IBN-Net
    • resnest101-22405ba7.pth can be downloaded from ResNest
    • jx_vit_base_p16_224-80ecf9dd.pth can be downloaded from here
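
As an optional sanity check (not part of the original repo), you can verify that the downloaded weights load with the pinned torch version before starting training; the sketch below assumes the files sit directly under ./pretrained/.

# Optional sanity check: confirm the ImageNet weights deserialize correctly.
import torch

for name in [
    'resnet101_ibn_a-59ea0ac6.pth',
    'resnext101_ibn_a-6ace051d.pth',
    'se_resnet101_ibn_a-fabed4e2.pth',
    'jx_vit_base_p16_224-80ecf9dd.pth',
]:
    state = torch.load('./pretrained/' + name, map_location='cpu')
    print(name, '->', len(state), 'entries')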

Training and Test

We utilize 1 GPU (32GB) for training. You can train and test one backbone as follows.

# ResNext101-IBN-a
python train.py --config_file configs/stage1/resnext101a_384.yml MODEL.DEVICE_ID "('0')"
python train_stage2_v1.py --config_file configs/stage2/resnext101a_384.yml MODEL.DEVICE_ID "('0')" OUTPUT_DIR './logs/stage2/resnext101a_384/v1'
python train_stage2_v2.py --config_file configs/stage2/resnext101a_384.yml MODEL.DEVICE_ID "('0')" OUTPUT_DIR './logs/stage2/resnext101a_384/v2'

python test.py --config_file configs/stage2/101a_384.yml MODEL.DEVICE_ID "('0')" TEST.WEIGHT './logs/stage2/resnext101a_384/v1/resnext101_ibn_a_2.pth' OUTPUT_DIR './logs/stage2/resnext101a_384/v1'
python test.py --config_file configs/stage2/101a_384.yml MODEL.DEVICE_ID "('0')" TEST.WEIGHT './logs/stage2/resnext101a_384/v2/resnext101_ibn_a_2.pth' OUTPUT_DIR './logs/stage2/resnext101a_384/v2'

You should train the camera and viewpoint models before the inference stage. Alternatively, you can directly use our trained results (track_cam_rk.npy and track_view_rk.npy):

python train_cam.py --config_file configs/camera_view/camera_101a.yml
python train_view.py --config_file configs/camera_view/view_101a.yml

You can train all eight backbones by following run.sh. Then, you can ensemble all results:

python ensemble.py
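
For reference, the following is a minimal sketch of one way the per-backbone results could be fused by averaging query-gallery distance matrices; this is an assumption about the fusion step, not a copy of ensemble.py, and the .npy paths are hypothetical placeholders.

# Hypothetical fusion sketch: average per-backbone distance matrices.
import numpy as np

dist_paths = [
    './logs/stage2/resnext101a_384/v1/dist_mat.npy',  # placeholder path
    './logs/stage2/resnext101a_384/v2/dist_mat.npy',  # placeholder path
]
avg_dist = np.mean([np.load(p) for p in dist_paths], axis=0)
ranking = np.argsort(avg_dist, axis=1)  # smaller distance = better match
np.save('./logs/ensemble_ranking.npy', ranking)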

All trained models can be downloaded from here.

Leaderboard

TeamName        mAP     Link
DMT (Ours)      0.7445  code
NewGeneration   0.7151  code
CyberHu         0.6550  code

Citation

If you find our work useful in your research, please consider citing:

@inproceedings{luo2021empirical,
 title={An Empirical Study of Vehicle Re-Identification on the AI City Challenge},
 author={Luo, Hao and Chen, Weihua and Xu, Xianzhe and Gu, Jianyang and Zhang, Yuqi and Liu, Chong and Jiang, Qiyi and He, Shuting and Wang, Fan and Li, Hao},
 booktitle={Proc. CVPR Workshops},
 year={2021}
}
Comments
  • train_label.xml not included


    I have the data directory set up, except for the xml file. Please assist/advise.

    ├── AIC21/
    │   ├── AIC21_Track2_ReID/
    │   	├── image_train/
    │   	├── image_test/
    │   	├── image_query/
    │   	├── train_label.xml
    │   	├── ...
    │   	├── training_part_seg/
    │   	    ├── cropped_patch/
    │   	├── cropped_aic_test
    │   	    ├── image_test/
    │   	    ├── image_query/
    │   ├── AIC21_Track2_ReID_Simulation/
    │   	├── sys_image_train/
    │   	├── sys_image_train_tr/

    opened by ccfarah 4
  • No such file or directory: 'query_label.xml'


    I found that your train_track.txt file is different from the original data file, and you have generated a new query_label.xml file, which also does not exist in the original data. Can you provide these documents?

    opened by Zhanghao5200 4
  • requirements.txt has a number of unknown version dependencies


    The requirements.txt file has a number of packages that are pulled from a local drive, e.g.: albumentations @ file:///home/conda/feedstock_root/build_artifacts/albumentations_1594363620148/work

    Since I can't access these specific packages, I don't know the version or whether any other modifications have been made to the package locally. As a result, I attempted the installation with the most current version of each package, but this resulted in installation errors. Recommendations are appreciated.
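
    One possible workaround (not from the repo authors) is to strip the conda "@ file:///..." build pins and install the remaining package names from PyPI, e.g. with this small helper:

    # Hypothetical helper: drop conda build pins so the bare package names
    # can then be installed via "pip install -r requirements_clean.txt".
    with open('requirements.txt') as f:
        names = [line.split(' @ ')[0].strip() for line in f if line.strip()]
    with open('requirements_clean.txt', 'w') as f:
        f.write('\n'.join(names) + '\n')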

    opened by ccfarah 3
  • RuntimeError: CUDA out of memory. Tried to allocate 108.00 MiB (GPU 0; 23.65 GiB total capacity; 22.40 GiB already allocated; 67.06 MiB free; 22.62 GiB reserved in total by PyTorch)


    CUDA Out of Memory error but CUDA memory is almost empty

    I currently want to train this model, but I have a runtime error.

    • python train.py --config_file configs/stage1/resnext101a_384.yml MODEL.DEVICE_ID "('0')"

    • RuntimeError: CUDA out of memory. Tried to allocate 108.00 MiB (GPU 0; 23.65 GiB total capacity; 22.40 GiB already allocated; 67.06 MiB free; 22.62 GiB reserved in total by PyTorch).

    According to the message, I trained the model on a small amount of data and changed the number of [CUDA_VISIBLE_DEVICES], but I still get the runtime error.

    Do I need more GPU memory? Any idea what might cause this?


    opened by 1031kjy 2
  • multi-gpu training


    Hello,

    Thanks for the brilliant project!

    I want to know how I can train the model with multiple GPUs on one machine. Looking forward to hearing from you; thank you in advance!

    opened by TianjinTeda 1
  • RuntimeError: cannot perform reduction function min on tensor with no elements because the operation does not have an identity


    In the training process, there is always such an error:

    "File "/home/wit/DmtReid/loss/triplet_loss.py", line 102, in hard_example_mining dist_mat[is_neg].contiguous().view(N, -1), 1, keepdim=True) RuntimeError: cannot perform reduction function min on tensor with no elements because the operation does not have an identity"

    When I checked the information printed on the terminal, I found that the number of IDs in the query set and gallery set was 1, as follows:

    "=> AIC loaded Dataset statistics:

    subset | # ids | # images | # cameras

    train | 440 | 52717 | 40 query | 1 | 1103 | 18 gallery | 1 | 31238 | 18 ----------------------------------------"

    At the same time, I printed the relevant information at the error line, as follows:

    "is_neg tensor: ([[False, False, False, False, False, False, False, False], [False, False, False, False, False, False, False, False], [False, False, False, False, False, False, False, False], [False, False, False, False, False, False, False, False], [False, False, False, False, False, False, False, False], [False, False, False, False, False, False, False, False], [False, False, False, False, False, False, False, False], [False, False, False, False, False, False, False, False]], device='cuda:0') dist_mat: tensor([[8.9935e-01, 1.3378e+01, 9.9388e+00, 1.2196e+01, 8.9468e+00, 1.1898e+01, 1.3501e+01, 6.1112e+00], [1.3378e+01, 3.5938e-01, 1.2367e+01, 1.0789e+01, 1.2657e+01, 9.7843e+00, 1.0603e+01, 1.2570e+01], [9.9388e+00, 1.2367e+01, 1.0000e-06, 1.1123e+01, 6.7836e+00, 1.2270e+01, 1.2560e+01, 9.4772e+00], [1.2196e+01, 1.0789e+01, 1.1123e+01, 1.0000e-06, 1.0024e+01, 9.9253e+00, 7.1910e+00, 1.1304e+01], [8.9468e+00, 1.2657e+01, 6.7836e+00, 1.0024e+01, 1.0000e-06, 1.1718e+01, 1.1748e+01, 8.9490e+00], [1.1898e+01, 9.7843e+00, 1.2270e+01, 9.9253e+00, 1.1718e+01, 5.8055e-01, 1.0002e+01, 1.0799e+01], [1.3501e+01, 1.0603e+01, 1.2560e+01, 7.1910e+00, 1.1748e+01, 1.0002e+01, 4.8299e-01, 1.2935e+01], [6.1112e+00, 1.2570e+01, 9.4772e+00, 1.1304e+01, 8.9490e+00, 1.0799e+01, 1.2935e+01, 1.0000e-06]], device='cuda:0', grad_fn=) dist_ap: tensor([[13.5008], [13.3779], [12.5595], [12.1962], [12.6573], [12.2696], [13.5008], [12.9348]], device='cuda:0', grad_fn=)"

    I don't know where the problem is, can you help me ?

    opened by Zhanghao5200 1
  • PK sampling and loss

    Hello Dr. Luo, we recently reproduced your group's training code and have a few questions we hope you could help us with. Thanks!

    1. Why doesn't the classification loss use a margin-based loss such as CosFace? Is this related to the ID distribution of the training samples?
    2. The PK sampling in each epoch traverses almost all images. Compared with iterating over IDs and randomly sampling K images per ID, which strategy is better, or should the PK sampling scheme be chosen according to the training data distribution?
    3. Training on a single V100 indeed gives very high scores, but switching to DP training on multiple 2080Ti cards causes a noticeable drop. Is this caused by syncBN? The drop seems larger than expected.
    4. For the UDA method in post-processing, how much does it improve mAP? When we used UDA in the competition, it did not seem to help much.

    Thanks for your advice!

    opened by JunruChen-Image 1
  • Reranking CPU version returns different distance from CUDA version


    Hi team, thanks for the hard work! A small question: when I tried to switch the re-ranking from GPU to CPU, I could not get the same re-ranking result. I tried your NumPy implementation as well as removing cuda from the PyTorch implementation, and neither was able to reproduce the CUDA result. Do you have any idea why this is happening? Thanks in advance!

    opened by SuperbTUM 1
  • What does compute_P2 mean?


    Dear author, I checked the UDA training part of the code and saw there is a function named compute_P2. https://github.com/michuanhaohao/AICITY2021_Track2_DMT/blob/main/train_stage2_v2.py#L113 However, I did not find a corresponding explanation in the paper. Could you please say a bit more about this proposal? Thanks.

    opened by SuperbTUM 0
  • RuntimeError: cannot perform reduction function min on tensor with no elements because the operation does not have an identity


    Hi, I have the same problem as in the linked issue below. https://github.com/michuanhaohao/AICITY2021_Track2_DMT/issues/10#issue-963169121

    I don't know where the problem is, could you help me? Thank you.

    opened by 402650294 0
  • Query regarding Split-Test


    Hello there! Firstly, thank you for your paper for the AI City Challenge!

    I was going through your paper but had difficulty understanding some things that I hope you can spare the time to clarify. It is mentioned in section 4.2 Validation Data:

    Since each team has only 20 submissions, it is necessary to use the validation set to evaluate methods offline. We split the training set of CityFlow-V2 into the training set and the validation set. For convenience, the validation set is denoted as Split-Test. Split-Test includes 18701 images of 88 vehicles.

    So according to this, the (52,717 images, 440 vehicles) in the Original Training Set of CityFlowV2-ReID are split into

    • New Training Set consisting of 34016 images, 352 vehicles
    • New Validation Set (Split-Test) consisting of 18701 images, 88 vehicles

    The Original Test Set from CityFlowV2-ReID is untouched, as well as the queries for it.



    For the results shown in Tables 1 and 2 in the paper, it seems we should train on the augmented New Training Set and evaluate on the New Validation Set (Split-Test), with no validation happening during training.

    However, in the repository, I can't seem to find code for splitting the Original Training Set into the New Training Set and New Validation Set (Split-Test). And instead of evaluating on the New Validation Set (Split-Test), we seem to be evaluating on the Original Test Set.

    In other words, it seems like we are just training on the Original Training Set and then evaluating on the Original Test Set? I say this because in https://github.com/michuanhaohao/AICITY2021_Track2_DMT/blob/50f27363532ae712868ff1ceaf128a3bbec426ac/datasets/aic.py, self.query and self.gallery appear to be just the Original Test Set, and then in https://github.com/michuanhaohao/AICITY2021_Track2_DMT/blob/50f27363532ae712868ff1ceaf128a3bbec426ac/test.py we use the Original Test Set.

    Is this understanding correct? If it is, could you advise how we should generate the New Validation Set (Split-Test)? I am hoping to compare other methods to yours using it as an evaluation dataset.
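
    For reference, one possible way to build such a split (this is not from the repository and assumes train_label.xml follows the VeRi-style format, with Item elements carrying imageName and vehicleID attributes) is to hold out 88 of the 440 training identities:

    # Sketch of an ID-level split: reserve 88 of the 440 vehicle IDs as
    # Split-Test and keep the remaining 352 IDs for training.
    import random
    import re

    with open('AIC21/AIC21_Track2_ReID/train_label.xml', 'rb') as f:
        text = f.read().decode('utf-8', errors='ignore')

    items = re.findall(r'<Item\b[^>]*>', text)

    def attr(tag, name):
        m = re.search(name + r'="([^"]*)"', tag)
        return m.group(1) if m else None

    ids = sorted({attr(t, 'vehicleID') for t in items})
    random.seed(0)
    val_ids = set(random.sample(ids, 88))
    train_imgs = [attr(t, 'imageName') for t in items if attr(t, 'vehicleID') not in val_ids]
    val_imgs = [attr(t, 'imageName') for t in items if attr(t, 'vehicleID') in val_ids]
    print(len(train_imgs), 'train images,', len(val_imgs), 'Split-Test images')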

    opened by evantkchong 0
  • inference using pretrained weights


    Hi. Can I start with the inference using python test.py --config_file configs/stage2/101a_384.yml MODEL.DEVICE_ID "('0')" TEST.WEIGHT './logs/stage2/resnext101a_384/v1/resnext101_ibn_a_2.pth' OUTPUT_DIR './logs/stage2/resnext101a_384/v1'?

    I keep getting: File "C:\Users\diva\car_finder\model\make_model.py", line 148, in init self.state_dict()[i].copy_(param_dict[i]) RuntimeError: The size of tensor a (64) must match the size of tensor b (128) at non-singleton dimension 0

    Can someone help? Thanks.

    opened by divastar 1
Owner
Hao Luo
Ph.D., Alibaba DAMO Academy & Zhejiang University