Overview

Online Multiple Object Tracking with Cross-Task Synergy

This repository is the implementation of the CVPR 2021 paper "Online Multiple Object Tracking with Cross-Task Synergy".

[Figure: Structure of TADAM]

Installation

Tested on python=3.8 with torch=1.8.1 and torchvision=0.9.1.

It should also be compatible with python>=3.6, torch>=1.4.0, and torchvision>=0.4.0, though lower versions have not been tested.

1. Clone the repository

git clone https://github.com/songguocode/TADAM.git

2. Create conda env and activate

conda create -n TADAM python=3.8
conda activate TADAM

3. Install required packages

pip install torch torchvision scipy opencv-python yacs
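
To match the tested versions exactly, pinned versions can be installed instead (a variant sketch; wheel availability depends on your platform and CUDA version):

pip install torch==1.8.1 torchvision==0.9.1 scipy opencv-python yacs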

All models are set to run on GPU, so make sure the graphics card driver and CUDA are properly installed.

To check if torch is running with CUDA, run in python:

import torch
torch.cuda.is_available()

CUDA is working if True is returned.

See PyTorch Official Site if torch is not installed or not working properly.

4. Clone MOTChallenge benchmark evaluation code

git clone https://github.com/JonathonLuiten/TrackEval.git

There should now be two folders: TADAM and TrackEval.

Refer to MOTChallenge-Official for instructions.

Download the provided data.zip, unzip it as a folder named data, and copy it inside TrackEval as TrackEval/data.
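
After this step, the expected layout is (a sketch showing only the relevant folders):

TrackEval/
    data/
TADAM/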

Move into the TADAM folder:

cd TADAM

5. Prepare MOTChallenge data

Download MOT16, MOT17, MOT17Det, and MOT20, and place them inside a datasets folder.
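
The expected layout (a sketch based on the dataset names above):

datasets/
    MOT16/
    MOT17/
    MOT17Det/
    MOT20/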

Two options to provide the datasets' location for training/testing (a sketch of option b follows this list):

  • a. Add a symbolic link inside the TADAM folder: ln -s path_of_datasets datasets
  • b. In TADAM/configs/config.py, assign __C.PATHS.DATASET_ROOT the value of path_of_datasets
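
A minimal sketch of option b, assuming __C is the config object defined in TADAM/configs/config.py:

__C.PATHS.DATASET_ROOT = "/absolute/path/to/datasets"  # replace with the actual datasets location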

6. Download Models

The training base of TADAM is a detector pretrained on COCO. The base model coco_checkpoint.pth is provided on Google Drive.

Trained models are also provided for reference:

  • TADAM_MOT16.pth
  • TADAM_MOT17.pth
  • TADAM_MOT20.pth

Create a folder output/models and place all models inside.
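
The expected layout (a sketch):

output/
    models/
        coco_checkpoint.pth
        TADAM_MOT16.pth
        TADAM_MOT17.pth
        TADAM_MOT20.pth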

Train

  1. Training on a single GPU, using MOT17 as an example
python -m lib.training.train TADAM_MOT17 --config TADAM_MOT17

The first TADAM_MOT17 specifies the output name of the trained model, which can be changed as preferred.

The second TADAM_MOT17 refers to the config file lib/configs/TADAM_MOT17.yaml, which loads the training parameters. Switch the config to train on a different dataset; config files are located in lib/configs.
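
For example, to train on MOT20 instead (assuming the analogous config file lib/configs/TADAM_MOT20.yaml exists alongside the MOT17 one):

python -m lib.training.train TADAM_MOT20 --config TADAM_MOT20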

  2. Training on multiple GPUs with Distributed Data Parallel
OMP_NUM_THREADS=1 python -m torch.distributed.launch --nproc_per_node=2 --use_env -m lib.training.train TADAM_MOT17 --config TADAM_MOT17

The argument --nproc_per_node=2 specifies how many GPUs are used for training; here, 2 cards are used.

The trained model will be stored inside output/models under the specified output name.

Evaluate

python -m lib.tracking.test_tracker --result-name xxx --config TADAM_MOT17 --evaluation

Change xxx to the preferred result name. --evaluation toggles on evaluation right after obtaining tracking results; remove it to only produce results without evaluation. Evaluation requires results for all sequences of the specified dataset.

Either run evaluation after training, or download and test the provided trained models.

Note that if the output name of the trained model is changed, it must be specified in the corresponding .yaml config file, i.e. replace the value in the MODEL: TADAM_MOT17.pth line with the expected model file name.
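
For example, if the model was saved under the hypothetical output name my_TADAM_MOT17, the corresponding line in lib/configs/TADAM_MOT17.yaml would read:

MODEL: my_TADAM_MOT17.pth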

Code from TrackEval is used for evaluation, and it is set to run on multiple cores (8 cores) by default.

To run an evaluation after tracking results (sequence result files) have been obtained, run:

python -m lib.utils.official_benchmark --result-name xxx --config TADAM_MOT17

Replace xxx with the result name, and choose config accordingly.

Tracking results can be found in output/results under the respective dataset name folders. Detailed results are stored in an xxx_detailed.csv file, while the summary is given in an xxx_summary.txt file.
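
A sketch of the layout for MOT17 (per-sequence .txt files are assumed to be grouped under the result name, as in the logs in the comments further below; the sequence file name is illustrative):

output/
    results/
        MOT17/
            xxx/
                MOT17-02-FRCNN.txt
            xxx_detailed.csv
            xxx_summary.txt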

Results for reference

The evaluation results on the train sets are given here for reference. See the paper for the reported test-set results.

  • MOT16
MOTA	MOTP	MODA	CLR_Re	CLR_Pr	MTR	PTR	MLR	CLR_TP	CLR_FN
63.7	91.6	63.9	64.5	99.0	35.6	40.8	23.6	71242	39165
CLR_FP	IDSW	MT	PT	ML	Frag	sMOTA	IDF1	IDR	IDP
689	186	184	211	122	316	58.3	68.0	56.2	86.2
IDTP	IDFN	IDFP	Dets	GT_Dets	IDs	GT_IDs
62013	48394	9918	71931	110407	446	517
  • MOT17
MOTA	MOTP	MODA	CLR_Re	CLR_Pr	MTR	PTR	MLR	CLR_TP	CLR_FN
68.0	91.3	68.2	69.0	98.8	43.5	37.5	19.0	232600	104291
CLR_FP	IDSW	MT	PT	ML	Frag	sMOTA	IDF1	IDR	IDP
2845	742	712	615	311	1182	62.0	71.6	60.8	87.0
IDTP	IDFN	IDFP	Dets	GT_Dets	IDs	GT_IDs
204819	132072	30626	235445	336891	1455	1638
  • MOT20
MOTA	MOTP	MODA	CLR_Re	CLR_Pr	MTR	PTR	MLR	CLR_TP	CLR_FN
80.2	87.0	80.4	82.2	97.9	64.0	28.8	7.18	932899	201715
CLR_FP	IDSW	MT	PT	ML	Frag	sMOTA	IDF1	IDR	IDP
20355	2275	1418	638	159	2737	69.5	72.3	66.5	79.2
IDTP	IDFN	IDFP	Dets	GT_Dets	IDs	GT_IDs
754621	379993	198633	953254	1134614	2953	2215

Reproduced results may differ slightly; small variations are acceptable.

Visualization

A visualization tool is provided to preview datasets' ground truths, provided detections, and generated tracking results.

python -m lib.utils.visualization --config TADAM_MOT17 --which-set train --sequence 02 --public-detection FRCNN --result xxx --start-frame 1 --scale 0.8

Specify config files, train/test split, and sequence with --config, --which-set, --sequence respectively. --public-detection should only be specified for MOT17.

Replace --result xxx with the tracking result name. --start-frame 1 means viewing starts from frame 1, while --scale 0.8 resizes the viewing window by the given ratio.

Key commands in the visualization window:

  • "<": previous frame
  • ">": next frame
  • "t": toggle between viewing ground_truths, provided detections, and tracking results
  • "s": save current frame with all rendered elements
  • "h": hide frame information on window's top-left corner
  • "i": hide identity index on bounding boxes' top-left corner
  • "Esc" or "q": exit program

Pretrain detector on COCO

The base detector is pretrained on the COCO dataset before training on MOT. A Faster R-CNN FPN with a ResNet101 backbone is adopted in this code; it can be replaced by other similar detectors with code modifications.

Refer to Object detection reference training scripts for how to train a PyTorch-based detector.

See Tracking without bells and whistles for a hands-on Jupyter notebook, which is also based on the aforementioned reference code.

Publication

If you use the code in your research, please cite:

@InProceedings{TADAM_2021_CVPR,
    author = {Guo, Song and Wang, Jingya and Wang, Xinchao and Tao, Dacheng},
    title = {Online Multiple Object Tracking With Cross-Task Synergy},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month = {June},
    year = {2021},
}
Comments
  • visualize target attention and distractor attention

    Have you tried to visualize the target attention and distractor attention? I am confused about whether the network can learn their real meaning, i.e. distinguish a box's embedding between the target's embedding and the distractor's embedding.

    opened by chenrxi 6
  • Testing: Low FPS

    Hi

    Great work. I've been running test_tracker but I'm getting <2 FPS on average. Is there any particular reason why it's rather slow? It's likely not bounded by the GPU, as its utilisation is rather low. The CPU isn't much utilised either.

    Any advice appreciated.

    opened by hoonkai 4
  • Can't find the dataset error

    Hi, thanks for your excellent work. Unfortunately I ran into the problems below:

    No such file or directory: 'datasets/MOT17/MOT17Det/train'

    Could you please help me find what's wrong with it?

    opened by Jonyond-lin 3
  • do you train your online tracker without public detection?

    Have you ever trained your online tracker without public detections? If so, how do you detect new tracks in private-detection mode and distinguish them from tracks that already exist? Also, looking forward to your code and models becoming available; thanks a lot for your great work.

    opened by chenrxi 1
  • Scale up gradient

    Hi,

    Thanks for your great work! Could you please tell me the reason for the "Scale up gradient" step shown below: https://github.com/songguocode/TADAM/blob/abd0b7422c3582e36c928778894cee8a159f896e/lib/modules/detector.py#L420 Thanks.

    opened by Leo63963 0
  • Trajectory

    I tried to visualize on the test set and the following error occurred. Importantly, visualization on the train set works normally.

    2022-01-27 12:57:01 [INFO]: Merged config from 'lib/configs/TADAM_MOT16.yaml'
    2022-01-27 12:57:01 [INFO]: Showing MOT16/test/MOT16-06
    2022-01-27 12:57:01 [INFO]: Loaded result file at 'output/results/MOT16/test16result/MOT16-06.txt'
    Traceback (most recent call last):
      File "/home/lab509/miniconda3/envs/lxg/lib/python3.8/runpy.py", line 194, in _run_module_as_main
        return _run_code(code, main_globals, None,
      File "/home/lab509/miniconda3/envs/lxg/lib/python3.8/runpy.py", line 87, in _run_code
        exec(code, run_globals)
      File "/media/disk1-1t/liangxiaoguo/TADAM/lib/utils/visualization.py", line 360, in <module>
        show_mot(dataloader, config.PATHS.DATASET_ROOT, config.NAMES.DATASET, args.which_set, sequence,
      File "/media/disk1-1t/liangxiaoguo/TADAM/lib/utils/visualization.py", line 181, in show_mot
        det_scores = det_scores[0].detach().clone().cpu().numpy()
    AttributeError: 'NoneType' object has no attribute 'detach'

    opened by LiangXiaoguo 2
  • --which_set test

    Thank you very much for your excellent work! I tried to add "--which_set test" to get the test results, but failed:

    python -m lib.tracking.tracker_test --result-name TADAM_MOT17_test --config TADAM_MOT17 --which_set test

    2021-11-16 09:54:25 [INFO]: Merged config from 'lib/configs/TADAM_MOT17.yaml'
    2021-11-16 09:54:25 [INFO]: Sequence: MOT17-01-DPM
    2021-11-16 09:54:26 [INFO]: Successfully loaded pretrained weights from '/home/geqian/Download/TADAM/lib/configs/output/models/TADAM_MOT17.pth'
    2021-11-16 09:54:26 [WARNING]: ** The following layers are discarded due to unmatched keys or layer size: '['roi_heads.id_module.classifier.weight', 'roi_heads.id_module.classifier.bias', 'memory_net.loss_classifier.weight', 'memory_net.loss_classifier.bias']'
    2021-11-16 09:54:28 [INFO]: Started processing frames
    Traceback (most recent call last):
      File "/home/geqian/miniconda3/envs/TADAM/lib/python3.8/runpy.py", line 194, in _run_module_as_main
        return _run_code(code, main_globals, None,
      File "/home/geqian/miniconda3/envs/TADAM/lib/python3.8/runpy.py", line 87, in _run_code
        exec(code, run_globals)
      File "/home/geqian/Download/TADAM/lib/tracking/tracker_test.py", line 158, in <module>
        mytest(config, logger, dataset=config.NAMES.DATASET, which_set=args.which_set,
      File "/home/geqian/Download/TADAM/lib/tracking/tracker_test.py", line 119, in mytest
        mytest_sequence(dataloader, config, logger, seq_result_file, plot_frames)
      File "/home/geqian/Download/TADAM/lib/tracking/tracker_test.py", line 40, in mytest_sequence
        for frame_id, batch in enumerate(dataloader):
      File "/home/geqian/miniconda3/envs/TADAM/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 517, in __next__
        data = self._next_data()
      File "/home/geqian/miniconda3/envs/TADAM/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1199, in _next_data
        return self._process_data(data)
      File "/home/geqian/miniconda3/envs/TADAM/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1225, in _process_data
        data.reraise()
      File "/home/geqian/miniconda3/envs/TADAM/lib/python3.8/site-packages/torch/_utils.py", line 429, in reraise
        raise self.exc_type(msg)
    ValueError: Caught ValueError in DataLoader worker process 0.
    Original Traceback (most recent call last):
      File "/home/geqian/miniconda3/envs/TADAM/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 202, in _worker_loop
        data = fetcher.fetch(index)
      File "/home/geqian/miniconda3/envs/TADAM/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "/home/geqian/miniconda3/envs/TADAM/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "/home/geqian/Download/TADAM/lib/dataset/mot.py", line 273, in __getitem__
        det_boxes, det_scores, gt_boxes, gt_ids, gt_visibilities = self._get_annotation(idx)
    ValueError: not enough values to unpack (expected 5, got 4)

    opened by csgeqian 6
  • test results

    Thank you very much for your excellent work! I tried to add "--which_set: test" to the .yaml file to get the test results, but failed.

    opened by LiangXiaoguo 5