DanceTrack: Multiple Object Tracking in Uniform Appearance and Diverse Motion

Overview


DanceTrack is a benchmark for tracking multiple objects in uniform appearance and diverse motion.

DanceTrack provides bounding-box and identity annotations.

DanceTrack contains 100 videos: 40 for training (annotations public), 25 for validation (annotations public), and 35 for testing (annotations withheld). To evaluate on the test set, please see CodaLab.


Paper

DanceTrack: Multi-Object Tracking in Uniform Appearance and Diverse Motion

Dataset

Download the dataset from Google Drive or Baidu Drive (code:awew).

Organize the dataset as follows (a note on seqinfo.ini follows the tree):

{DanceTrack ROOT}
|-- dancetrack
|   |-- train
|   |   |-- dancetrack0001
|   |   |   |-- img1
|   |   |   |   |-- 00000001.jpg
|   |   |   |   |-- ...
|   |   |   |-- gt
|   |   |   |   |-- gt.txt            
|   |   |   |-- seqinfo.ini
|   |   |-- ...
|   |-- val
|   |   |-- ...
|   |-- test
|   |   |-- ...
|   |-- train_seqmap.txt
|   |-- val_seqmap.txt
|   |-- test_seqmap.txt
|-- TrackEval
|-- tools
|-- ...
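
Each seqinfo.ini follows the MOT Challenge convention: a [Sequence] section with fields such as name, imDir, frameRate, seqLength, imWidth, imHeight, and imExt. A minimal sketch of reading it with Python's standard configparser (paths are illustrative):

import configparser

# Read the per-sequence metadata shipped with each video.
cfg = configparser.ConfigParser()
cfg.read("dancetrack/train/dancetrack0001/seqinfo.ini")
seq = cfg["Sequence"]
print(seq["name"], seq["seqLength"], "frames at", seq["frameRate"], "fps")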

We align our dataset annotations with the MOT Challenge format, so each line in gt.txt contains:

<frame>, <id>, <bb_left>, <bb_top>, <bb_width>, <bb_height>, 1, 1, 1
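
As a quick sanity check, the annotations can be parsed with plain Python. A minimal sketch (load_gt is a hypothetical helper, not part of this repo):

import csv

def load_gt(path):
    # Each line: frame, id, bb_left, bb_top, bb_width, bb_height, 1, 1, 1.
    # The trailing three fields are fixed to 1 in DanceTrack.
    records = []
    with open(path) as f:
        for row in csv.reader(f):
            frame, track_id = int(row[0]), int(row[1])
            left, top, width, height = map(float, row[2:6])
            records.append((frame, track_id, left, top, width, height))
    return records

boxes = load_gt("dancetrack/train/dancetrack0001/gt/gt.txt")
print(len(boxes), "annotated boxes")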

Evaluation

We use ByteTrack as an example of using DanceTrack. For training details, please see the instructions. We provide the trained models in Google Drive or Baidu Drive (code: awew).

To run evaluation with our provided toolkit, organize the results of the validation set as follows:

{DanceTrack ROOT}
|-- val
|   |-- TRACKER_NAME
|   |   |-- dancetrack000x.txt
|   |   |-- ...
|   |-- ...

where dancetrack000x.txt is the output file of the video episode dancetrack000x, each line of which contains:

<frame>, <id>, <bb_left>, <bb_top>, <bb_width>, <bb_height>, <conf>, -1, -1, -1
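
For reference, a minimal sketch of writing results in this format (write_results and the single example track below are hypothetical):

def write_results(path, tracks):
    # Each line: frame, id, bb_left, bb_top, bb_width, bb_height, conf, -1, -1, -1.
    with open(path, "w") as f:
        for frame, track_id, left, top, width, height, conf in tracks:
            f.write(f"{frame},{track_id},{left:.2f},{top:.2f},"
                    f"{width:.2f},{height:.2f},{conf:.2f},-1,-1,-1\n")

# Example: one box of track 1 in frame 1 with confidence 0.9.
write_results("val/TRACKER_NAME/dancetrack000x.txt",
              [(1, 1, 100.0, 200.0, 50.0, 120.0, 0.9)])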

Then, simply run the evaluation code:

python3 TrackEval/scripts/run_mot_challenge.py --SPLIT_TO_EVAL val  --METRICS HOTA CLEAR Identity  --GT_FOLDER dancetrack/val --SEQMAP_FILE dancetrack/val_seqmap.txt --SKIP_SPLIT_FOL True   --TRACKERS_TO_EVAL '' --TRACKER_SUB_FOLDER ''  --USE_PARALLEL True --NUM_PARALLEL_CORES 8 --PLOT_CURVES False --TRACKERS_FOLDER val/TRACKER_NAME 

Tracker    HOTA  DetA  AssA  MOTA  IDF1
ByteTrack  47.1  70.5  31.5  88.2  51.9

We also provide a visualization script. The usage is as follows:

python3 tools/txt2video_dance.py --img_path dancetrack --split val --tracker TRACKER_NAME

Competition

Organize the results of the test set as follows:

{DanceTrack ROOT}
|-- test
|   |-- tracker
|   |   |-- dancetrack000x.txt
|   |   |-- ...

Each line of dancetrack000x.txt contains:

<frame>, <id>, <bb_left>, <bb_top>, <bb_width>, <bb_height>, <conf>, -1, -1, -1

Archive the tracker folder as tracker.zip and submit it to CodaLab. Please note: (1) archive the tracker folder itself, not the individual txt files; (2) the folder name must be tracker.
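
If you prefer to build the archive programmatically, here is a sketch using Python's standard library (it assumes you run it from {DanceTrack ROOT}, with results under test/tracker as above):

import shutil

# Produce tracker.zip containing the tracker folder itself,
# not bare txt files, as the CodaLab submission expects.
shutil.make_archive("tracker", "zip", root_dir="test", base_dir="tracker")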

The returned results will be:

Tracker  HOTA  DetA  AssA  MOTA  IDF1
tracker  47.7  71.0  32.1  89.6  53.9

For more detailed metrics, including per-video metrics, click "download output from scoring step" on CodaLab.

Run the visualization code:

python3 tools/txt2video_dance.py --img_path dancetrack --split test --tracker tracker

Joint-Training

We use joint training with other datasets to predict masks, poses, and depth, with CenterNet provided as an example. For details of joint training, please see the joint-training instructions. We provide the trained models in Google Drive or Baidu Drive (code: awew).

For mask demo, run

cd CenterNet/src
python3 demo.py ctseg --demo  ../../dancetrack/val/dancetrack000x/img1 --load_model ../models/dancetrack_coco_mask.pth --debug 4 --tracking 
cd ../..
python3 tools/img2video.py --img_file CenterNet/exp/ctseg/default/debug --video_name dancetrack000x_mask.avi

For pose demo, run

cd CenterNet/src
python3 demo.py multi_pose --demo  ../../dancetrack/val/dancetrack000x/img1 --load_model ../models/dancetrack_coco_pose.pth --debug 4 --tracking 
cd ../..
python3 tools/img2video.py --img_file CenterNet/exp/multi_pose/default/debug --video_name dancetrack000x_pose.avi

For depth demo, run

cd CenterNet/src
python3 demo.py ddd --demo  ../../dancetrack/val/dancetrack000x/img1 --load_model ../models/dancetrack_kitti_ddd.pth --debug 4 --tracking --test_focal_length 640 --world_size 16 --out_size 128
cd ../..
python3 tools/img2video.py --img_file CenterNet/exp/ddd/default/debug --video_name dancetrack000x_ddd.avi

Agreement

  • The dataset of DanceTrack is available for non-commercial research purposes only.
  • All videos and images of DanceTrack are obtained from the Internet and are not the property of HKU, CMU, or ByteDance. These three organizations are not responsible for the content or the meaning of these videos and images.
  • The code of DanceTrack is released under the MIT License.

Acknowledgement

The evaluation metrics and code are from MOT Challenge and TrackEval. The inference code is from ByteTrack. The joint-training code is modified from CenterTrack and CenterNet, where the instance segmentation code is from CenterNet-CondInst. Thanks for their wonderful and pioneering work!

Citation

If you use DanceTrack in your research or wish to refer to the baseline results published here, please use the following BibTeX entry:

@article{peize2021dance,
  title   =  {DanceTrack: Multi-Object Tracking in Uniform Appearance and Diverse Motion},
  author  =  {Peize Sun and Jinkun Cao and Yi Jiang and Zehuan Yuan and Song Bai and Kris Kitani and Ping Luo},
  journal =  {arXiv preprint arXiv:2111.14690},
  year    =  {2021}
}