Informative-tracking-benchmark
Informative tracking benchmark (ITB)
- higher diversity. It contains 9 representative scenarios and 180 diverse videos.
- more effective. Sequences are carefully selected based on challenging level, discriminative strength, and density of appearance variations.
- more efficient. It is constructed from 7% of the 1.2M frames in existing benchmarks, saving 93% of evaluation time (3,625 seconds on the informative benchmark vs. 50,000 seconds on all benchmarks) for a real-time tracker (24 frames per second).
- more rigorous comparisons. (All baseline methods are re-evaluated under the same protocol, e.g., using the same training set and fine-tuning hyper-parameters on a specified validation set.)
An Informative Tracking Benchmark, Xin Li, Qiao Liu, Wenjie Pei, Qiuhong Shen, Yaowei Wang, Huchuan Lu, Ming-Hsuan Yang [Paper]
News:
- 2021.12.09 The informative tracking benchmark is released.
Introduction
Along with the rapid progress of visual tracking, existing benchmarks become less informative due to redundancy of samples and weak discrimination between current trackers, making evaluations on all datasets extremely time-consuming. Thus, a small and informative benchmark, which covers all typical challenging scenarios to facilitate assessing tracker performance, is of great interest. In this work, we develop a principled way to construct a small and informative tracking benchmark (ITB) with 7% of the 1.2M frames of existing and newly collected datasets, which enables efficient evaluation while ensuring effectiveness. Specifically, we first design a quality assessment mechanism to select the most informative sequences from existing benchmarks, taking into account 1) challenging level, 2) discriminative strength, and 3) density of appearance variations. Furthermore, we collect additional sequences to ensure the diversity and balance of tracking scenarios, leading to a total of 20 sequences for each scenario. By analyzing the results of 15 state-of-the-art trackers re-trained on the same data, we determine the effective methods for robust tracking under each scenario and demonstrate new challenges for future research directions in this field.
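To make the selection mechanism concrete, below is a minimal Python sketch that ranks sequences by a combined quality score over the three factors. The per-sequence statistics (challenge, discrimination, variation density) and the equal weights are assumed placeholders for illustration only, not the exact scoring function defined in the paper.

# Illustrative sketch only: the three per-sequence statistics and the
# equal weights are assumptions, not the paper's exact formulation.
def quality_score(challenge, discrimination, variation_density,
                  weights=(1.0, 1.0, 1.0)):
    w1, w2, w3 = weights
    return w1 * challenge + w2 * discrimination + w3 * variation_density

def select_sequences(stats, k=20):
    """Keep the k highest-scoring sequences of a scenario (ITB uses 20)."""
    # stats: list of (name, challenge, discrimination, variation_density)
    ranked = sorted(stats, key=lambda s: quality_score(*s[1:]), reverse=True)
    return [name for name, *_ in ranked[:k]]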
Dataset Samples
Dataset Download (8.15 GB) and Preparation
[GoogleDrive] [BaiduYun (Code: intb)]
After downloading, you should prepare the data in the following structure:
ITB
|——————Scenario_folder1
| └——————seq1
| | └————xxxx.jpg
| | └————groundtruth.txt
| └——————seq2
| └——————...
|——————Scenario_folder2
|——————...
└——————ITB.json
Both txt and json annotation files are provided.
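As a quick sanity check of the prepared layout, the snippet below walks the scenario folders and parses each groundtruth.txt. It assumes the common one-box-per-line, comma-separated x,y,w,h annotation format; verify this against the files you download.

import os
import glob

ITB_ROOT = '/path-to/ITB'  # adjust to your local path

def load_groundtruth(seq_dir):
    # Assumed format: one "x,y,w,h" box per line in groundtruth.txt.
    with open(os.path.join(seq_dir, 'groundtruth.txt')) as f:
        return [[float(v) for v in line.strip().split(',')]
                for line in f if line.strip()]

for scenario in sorted(os.listdir(ITB_ROOT)):
    scenario_dir = os.path.join(ITB_ROOT, scenario)
    if not os.path.isdir(scenario_dir):
        continue  # e.g., ITB.json sits next to the scenario folders
    for seq_dir in sorted(glob.glob(os.path.join(scenario_dir, '*'))):
        frames = sorted(glob.glob(os.path.join(seq_dir, '*.jpg')))
        boxes = load_groundtruth(seq_dir)
        print(scenario, os.path.basename(seq_dir), len(frames), len(boxes))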
Evaluation Toolkit
The evaluation toolkit is written in Python. We also provide interfaces to the pysot and pytracking toolkits.
You may follow the steps below to evaluate your tracker.
- Download this project:
git clone git@github.com:XinLi-zn/Informative-tracking-benchmark.git
- Run your method in one of the following ways (a standalone end-to-end sketch is given after these steps):
base interface.
Integrate your method into the base_toolkit/test_tracker.py file and then run the command below to evaluate your tracker:
CUDA_VISIBLE_DEVICES=0 python test_tracker.py --dataset ITB --dataset_path /path-to/ITB
pytracking interface. (pytracking link)
Merge the files in pytracking_toolkit/pytracking into the counterpart files in your pytracking toolkit and then run the command below to evaluate your tracker:
CUDA_VISIBLE_DEVICES=0 python run_tracker.py tracker_name tracker_parameter --dataset ITB --descrip
pysot interface. (pysot link)
Put the pysot_toolkit into your tracker folder and add your tracker to the 'test.py' file in the pysot_toolkit. Then run the command below to evaluate your tracker:
CUDA_VISIBLE_DEVICES=0 python -u pysot_toolkit/test.py --dataset ITB --name 'tracker_name'
- Compute the performance score (a minimal illustration of the metric follows below):
Here, we use the performance analysis code in the pysot_toolkit to compute the score. Put the pysot_toolkit into your tracker folder and use the command below to compute the performance score:
python eval.py -p ./results-example/ -d ITB -t transt
The above command computes the score of the results placed in './pysot_toolkit/results-example/ITB/transt*/*.txt' and reports the overall results as well as the results of each scenario.
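If you prefer to run a tracker outside the provided toolkits, the sketch below shows the expected end-to-end flow: initialize on the first groundtruth box, predict one box per frame, and write one comma-separated box per line into a per-sequence txt file that a pysot-style results folder can hold. DummyTracker is a hypothetical placeholder for your method, and the exact results layout should be checked against the toolkit.

import os
import glob
import cv2

class DummyTracker:
    """Hypothetical placeholder; replace init/update with your tracker."""
    def init(self, image, box):
        self.box = list(box)   # [x, y, w, h]
    def update(self, image):
        return self.box        # a real tracker predicts a new box here

def run_sequence(seq_dir, results_dir, tracker):
    frames = sorted(glob.glob(os.path.join(seq_dir, '*.jpg')))
    with open(os.path.join(seq_dir, 'groundtruth.txt')) as f:
        init_box = [float(v) for v in f.readline().strip().split(',')]
    tracker.init(cv2.imread(frames[0]), init_box)
    boxes = [init_box]
    for frame_path in frames[1:]:
        boxes.append(list(tracker.update(cv2.imread(frame_path))))
    # One "x,y,w,h" line per frame, in a pysot-style results folder.
    os.makedirs(results_dir, exist_ok=True)
    out_path = os.path.join(results_dir, os.path.basename(seq_dir) + '.txt')
    with open(out_path, 'w') as f:
        for box in boxes:
            f.write(','.join(f'{v:.2f}' for v in box) + '\n')

For example, run_sequence('/path-to/ITB/Scenario_folder1/seq1', './results-example/ITB/mytracker', DummyTracker()) writes seq1.txt under the tracker's results folder.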
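For reference, OTB-style toolkits such as pysot report success as the area under the curve of the fraction of frames whose prediction-groundtruth IoU exceeds each threshold (21 evenly spaced thresholds from 0 to 1 is the common convention). The minimal version below illustrates the metric family only and is not guaranteed to match the pysot_toolkit implementation detail for detail.

import numpy as np

def iou(pred, gt):
    """IoU between two [x, y, w, h] boxes."""
    x1 = max(pred[0], gt[0])
    y1 = max(pred[1], gt[1])
    x2 = min(pred[0] + pred[2], gt[0] + gt[2])
    y2 = min(pred[1] + pred[3], gt[1] + gt[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = pred[2] * pred[3] + gt[2] * gt[3] - inter
    return inter / union if union > 0 else 0.0

def success_auc(pred_boxes, gt_boxes, thresholds=np.linspace(0, 1, 21)):
    """AUC of the success plot over IoU thresholds (OTB-style)."""
    overlaps = np.array([iou(p, g) for p, g in zip(pred_boxes, gt_boxes)])
    return float(np.mean([np.mean(overlaps > t) for t in thresholds]))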
Acknowledgement
We select several sequences with the highest quality scores (defined in the paper) from existing tracking datasets, including OTB2015, NFS, UAV123, NUS-PRO, VisDrone, and LaSOT. Many thanks to the authors for their great work!
- [OTB2015] Object tracking benchmark. Yi Wu, Jongwoo Lim, and Ming-Hsuan Yang. IEEE TPAMI, 2015.
- [NFS] Need for speed: A benchmark for higher frame rate object tracking. Hamed Kiani Galoogahi, Ashton Fagg, et al. ICCV, 2017.
- [UAV123] A benchmark and simulator for UAV tracking. Matthias Mueller, Neil Smith, and Bernard Ghanem. ECCV, 2016.
- [NUS-PRO] NUS-PRO: A new visual tracking challenge. Annan Li, Min Lin, Yi Wu, Ming-Hsuan Yang, and Shuicheng Yan. IEEE TPAMI, 2015.
- [VisDrone] VisDrone-DET2018: The vision meets drone object detection in image challenge results. Pengfei Zhu, Longyin Wen, et al. ECCVW, 2018.
- [LaSOT] LaSOT: A high-quality benchmark for large-scale single object tracking. Heng Fan, Liting Lin, et al. CVPR, 2019.
Contact
If you have any questions about this benchmark, please feel free to contact Xin Li at [email protected].