Python Single Object Tracking Evaluation

Overview

pysot-toolkit

The purpose of this repo is to provide an evaluation API for current single object tracking datasets, including VOT2016, VOT2018, VOT2018-LT, OTB100 (OTB50, CVPR13), UAV123, NFS, and LaSOT.

Install

git clone https://github.com/StrangerZhang/pysot-toolkit
cd pysot-toolkit
pip install -r requirements.txt
cd pysot/utils/
python setup.py build_ext --inplace
# if you want to draw graphs, LaTeX must be installed on your system
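
If the build succeeds, the compiled Cython region utilities should be importable. A minimal sanity check in Python (assuming pysot-toolkit is on your PYTHONPATH; the argument format mirrors the restarted-evaluation example later in this README, i.e. an [x, y, w, h] rectangle plus an image-size tuple):

from pysot.utils.region import vot_overlap

# Overlap of two identical [x, y, w, h] rectangles inside a 100x100 frame.
# This should print 1.0 if the extension compiled and imports correctly.
print(vot_overlap([10, 10, 50, 50], [10, 10, 50, 50], (100, 100)))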

Download Dataset

Download the json files used in our toolkit from Baidu Pan or Google Drive

  1. Put CVPR13.json, OTB100.json, OTB50.json in the OTB100 dataset directory (you need to copy Jogging to Jogging-1 and Jogging-2, and Skating2 to Skating2-1 and Skating2-2, or use symlinks; see the sketch after this list)

    The directory should have the below format

    | -- OTB100/
         | -- Basketball
         | ......
         | -- Woman
         | -- OTB100.json
         | -- OTB50.json
         | -- CVPR13.json

  2. Put all other json files in the corresponding dataset directories, as in step 1
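
The Jogging/Skating2 duplication mentioned in step 1 can be scripted. A minimal sketch, assuming a hypothetical OTB100 root path as below; it copies the sequence folders, but symlinks work just as well:

import os
import shutil

otb_root = "/path/to/OTB100"  # hypothetical path, adjust to your dataset root

# OTB100 annotates Jogging and Skating2 twice (two targets per sequence),
# so the image folders must exist under both names.
for src, dsts in [("Jogging", ["Jogging-1", "Jogging-2"]),
                  ("Skating2", ["Skating2-1", "Skating2-2"])]:
    for dst in dsts:
        dst_path = os.path.join(otb_root, dst)
        if not os.path.exists(dst_path):
            shutil.copytree(os.path.join(otb_root, src), dst_path)
            # or, to save disk space: os.symlink(os.path.join(otb_root, src), dst_path)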

Usage

1. Evaluation on VOT2018 (VOT2016)

# --dataset_dir: dataset path
# --dataset: dataset name (VOT2018, VOT2016)
# --tracker_result_dir: directory containing tracker results
# --trackers: tracker names
cd /path/to/pysot-toolkit
python bin/eval.py \
    --dataset_dir /path/to/dataset/root \
    --dataset VOT2018 \
    --tracker_result_dir /path/to/tracker/dir \
    --trackers ECO UPDT SiamRPNpp

# you will see
------------------------------------------------------------
|Tracker Name| Accuracy | Robustness | Lost Number |  EAO  |
------------------------------------------------------------
| SiamRPNpp  |  0.600   |   0.234    |    50.0     | 0.415 |
|    UPDT    |  0.536   |   0.184    |    39.2     | 0.378 |
|    ECO     |  0.484   |   0.276    |    59.0     | 0.280 |
------------------------------------------------------------
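
The same numbers can also be computed from Python instead of the command line. A minimal sketch, assuming the benchmark classes imported by bin/eval.py (see the SyntaxError traceback in the comments below) keep the usual pysot-toolkit interface of dataset.set_tracker(...) plus Benchmark(dataset).eval(trackers); treat these exact call signatures as assumptions and check bin/eval.py if they differ:

from pysot.datasets import DatasetFactory
from pysot.evaluation import AccuracyRobustnessBenchmark, EAOBenchmark

trackers = ['ECO', 'UPDT', 'SiamRPNpp']
dataset = DatasetFactory.create_dataset(name='VOT2018',
                                        dataset_root='/path/to/dataset/root',
                                        load_img=False)
# tell the dataset where the per-tracker result files live
dataset.set_tracker('/path/to/tracker/dir', trackers)

ar_result = AccuracyRobustnessBenchmark(dataset).eval(trackers)   # accuracy / robustness / lost number
eao_result = EAOBenchmark(dataset).eval(trackers)                 # expected average overlap (EAO)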

2. Evaluation on OTB100 (UAV123, NFS, LaSOT)

Converted *.txt tracking results will be released soon.

# --dataset_dir: dataset path
# --dataset: dataset name (OTB100, UAV123, NFS, LaSOT)
# --tracker_result_dir: directory containing tracker results
# --trackers: tracker names
# --num: number of evaluation threads
# --show_video_level: whether to show per-video results
# --vis: draw success/precision plots
cd /path/to/pysot-toolkit
python bin/eval.py \
    --dataset_dir /path/to/dataset/root \
    --dataset OTB100 \
    --tracker_result_dir /path/to/tracker/dir \
    --trackers SiamRPN++ C-COT DaSiamRPN ECO \
    --num 4 \
    --show_video_level \
    --vis

# you will see (Normalized Precision is not used in OTB evaluation)
-----------------------------------------------------
|Tracker name| Success | Norm Precision | Precision |
-----------------------------------------------------
| SiamRPN++  |  0.696  |     0.000      |   0.914   |
|    ECO     |  0.691  |     0.000      |   0.910   |
|   C-COT    |  0.671  |     0.000      |   0.898   |
| DaSiamRPN  |  0.658  |     0.000      |   0.880   |
-----------------------------------------------------

-----------------------------------------------------------------------------------------
|    Tracker name     |      SiamRPN++      |      DaSiamRPN      |         ECO         |
-----------------------------------------------------------------------------------------
|     Video name      | success | precision | success | precision | success | precision |
-----------------------------------------------------------------------------------------
|     Basketball      |  0.423  |   0.555   |  0.677  |   0.865   |  0.653  |   0.800   |
|        Biker        |  0.728  |   0.932   |  0.319  |   0.448   |  0.506  |   0.832   |
|        Bird1        |  0.207  |   0.360   |  0.274  |   0.508   |  0.192  |   0.302   |
|        Bird2        |  0.629  |   0.742   |  0.604  |   0.697   |  0.775  |   0.882   |
|      BlurBody       |  0.823  |   0.879   |  0.759  |   0.767   |  0.713  |   0.894   |
|      BlurCar1       |  0.803  |   0.917   |  0.837  |   0.895   |  0.851  |   0.934   |
|      BlurCar2       |  0.864  |   0.926   |  0.794  |   0.872   |  0.883  |   0.931   |
......
|        Vase         |  0.564  |   0.698   |  0.554  |   0.742   |  0.544  |   0.752   |
|       Walking       |  0.761  |   0.956   |  0.745  |   0.932   |  0.709  |   0.955   |
|      Walking2       |  0.362  |   0.476   |  0.263  |   0.371   |  0.793  |   0.941   |
|        Woman        |  0.615  |   0.908   |  0.648  |   0.887   |  0.771  |   0.936   |
-----------------------------------------------------------------------------------------
(Figures: OTB100 Success Plot, OTB100 Precision Plot)
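
For reference, the Success and Precision columns above follow the standard OTB protocol: Success is the average success rate over IoU thresholds from 0 to 1 (the toolkit's success_overlap in pysot/utils/statistics.py uses thresholds np.arange(0, 1.05, 0.05), as visible in the traceback further down), and Precision is the fraction of frames whose center-location error is at most 20 pixels. A self-contained numpy sketch of both metrics, independent of the toolkit's numba-accelerated implementation:

import numpy as np

def iou(a, b):
    # a, b: (N, 4) arrays of [x, y, w, h] boxes, one row per frame
    x1 = np.maximum(a[:, 0], b[:, 0])
    y1 = np.maximum(a[:, 1], b[:, 1])
    x2 = np.minimum(a[:, 0] + a[:, 2], b[:, 0] + b[:, 2])
    y2 = np.minimum(a[:, 1] + a[:, 3], b[:, 1] + b[:, 3])
    inter = np.maximum(0, x2 - x1) * np.maximum(0, y2 - y1)
    union = a[:, 2] * a[:, 3] + b[:, 2] * b[:, 3] - inter
    return inter / np.maximum(union, 1e-12)

def success_score(gt, pred):
    # area under the success curve: mean success rate over IoU thresholds 0, 0.05, ..., 1.0
    overlaps = iou(gt, pred)
    thresholds = np.arange(0, 1.05, 0.05)
    return np.mean([np.mean(overlaps > t) for t in thresholds])

def precision_score(gt, pred, threshold=20):
    # fraction of frames whose predicted center is within `threshold` pixels of the ground truth
    gt_centers = gt[:, :2] + gt[:, 2:] / 2
    pred_centers = pred[:, :2] + pred[:, 2:] / 2
    dist = np.linalg.norm(gt_centers - pred_centers, axis=1)
    return np.mean(dist <= threshold)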

3. Evaluation on VOT2018-LT

# --dataset_dir: dataset path
# --dataset: dataset name
# --tracker_result_dir: directory containing tracker results
# --trackers: tracker names
# --num: number of evaluation threads
# --vis: whether to draw plots
cd /path/to/pysot-toolkit
python bin/eval.py \
    --dataset_dir /path/to/dataset/root \
    --dataset VOT2018-LT \
    --tracker_result_dir /path/to/tracker/dir \
    --trackers SiamRPN++ MBMD DaSiam-LT \
    --num 4 \
    --vis

# you will see
-------------------------------------------
|Tracker Name| Precision | Recall |  F1   |
-------------------------------------------
| SiamRPN++  |   0.649   | 0.610  | 0.629 |
|    MBMD    |   0.634   | 0.588  | 0.610 |
| DaSiam-LT  |   0.627   | 0.588  | 0.607 |
|    MMLT    |   0.574   | 0.521  | 0.546 |
|  FuCoLoT   |   0.538   | 0.432  | 0.479 |
|  SiamVGG   |   0.552   | 0.393  | 0.459 |
|   SiamFC   |   0.600   | 0.334  | 0.429 |
-------------------------------------------
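
The F1 column is the harmonic mean of the long-term tracking Precision and Recall, so the table can be sanity-checked directly; for example, for SiamRPN++:

# F1 as the harmonic mean of precision and recall,
# checked against the SiamRPN++ row of the table above
def f1(precision, recall):
    return 2 * precision * recall / (precision + recall)

print(round(f1(0.649, 0.610), 3))  # 0.629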

Get Tracking Results of Your Own Tracker

Add pysot-toolkit to your PYTHONPATH

export PYTHONPATH=/path/to/pysot-toolkit:$PYTHONPATH

1. OPE (One Pass Evaluation)

from pysot.datasets import DatasetFactory

dataset = DatasetFactory.create_dataset(name=dataset_name,
                                        dataset_root=dataset_root,
                                        load_img=False)
for video in dataset:
    for idx, (img, gt_bbox) in enumerate(video):
        if idx == 0:
            # init your tracker here
            pass
        else:
            # get tracking result here
            pass
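
To evaluate these results with bin/eval.py, they must be written under --tracker_result_dir. A minimal sketch for the OTB-style datasets, assuming the commonly used layout of one <tracker_name>/<video_name>.txt file containing one comma-separated x,y,w,h line per frame; verify the exact layout and delimiter against the dataset loaders in pysot/datasets before relying on it:

import os

def save_ope_results(result_dir, tracker_name, video_name, pred_bboxes):
    # pred_bboxes: list of [x, y, w, h] predictions, one per frame
    out_dir = os.path.join(result_dir, tracker_name)
    os.makedirs(out_dir, exist_ok=True)
    with open(os.path.join(out_dir, video_name + '.txt'), 'w') as f:
        for bbox in pred_bboxes:
            f.write(','.join(str(v) for v in bbox) + '\n')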

2. Restarted Evaluation

from pysot.datasets import DatasetFactory
from pysot.utils.region import vot_overlap

dataset = DatasetFactory.create_dataset(name=dataset_name,
                                        dataset_root=dataset_root,
                                        load_img=False)
for video in dataset:
    frame_counter = 0
    pred_bboxes = []   # reset the restart state for each video
    for idx, (img, gt_bbox) in enumerate(video):
        if idx == frame_counter:
            # init your tracker here
            pred_bboxes.append(1)
        elif idx > frame_counter:
            # get tracking result here
            pred_bbox = ...  # replace with your tracker's predicted [x, y, w, h]
            overlap = vot_overlap(pred_bbox, gt_bbox, (img.shape[1], img.shape[0]))
            if overlap > 0:
                # continue tracking
                pred_bboxes.append(pred_bbox)
            else:
                # lost target, restart five frames later
                pred_bboxes.append(2)
                frame_counter = idx + 5
        else:
            # frames skipped after a failure
            pred_bboxes.append(0)
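
As with OPE, each video's pred_bboxes list has to be written under --tracker_result_dir before evaluation. A sketch assuming the VOT-style layout commonly paired with this toolkit (<tracker_name>/baseline/<video_name>/<video_name>_001.txt, one line per frame, with the 1/2/0 flags written as bare numbers and tracked frames as comma-separated x,y,w,h); treat the directory names as an assumption and check pysot/datasets for the layout eval.py actually reads:

import os

def save_vot_results(result_dir, tracker_name, video_name, pred_bboxes):
    out_dir = os.path.join(result_dir, tracker_name, 'baseline', video_name)
    os.makedirs(out_dir, exist_ok=True)
    with open(os.path.join(out_dir, video_name + '_001.txt'), 'w') as f:
        for bbox in pred_bboxes:
            if isinstance(bbox, int):
                # 1 = initialization frame, 2 = tracking failure, 0 = skipped frame
                f.write('{}\n'.format(bbox))
            else:
                # tracked frame: x,y,w,h
                f.write(','.join(str(v) for v in bbox) + '\n')
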
Comments
  • Problem about ‘SyntaxError: invalid syntax’ when running 'eval.py'

    I run eval.py and it shows the following error:

    Traceback (most recent call last):
      File "bin/eval.py", line 12, in <module>
        from pysot.evaluation import OPEBenchmark, AccuracyRobustnessBenchmark, EAOBenchmark, F1Benchmark
      File ".\pysot\evaluation\__init__.py", line 1, in <module>
        from .ar_benchmark import AccuracyRobustnessBenchmark
      File ".\pysot\evaluation\ar_benchmark.py", line 107
        row += f'{Fore.RED}{accuracy_str}{Style.RESET_ALL}|'
                                                            ^
    SyntaxError: invalid syntax

    Could anyone help me, please? Is it related to the Python version?

    opened by universefall 9
  • Number of frames discrepancy for video `Tiger1` in OTB100

    The original groundtruth_rect.txt file for video Tiger1 contains 354 frames, but OTB100.json hosted on baidu pan contains only 349 frames for it. Is there an update to the original annotations?

    opened by bkkm78 2
  • How to evaluate trackers?

    In the "Get Tracking Results of Your Own Tracker" part, the tracking results are saved in pred_bboxes. What's next? How do I save these results, and in what format? Thanks.

    opened by TSSSS4 2
  • How can I draw Precision plots? There are only Success plots that I can draw

    I have run this code:

    python bin/eval.py \
        --dataset_dir /path/to/dataset/root \
        --dataset OTB100 \
        --tracker_result_dir /path/to/tracker/dir \
        --trackers SiamRPN++ C-COT DaSiamRPN ECO \
        --num 4 \
        --show_video_level \
        --vis

    Only one picture has shown up (Figure_1).

    opened by EEEEEREN 1
  • A small bug in the README.MD

    In the Download Dataset part, the correct name of the Google service is Google Drive, not Google Driver. Actually, this problem is so small that it doesn't affect understanding.

    opened by fzh0917 1
  • python setup.py build_ext --inplace error during installation

    Hello, when I executed "python setup.py build_ext --inplace", an error appeared: error: Error executing cmd /u /c "C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\VC\Auxiliary\Build\vcvarsall.bat" x86_amd64 && set. Can you tell me the solution?

    opened by Janghhy 0
  • Import error

    Running python bin/eval.py gives:

    from pysot.datasets import VOTDataset, OTBDataset, UAVDataset, LaSOTDataset, NFSDataset, VOTLTDataset
    ImportError: cannot import name 'VOTDataset' from 'pysot.datasets' (/home/tttt/pysot/pysot/datasets/__init__.py)

    I am looking for an option to change the path.

    opened by ThomasBomjan 0
  • How to draw the attribute radar plot on LaSOT

    Hello, how can I draw the attribute radar plot on LaSOT? After adding the line draw_eao(success_ret), I get the error:

    raise ValueError("x and y must have same first dimension, but "
    ValueError: x and y must have same first dimension, but have shapes (8,) and (281, 21)

    opened by hongsheng-Z 0
  • How to eval on NEW dataset

    Thanks for the author's efforts. Could you please tell me what I should do to evaluate algorithm performance on a new dataset similar to UAV123? Can you provide the code that generates the json files? Thank you!

    opened by xyl-507 0
  • numba.core.errors.TypingError: Failed in nopython mode pipeline

    While I try to run eval.py following the instructions, it gives me the numba core errors below:

    multiprocessing.pool.RemoteTraceback:
    """
    Traceback (most recent call last):
      File "/home/dlp/anaconda3/envs/pysot-toolkit/lib/python3.7/multiprocessing/pool.py", line 121, in worker
        result = (True, func(*args, **kwds))
      File "./pysot/evaluation/ope_benchmark.py", line 50, in eval_success
        success_ret_[video.name] = success_overlap(gt_traj, tracker_traj, n_frame)
      File "/home/dlp/anaconda3/envs/pysot-toolkit/lib/python3.7/site-packages/numba/core/dispatcher.py", line 420, in _compile_for_args
        error_rewrite(e, 'typing')
      File "/home/dlp/anaconda3/envs/pysot-toolkit/lib/python3.7/site-packages/numba/core/dispatcher.py", line 361, in error_rewrite
        raise e.with_traceback(None)
    numba.core.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend)
    non-precise type array(pyobject, 0d, C)
    During: typing of argument at ./pysot/utils/statistics.py (104)

    File "pysot/utils/statistics.py", line 104:
    def success_overlap(gt_bb, result_bb, n_frame):
        thresholds_overlap = np.arange(0, 1.05, 0.05)
        ^
    """

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last):
      File "bin/eval.py", line 40, in <module>
        trackers), desc='eval success', total=len(trackers), ncols=100):
      File "/home/dlp/.local/lib/python3.7/site-packages/tqdm/_tqdm.py", line 1032, in __iter__
        for obj in iterable:
      File "/home/dlp/anaconda3/envs/pysot-toolkit/lib/python3.7/multiprocessing/pool.py", line 748, in next
        raise value
    numba.core.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend)
    non-precise type array(pyobject, 0d, C)
    During: typing of argument at ./pysot/utils/statistics.py (104)

    File "pysot/utils/statistics.py", line 104:
    def success_overlap(gt_bb, result_bb, n_frame):
        thresholds_overlap = np.arange(0, 1.05, 0.05)
        ^

    Any idea why this happens? Appreciate your help.

    opened by kuzhang 10
  • The results are not quite correct

    "NOTE we not use gmm to generate low, high, peak value" (from pysot-toolkit/pysot/evaluation/eao_benchmark.py)

    The result evaluated by the toolkit is not quite correct, because the values of "low", "high", and "peak" are fixed, but they should be computed automatically by KDE (GMM). I would appreciate it if the author could update the computation code.

    opened by haishibei 0