# Visual Tracking by TridentAlign and Context Embedding (TACT)

Test code for "Visual Tracking by TridentAlign and Context Embedding" by Janghoon Choi, Junseok Kwon, and Kyoung Mu Lee.
## Overall Framework
## Results on LaSOT test set

- Link to LaSOT dataset
- Raw results available on Google Drive
## Dependencies

- Ubuntu 18.04
- Python==2.7.17
- numpy==1.16.5
- pytorch==1.3.0
- matplotlib==2.2.4
- opencv==4.1.0.25
- moviepy==1.0.0
- tqdm==4.32.1
## Usage

### Prerequisites
- Download the network weights from Google Drive
- Copy the network weight files `ckpt_res18.tar` and `ckpt_res50.tar` to the `ckpt/` folder
- Choose between `TACT-18` and `TACT-50` by modifying the `cfgs/cfg_test.py` file (default: `TACT-50`), as sketched below
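
The snippet below is a hypothetical sketch of what the backbone selection in `cfgs/cfg_test.py` might look like; the option names `model_name` and `ckpt_path` are assumptions, so check the actual file in the repository for the real settings.

```python
# cfgs/cfg_test.py -- hypothetical excerpt; the real option names may differ.
# Select the model variant and the matching checkpoint copied into ckpt/.
model_name = 'TACT-50'              # or 'TACT-18' for the ResNet-18 variant
ckpt_path = 'ckpt/ckpt_res50.tar'   # use ckpt/ckpt_res18.tar with TACT-18
```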
### To test the tracker on the LaSOT test set

- Download the LaSOT dataset from the link above
- Modify the `cfgs/cfg_test.py` file to point to the local `LaSOTBenchmark` folder path (see the sketch after this list)
- Run `python test_tracker.py`
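
A hypothetical excerpt illustrating the dataset path setting; the variable name `lasot_path` is an assumption, not necessarily the actual setting used in the repository.

```python
# cfgs/cfg_test.py -- hypothetical excerpt; the real variable name may differ.
# Point the test script to the local copy of the LaSOT benchmark.
lasot_path = '/your/data/root/LaSOTBenchmark'
```

Once both the checkpoint and dataset paths are set, running `python test_tracker.py` evaluates the selected model over the LaSOT test sequences.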
### To test the tracker on an arbitrary sequence

- Using the `run_track_seq()` function in `tracker_batch.py`, the tracker can be run on an arbitrary sequence (see the sketch below)
- Provide the function with the following variables:
  - `seq_name`: name of the given sequence
  - `seq_path`: path to the given sequence
  - `seq_imlist`: list of image file names of the given sequence
  - `seq_gt`: ground-truth box annotations of the given sequence in `[x_min, y_min, width, height]` format (may contain only the annotation for the initial frame)
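
Below is a minimal sketch of preparing these inputs and calling the tracker. The exact interface of `run_track_seq()` (free function vs. method of a tracker object, argument order) is not spelled out here, so the import and call are assumptions; consult `tracker_batch.py` for the actual signature.

```python
import os
import numpy as np

from tracker_batch import run_track_seq  # assumption: importable as a free function

# Name of and path to the custom sequence (frames stored as image files).
seq_name = 'my_sequence'
seq_path = '/path/to/my_sequence'

# Sorted list of image file names belonging to the sequence.
seq_imlist = sorted(f for f in os.listdir(seq_path)
                    if f.endswith(('.jpg', '.png')))

# Ground-truth boxes in [x_min, y_min, width, height] format;
# only the first-frame annotation is needed to initialize the tracker.
seq_gt = np.array([[100.0, 150.0, 80.0, 60.0]])

run_track_seq(seq_name, seq_path, seq_imlist, seq_gt)
```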
## Raw results on other datasets

- Link to raw results on Google Drive
- Results for the test sets of LaSOT, OxUvA, GOT-10k, and TrackingNet
## Citation
If you find our work useful for your research, please consider citing the following paper:
```
@article{choi2020tact,
  title={Visual tracking by tridentalign and context embedding},
  author={Choi, Janghoon and Kwon, Junseok and Lee, Kyoung Mu},
  journal={arXiv preprint arXiv:2007.06887},
  year={2020}
}
```