The official repo for OC-SORT: Observation-Centric SORT for video multi-object tracking. OC-SORT is simple, online, and robust to occlusion and non-linear motion.

Overview

OC-SORT


Observation-Centric SORT (OC-SORT) is a pure motion-model-based multi-object tracker. It aims to improve tracking robustness in crowded scenes and for objects in non-linear motion. It was designed by identifying and fixing limitations of the Kalman filter and SORT. It can flexibly integrate with different detectors and matching modules, such as appearance similarity. It remains simple, online, and real-time.
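
As background, SORT-style trackers (OC-SORT included) associate per-frame detections with track predictions by maximizing IoU under a linear-assignment solver. Below is a minimal sketch of that association step, assuming axis-aligned [x1, y1, x2, y2] boxes; it is illustrative only, not the repository's implementation, and the function names are made up here.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """IoU of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(dets, tracks, iou_threshold=0.3):
    """Match detections to track predictions by maximizing total IoU."""
    if len(dets) == 0 or len(tracks) == 0:
        return []
    cost = np.zeros((len(dets), len(tracks)))
    for i, d in enumerate(dets):
        for j, t in enumerate(tracks):
            cost[i, j] = -iou(d, t)  # negate: the solver minimizes cost
    rows, cols = linear_sum_assignment(cost)
    # keep only matches whose IoU clears the threshold
    return [(i, j) for i, j in zip(rows, cols) if -cost[i, j] >= iou_threshold]
```

OC-SORT's actual association additionally uses observation-centric cues (e.g. velocity direction consistency); the sketch above covers only the baseline IoU matching it builds on.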

News

  • [04/27/2022]: Support integration with BYTE and multiple cost metrics, such as GIoU, CIoU, etc.
  • [04/02/2022]: A preview version is released after a primary cleanup and refactor.
  • [03/27/2022]: The arxiv preprint of OC-SORT is released.

Benchmark Performance


Dataset HOTA AssA IDF1 MOTA FP FN IDs Frag
MOT17 (private) 63.2 63.2 77.5 78.0 15,129 107,055 1,950 2,040
MOT17 (public) 52.4 57.6 65.1 58.2 4,379 230,449 784 2,006
MOT20 (private) 62.4 62.5 76.4 75.9 20,218 103,791 938 1,004
MOT20 (public) 54.3 59.5 67.0 59.9 4,434 202,502 554 2,345
KITTI-cars 76.5 76.4 - 90.3 2,685 407 250 280
KITTI-pedestrian 54.7 59.1 - 65.1 6,422 1,443 204 609
DanceTrack-test 55.1 38.0 54.2 89.4 114,107 139,083 1,992 3,838
CroHD HeadTrack 44.1 - 62.9 67.9 102,050 164,090 4,243 10,122
  • Results are obtained by reusing detections from previous methods and shared hyper-parameters. Tuning the implementation per dataset may yield higher performance.

  • The inference speed is ~28FPS on an RTX 2080Ti GPU. If detections are provided, the OC-SORT association step alone runs at 700FPS on a 3.0GHz Intel i9 CPU.

  • A sample from the DanceTrack-test set is shown below, and more visualizations are available on Google Drive.
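
For readers checking the table, MOTA combines the three error counts listed there. A quick sketch of the formula, using the MOT17 (private) counts from the table and a hypothetical ground-truth box total (GT counts are not listed above):

```python
def mota(num_fp, num_fn, num_ids, num_gt):
    """MOTA = 1 - (FP + FN + IDSW) / GT, as defined by the MOTChallenge benchmark."""
    return 1.0 - (num_fp + num_fn + num_ids) / num_gt

# e.g. with the MOT17 (private) error counts from the table and an
# illustrative GT box count of 564,228 (an assumption, not from this README):
score = mota(15129, 107055, 1950, 564228)
print(f"{100 * score:.1f}")  # close to the 78.0 reported above
```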

Get Started

  • See INSTALL.md for instructions of installing required components.

  • See GET_STARTED.md for how to get started with OC-SORT.

  • See MODEL_ZOO.md for available YOLOX weights.

  • See DEPLOY.md for deployment support over ONNX, TensorRT and ncnn.

Demo

To run the tracker on the provided demo video from YouTube:

python3 tools/demo_track.py --demo_type video -f exps/example/mot/yolox_dancetrack_test.py -c pretrained/ocsort_dance_model.pth.tar --path videos/dance_demo.mp4 --fp16 --fuse --save_result --out_path demo_out.mp4

Roadmap

We are still actively updating OC-SORT, and we always welcome contributions to make it better for the community. Some high-priority to-dos are below:

  • Add more association cost choices: GIoU, CIoU, etc.
  • Support OC-SORT in mmtracking.
  • Add more deployment options and improve the inference speed.
  • Make OC-SORT adaptive to customized detectors.
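
The GIoU cost on the roadmap can be sketched from the standard formula (Rezatofighi et al., 2019). This is a reference implementation under the usual [x1, y1, x2, y2] box convention, not this repository's code; an association module would typically use 1 - GIoU as the cost:

```python
def giou(a, b):
    """Generalized IoU for two [x1, y1, x2, y2] boxes.
    Ranges over (-1, 1]; unlike IoU, it penalizes non-overlapping boxes
    by the empty area of their smallest enclosing box."""
    # plain IoU terms
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    iou = inter / (union + 1e-9)
    # smallest axis-aligned box C enclosing both a and b
    cx1, cy1 = min(a[0], b[0]), min(a[1], b[1])
    cx2, cy2 = max(a[2], b[2]), max(a[3], b[3])
    area_c = (cx2 - cx1) * (cy2 - cy1)
    return iou - (area_c - union) / (area_c + 1e-9)
```

For two disjoint boxes GIoU goes negative, which gives the assignment solver a useful gradient of "how far apart" they are, something plain IoU (flat at 0) cannot express.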

Acknowledgement and Citation

The codebase is built upon YOLOX, filterpy, and ByteTrack; we thank them for their wonderful work. OC-SORT, filterpy, and ByteTrack are available under the MIT License, while YOLOX uses the Apache License 2.0.

If you find this work useful, please consider citing our paper:

@article{cao2022observation,
  title={Observation-Centric SORT: Rethinking SORT for Robust Multi-Object Tracking},
  author={Cao, Jinkun and Weng, Xinshuo and Khirodkar, Rawal and Pang, Jiangmiao and Kitani, Kris},
  journal={arXiv preprint arXiv:2203.14360},
  year={2022}
}
Issues
  • Evaluation DanceTrack

    Evaluation DanceTrack

    Hello, I ran the evaluation code "python tools/run_ocsort_dance.py -f exps/example/mot/yolox_dancetrack_val.py -c pretrained/bytetrack_dance_model.pth.tar -b 1 -d 1 --fp16 --fuse --expn /output" and got dancetrack0004.txt with lines like: "1,3.0,772.2,309.5,261.9,767.8,-1,-1,-1,-1 1,2.0,969.3,414.5,299.7,579.2,-1,-1,-1,-1". What do these numbers mean, and how can I use them?

    opened by iTruffle 15
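
For context on the question above: each line of such an output file follows the MOTChallenge text format (frame, track id, box left, box top, box width, box height, confidence, then three unused -1 placeholders for 2D tracking). A small illustrative parser, not part of the repository:

```python
def parse_mot_line(line):
    """Parse one MOTChallenge-format result line into a dict.
    Fields: frame, id, bb_left, bb_top, bb_width, bb_height, conf, x, y, z
    (the trailing -1 fields are unused placeholders in 2D tracking)."""
    frame, tid, x, y, w, h, score, *_ = (float(v) for v in line.split(","))
    return {"frame": int(frame), "id": int(tid),
            "box_tlwh": (x, y, w, h), "score": score}

rec = parse_mot_line("1,3.0,772.2,309.5,261.9,767.8,-1,-1,-1,-1")
print(rec["frame"], rec["id"], rec["box_tlwh"])
```

So the quoted line means: in frame 1, track 3 has a box with top-left corner (772.2, 309.5), width 261.9, and height 767.8.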
  • update ocsort with BYTE

    update ocsort with BYTE

    Update OC-SORT with BYTE in ByteTrack.

    With run_ocsort_dance_BYTE.py, the pretrained model, and default settings, you can get both higher MOTA and HOTA than the original OC-SORT and ByteTrack.


    opened by HanGuangXin 8
  • demo video does not work

    demo video does not work

    When I ran the demo given in the README, I could not find the pretrained model "bytetrack_dance_model.pth.tar", so I used "ocsort_dancetrack.pth.tar" from the model zoo: python tools/demo_track.py --demo_type video -f exps/example/mot/yolox_dancetrack_test.py -c pretrained/bytetrack_dance_model.pth.tar --path videos/dance_demo.mp4 --fp16 --fuse --save_result --out_path demo_out.mp4, but the result has no bounding boxes.

    opened by MargeryLab 7
  • AssertionError

    AssertionError

    I got errors like these: image

    My environment: pytorch==1.7.1, python==3.8, cuda==11.0

    I installed it with: conda install pytorch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 cudatoolkit=11.0 -c pytorch

    opened by iTruffle 6
  • ModuleNotFoundError: No module named 'yolox'

    ModuleNotFoundError: No module named 'yolox'

    I successfully ran the code last week, but it failed today. I strictly followed the README and installed libglib2.0-dev. I want to test the video you provided.

    python3 tools/demo_track.py --demo_type video -f exps/example/mot/yolox_dancetrack_test.py -c pretrained/bytetrack_dance_model.pth.tar --path videos/dance_demo.mp4 --fp16 --fuse --save_result --out_path demo_out.mp4

    And I got an error like this: image. I also got this error last week and changed the path to solve it, but then I got other errors. I do not want to modify the code. Why does this happen? What should I do to solve it? image

    opened by iTruffle 5
  • speed_direction_batch() at the beginning of tracking

    speed_direction_batch() at the beginning of tracking

    Thanks for the amazing work!

    But I found something confusing when using OC-SORT. At the beginning of tracking (e.g. frame 2), the inputs to the function speed_direction_batch() are the detections and previous_obs.

    But previous_obs is [-1, -1, -1, -1, -1], which I don't think is normal.

    Is this an issue, or should I just ignore it?

    opened by HanGuangXin 5
  • TrackEval for mot_challenge

    TrackEval for mot_challenge

    Hello, thank you very much for your excellent work. I used TrackEval to evaluate mot_challenge, but TrackEval only has python files and samples for the train sets, not for the test sets. Could I get your python project for evaluating mot_challenge?

    opened by iTruffle 4
  • Problem with interpolation.py

    Problem with interpolation.py

    Hi,

    I've been trying to use dti() in interpolation.py with the results I get from demo_track.py, but I think this function has a problem: on line 78 of interpolation.py, I believe n_frame = tracklet.shape[0] will always return 1, so I don't think the following code is ever run. When I have tried to change this, e.g. n_frame = int(tracklet[:,0]), I run into different problems; e.g. I think the score threshold doesn't work because in demo_track.py the score is always written as 1.0: results.append(f"{frame_id},{tid},{tlwh[0]:.2f},{tlwh[1]:.2f},{tlwh[2]:.2f},{tlwh[3]:.2f},1.0,-1,-1,-1\n").

    If you could tell me how to use this code with the text files output by demo_track, I would be very grateful.

    Thanks

    opened by ajwl27 4
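
On the interpolation inputs discussed above: a dti()-style routine operates on one tracklet at a time, so MOT-format result rows first need to be grouped by track id. A minimal illustrative helper (a hypothetical sketch, not the repository's code):

```python
import numpy as np

def group_by_track(results):
    """Split a MOT-format result array (rows: frame, id, x, y, w, h, score, ...)
    into per-track sub-arrays, so that an interpolation routine can be fed
    one tracklet at a time."""
    results = np.asarray(results, dtype=float)
    tracklets = {}
    for tid in np.unique(results[:, 1]):
        tracklets[int(tid)] = results[results[:, 1] == tid]
    return tracklets

tracks = group_by_track([[1, 3, 0, 0, 5, 5, 1.0],
                         [2, 3, 1, 1, 5, 5, 1.0],
                         [1, 2, 9, 9, 4, 4, 1.0]])
print(sorted(tracks), tracks[3].shape[0])  # [2, 3] 2
```

With this grouping, each sub-array has one row per frame of the tracklet, so its shape[0] is the tracklet length rather than 1.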
  • Why multiply scores?

    Why multiply scores?

    Hello author, thank you for open-sourcing such an excellent tracker. While reading your code, I noticed that when computing angle_diff_cost, you multiply it by the scores. I don't quite understand this operation; could you explain it when convenient? Also, when handling the IoU distance, you rescale it to [0, 1]. The range of diff_angle is [-0.5, 0.5], and after multiplying by the coefficient vdc_weight 0.2 it becomes [-0.1, 0.1]. So the final distance between two boxes lies in the interval [-0.1, 1.1]; is my understanding correct? Looking forward to your reply, thanks!
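
The interval arithmetic in the question above is easy to verify numerically. A quick illustrative check (vdc_weight = 0.2 and the [-0.5, 0.5] diff_angle range are the values quoted in the issue, not independently confirmed here):

```python
vdc_weight = 0.2
# diff_angle is an angle difference normalized into [-0.5, 0.5]
angle_term_min, angle_term_max = -0.5 * vdc_weight, 0.5 * vdc_weight
# the rescaled IoU similarity lies in [0, 1]
total_min = 0.0 + angle_term_min
total_max = 1.0 + angle_term_max
print(total_min, total_max)  # -0.1 1.1
```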

    opened by majx1997 4
  • Docker build failed...

    Docker build failed...

    When I tried [2. Docker build] docker build -t ocsort:latest ., the following problem occurred.

    Processing triggers for sgml-base (1.29.1) ...

    WARNING: apt does not have a stable CLI interface. Use with caution in scripts.

    Removing intermediate container 67ee1859253d ---> 686356aae519 Step 6/11 : RUN git clone https://github.com/noahcao/OC_SORT && cd OC_SORT && git checkout 3434c5e8bc6a5ae8ad530528ba8d9a431967f237 && mkdir -p YOLOX_outputs/yolox_x_mix_det/track_vis && sed -i 's/torch>=1.7/torch==1.9.1+cu111/g' requirements.txt && sed -i 's/torchvision==0.10.0/torchvision==0.10.1+cu111/g' requirements.txt && sed -i "s/'cuda'/0/g" tools/demo_track.py && pip3 install pip --upgrade && pip3 install -r requirements.txt -f https://download.pytorch.org/whl/torch_stable.html && python3 setup.py develop && pip3 install cython && pip3 install 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI' && pip3 install cython_bbox gdown && ldconfig && pip cache purge ---> Running in 66120b5d54d0

    Cloning into 'OC_SORT'... fatal: reference is not a tree: 3434c5e8bc6a5ae8ad530528ba8d9a431967f237

    The command '/bin/sh -c git clone https://github.com/noahcao/OC_SORT && cd OC_SORT && git checkout 3434c5e8bc6a5ae8ad530528ba8d9a431967f237 && mkdir -p YOLOX_outputs/yolox_x_mix_det/track_vis && sed -i 's/torch>=1.7/torch==1.9.1+cu111/g' requirements.txt && sed -i 's/torchvision==0.10.0/torchvision==0.10.1+cu111/g' requirements.txt && sed -i "s/'cuda'/0/g" tools/demo_track.py && pip3 install pip --upgrade && pip3 install -r requirements.txt -f https://download.pytorch.org/whl/torch_stable.html && python3 setup.py develop && pip3 install cython && pip3 install 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI' && pip3 install cython_bbox gdown && ldconfig && pip cache purge' returned a non-zero code: 128

    If you have encountered a similar problem or know a solution, please comment.

    opened by Seoung-wook 4
  • which version of gcc used?

    which version of gcc used?

    When I run python setup.py develop, a problem occurs: g++: error: /media/peng/Data/Myproject/Chapter2-Track/code/run/OC_SORT-master/build/temp.linux-x86_64-3.6/media/peng/Data/Myproject/Chapter2-Track/code/run/OC_SORT-master/yolox/layers/csrc/vision.o: No such file or directory g++: error: /media/peng/Data/Myproject/Chapter2-Track/code/run/OC_SORT-master/build/temp.linux-x86_64-3.6/media/peng/Data/Myproject/Chapter2-Track/code/run/OC_SORT-master/yolox/layers/csrc/cocoeval/cocoeval.o: No such file or directory error: command 'g++' failed with exit status 1

    I used gcc 5.4 version. I haven't solved this problem.

    opened by echo-sky 4
  • OC_Sort Vs. ByteTracker Inference Speed

    OC_Sort Vs. ByteTracker Inference Speed

    Hello @noahcao,

    Thanks for sharing the code. I have a question regarding the speed of OC-SORT. I'm using YOLOv5 as the object detector for both the OC-SORT and ByteTrack tracking algorithms with the same settings. However, the whole per-frame process (detection and tracking) took ~14ms with ByteTrack and ~45ms with OC-SORT. How can I speed up OC-SORT? (Based on your paper, OC-SORT should be faster than ByteTrack.)

    Thanks!

    opened by Rm1n90 0
  • Performance improvement with Online smoothing (OOS)

    Performance improvement with Online smoothing (OOS)

    First of all, thank you for providing a valuable contribution to the Computer Vision community. It was interesting to read your paper and also thank you for providing your implementation. I have a few doubts to clarify,

    1. From your paper in Table 7, you show that OOS strategy does not show much improvement for DanceTrack and a significant boost in HOTA for MOT17. Could you provide a reasoning for this? I am trying to understand the impact of the three strategies you provided (OOS, OCM, OCR).
    2. I was able to visually understand the impacts of OCM and OCR by removing those components and evaluating on DanceTrack, but I could not do the same for OOS: I did not see any visual difference when using OOS or not, though I understand its impact intuitively. I was wondering whether you have example videos where the impact of OOS is visible.
    3. What are your thoughts when we use a less powerful detector? I tried using mobilenet+ssd (tflite version) from TFHub and the performance was not as expected. It was also the case for SORT. What are your suggestions in cases where the detector is not very powerful?
    opened by AbinayaKumar25 0
  • TypeError: compare_to_groundtruth() got an unexpected keyword argument 'vflag'

    TypeError: compare_to_groundtruth() got an unexpected keyword argument 'vflag'

    @noahcao Hi! when I run : python3 tools/run_ocsort.py -f exps/example/mot/yolox_x_ablation.py -c pretrained/bytetrack_ablation.pth.tar -b 1 -d 1 --fp16 --fuse --expn MOT17

    the error is :

    File "tools/run_ocsort.py", line 32, in compare_dataframes accs.append(mmp.utils.compare_to_groundtruth(gts[k], tsacc, 'iou', distth=0.5,vflag = vflag)) TypeError: compare_to_groundtruth() got an unexpected keyword argument 'vflag'

    opened by Richard1121 0
  • Inference speed

    Inference speed

    @noahcao Hi! I set self.data_num_workers=4, but it doesn't seem to take effect when running run_ocsort_dance.py (evaluating on dancetrack_val, a large dataset), while run_ocsort.py (evaluating on mot17_val_half, a small dataset) is very fast. Why is this? How can I fix it?

    opened by yanghaibin-cool 0
  • AssertionError: plz provide exp file or exp name.

    AssertionError: plz provide exp file or exp name.

    When I tried python3 tools/demo_track.py video -f exps/example/mot/yolox_s_mix_det.py --trt --save_result, the following error occurred.

    Traceback (most recent call last): File "tools/demo_track.py", line 280, in exp = get_exp(args.exp_file, args.name) File "/workspace/OC_SORT/yolox/exp/build.py", line 47, in get_exp assert ( AssertionError: plz provide exp file or exp name.

    Forcibly entering an exp name did not solve the problem, e.g. args.exp_file = '~~~~.pth'.

    If anyone knows a solution, please share.

    opened by Seoung-wook 2
Owner
Jinkun Cao