The official repo for OC-SORT: Observation-Centric SORT for video multi-object tracking. OC-SORT is simple, online, and robust to occlusion and non-linear motion.

Overview


Observation-Centric SORT (OC-SORT) is a pure motion-model-based multi-object tracker. It aims to improve tracking robustness in crowded scenes and under non-linear object motion. It is designed by recognizing and fixing limitations of the Kalman filter and of SORT. It is flexible to integrate with different detectors and matching modules, such as appearance similarity, and it remains Simple, Online, and Real-time.
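Because the tracker only consumes per-frame detection boxes, it is detector-agnostic. The sketch below illustrates the typical frame-by-frame loop; the import path, constructor argument, and update() signature are assumptions modeled on SORT-style trackers (the actual API in this repo may also expect image-info/size arguments), so treat it as a sketch rather than the exact interface.

    import numpy as np

    # Assumed import path and argument names; check the trackers/ directory for the exact API.
    from trackers.ocsort_tracker.ocsort import OCSort

    tracker = OCSort(det_thresh=0.6)  # keep detections above this confidence (assumed kwarg)

    # Two toy frames of detections in [x1, y1, x2, y2, score] format, as any detector could produce.
    frames = [
        np.array([[100., 100., 150., 200., 0.90],
                  [300., 120., 360., 240., 0.80]]),
        np.array([[104., 102., 154., 203., 0.90],
                  [305., 118., 366., 242., 0.80]]),
    ]

    for dets in frames:
        # The real update() in this codebase may take extra arguments; this is a minimal sketch.
        tracks = tracker.update(dets)
        print(tracks)  # rows like [x1, y1, x2, y2, track_id]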

News

  • [04/27/2022]: Support integration with BYTE and multiple association cost metrics, such as GIoU and CIoU (a GIoU cost sketch follows this list).
  • [04/02/2022]: A preview version is released after an initial cleanup and refactor.
  • [03/27/2022]: The arXiv preprint of OC-SORT is released.
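For reference, a GIoU-based association cost keeps discriminating between boxes even when their IoU is zero, by penalizing the empty area of the smallest enclosing box. Below is a minimal NumPy sketch of a pairwise 1 - GIoU cost; the function name, the [x1, y1, x2, y2] box layout, and the cost convention are illustrative assumptions, not the repository's exact implementation.

    import numpy as np

    def giou_cost(dets, trks):
        # Pairwise (1 - GIoU) cost; rows are boxes in [x1, y1, x2, y2] format (illustrative helper).
        dets = np.asarray(dets, dtype=float)[:, None, :]   # shape (N, 1, 4)
        trks = np.asarray(trks, dtype=float)[None, :, :]   # shape (1, M, 4)

        # Intersection area.
        xx1 = np.maximum(dets[..., 0], trks[..., 0])
        yy1 = np.maximum(dets[..., 1], trks[..., 1])
        xx2 = np.minimum(dets[..., 2], trks[..., 2])
        yy2 = np.minimum(dets[..., 3], trks[..., 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)

        # Union area and IoU.
        area_d = (dets[..., 2] - dets[..., 0]) * (dets[..., 3] - dets[..., 1])
        area_t = (trks[..., 2] - trks[..., 0]) * (trks[..., 3] - trks[..., 1])
        union = area_d + area_t - inter
        iou = inter / np.maximum(union, 1e-9)

        # Smallest enclosing box.
        cx1 = np.minimum(dets[..., 0], trks[..., 0])
        cy1 = np.minimum(dets[..., 1], trks[..., 1])
        cx2 = np.maximum(dets[..., 2], trks[..., 2])
        cy2 = np.maximum(dets[..., 3], trks[..., 3])
        enclose = (cx2 - cx1) * (cy2 - cy1)

        giou = iou - (enclose - union) / np.maximum(enclose, 1e-9)  # GIoU in [-1, 1]
        return 1.0 - giou  # cost in [0, 2]; lower is better

Such a cost matrix can be fed to a standard Hungarian solver (e.g., scipy.optimize.linear_sum_assignment) in place of a plain IoU cost; that is the kind of swap the option above enables.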

Benchmark Performance


| Dataset | HOTA | AssA | IDF1 | MOTA | FP | FN | IDs | Frag |
|---------|------|------|------|------|----|----|-----|------|
| MOT17 (private) | 63.2 | 63.2 | 77.5 | 78.0 | 15,129 | 107,055 | 1,950 | 2,040 |
| MOT17 (public) | 52.4 | 57.6 | 65.1 | 58.2 | 4,379 | 230,449 | 784 | 2,006 |
| MOT20 (private) | 62.4 | 62.5 | 76.4 | 75.9 | 20,218 | 103,791 | 938 | 1,004 |
| MOT20 (public) | 54.3 | 59.5 | 67.0 | 59.9 | 4,434 | 202,502 | 554 | 2,345 |
| KITTI-cars | 76.5 | 76.4 | - | 90.3 | 2,685 | 407 | 250 | 280 |
| KITTI-pedestrian | 54.7 | 59.1 | - | 65.1 | 6,422 | 1,443 | 204 | 609 |
| DanceTrack-test | 55.1 | 38.0 | 54.2 | 89.4 | 114,107 | 139,083 | 1,992 | 3,838 |
| CroHD HeadTrack | 44.1 | - | 62.9 | 67.9 | 102,050 | 164,090 | 4,243 | 10,122 |
  • Results are obtained by reusing detections from previous methods and shared hyper-parameters. Tuning the implementation to each dataset may yield higher performance.

  • The inference speed is ~28 FPS on an RTX 2080Ti GPU. If detections are provided, the OC-SORT association step alone runs at ~700 FPS on a 3.0 GHz Intel i9 CPU.

  • A sample from the DanceTrack-test set is shown below, and more visualizations are available on Google Drive.

Get Started

  • See INSTALL.md for instructions on installing the required components.

  • See GET_STARTED.md for how to get started with OC-SORT.

  • See MODEL_ZOO.md for available YOLOX weights.

  • See DEPLOY.md for deployment support over ONNX, TensorRT and ncnn.

Demo

To run the tracker on a provided demo video from YouTube:

python3 tools/demo_track.py --demo_type video -f exps/example/mot/yolox_dancetrack_test.py -c pretrained/ocsort_dance_model.pth.tar --path videos/dance_demo.mp4 --fp16 --fuse --save_result --out_path demo_out.mp4

Roadmap

We are still actively updating OC-SORT, and we always welcome contributions to make it better for the community. Some high-priority to-dos are listed below:

  • Add more association cost choices: GIoU, CIoU, etc.
  • Support OC-SORT in mmtracking.
  • Add more deployment options and improve the inference speed.
  • Make OC-SORT adaptive to customized detectors.

Acknowledgement and Citation

The codebase is built heavily upon YOLOX, filterpy, and ByteTrack. We thank them for their wonderful work. OC-SORT, filterpy, and ByteTrack are available under the MIT License; YOLOX uses the Apache License 2.0.

If you find this work useful, please consider citing our paper:

@article{cao2022observation,
  title={Observation-Centric SORT: Rethinking SORT for Robust Multi-Object Tracking},
  author={Cao, Jinkun and Weng, Xinshuo and Khirodkar, Rawal and Pang, Jiangmiao and Kitani, Kris},
  journal={arXiv preprint arXiv:2203.14360},
  year={2022}
}
Comments
  • OCSORT + ByteTrack?

    Thanks for the amazing work again!

    After replacing the SORT Kalman filter in ocsort.py with the JDE Kalman filter, I got higher HOTA and faster speed, which may indicate that OC-SORT with the SORT settings can be improved.

    So, do you plan to provide a version of ocsort with BYTE?

    opened by HanGuangXin 16
  • Evaluation DanceTrack

    Hello, I ran the evaluation code "python tools/run_ocsort_dance.py -f exps/example/mot/yolox_dancetrack_val.py -c pretrained/bytetrack_dance_model.pth.tar -b 1 -d 1 --fp16 --fuse --expn /output" and got dancetrack0004.txt with lines like "1,3.0,772.2,309.5,261.9,767.8,-1,-1,-1,-1" and "1,2.0,969.3,414.5,299.7,579.2,-1,-1,-1,-1". What do these numbers mean, and how can I use them?

    opened by iTruffle 15
  • Evaluate KITTI test

    Dear @noahcao, how do you evaluate results on the KITTI test set? I submitted to the KITTI tracking benchmark, but my results are not in the right format. Hope to receive your answer soon. BRs, cuong

    opened by cuonga1cvp 12
  • update ocsort with BYTE

    Update OC-SORT with BYTE in ByteTrack.

    With run_ocsort_dance_BYTE.py, the pretrained model, and the default settings, you can simply get both higher MOTA and HOTA than the original OC-SORT and ByteTrack.

    opened by HanGuangXin 8
  • [Question] Is there currently a way to replace the Yolox detector with another detector?

    I want to try running OC-SORT with a few different detectors (YOLOv5, F-RCNN, CNN) just to experiment and see how well the detectors perform with OC-SORT.

    opened by aelahi23 7
  • demo video does not work

    When I run the demo given in the README, I cannot find the pretrained model "bytetrack_dance_model.pth.tar", so I used "ocsort_dancetrack.pth.tar" given in the model zoo: python tools/demo_track.py --demo_type video -f exps/example/mot/yolox_dancetrack_test.py -c pretrained/bytetrack_dance_model.pth.tar --path videos/dance_demo.mp4 --fp16 --fuse --save_result --out_path demo_out.mp4, but the result has no bounding boxes at all.

    opened by MargeryLab 7
  • AssertionError

    I got errors like those shown in the attached screenshot.

    My environment is: pytorch==1.7.1, python==3.8, cuda==11.0

    and I installed it with: conda install pytorch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 cudatoolkit=11.0 -c pytorch

    opened by iTruffle 6
  • AssertionError in train MOT20

    I can successfully run the evaluation code for MOT17 and MOT20 and the video demo, but I got an error when training on MOT20 with 'python tools/train.py -f exps/example/mot/yolox_x_mix_mot20_ch.py -d 1 -b 48 --fp16 -o -c pretrained/yolox_x.pth' (see the attached screenshot).

    opened by iTruffle 5
  • ModuleNotFoundError: No module named 'yolox'

    I successfully ran the code last week, but it failed today. I strictly followed the README and installed libglib2.0-dev. I want to test the video you provided.

    python3 tools/demo_track.py --demo_type video -f exps/example/mot/yolox_dancetrack_test.py -c pretrained/bytetrack_dance_model.pth.tar --path videos/dance_demo.mp4 --fp16 --fuse --save_result --out_path demo_out.mp4

    And I got an error as shown in the attached screenshot. I also got this error last week and changed the path to solve it, but then I got other errors. I do not want to modify the code. Why does this happen, and what should I do to solve it?

    opened by iTruffle 5
  • speed_direction_batch() at the beginning of tracking

    Thanks for the amazing work!

    But I find something confusing when using OC-SORT. At the beginning of tracking (e.g., frame 2), the inputs to the function speed_direction_batch() are the detections and previous_obs.

    But previous_obs is [-1, -1, -1, -1, -1], which I don't think is normal.

    Is this an issue, or should I just ignore it?

    opened by HanGuangXin 5
  • TypeError: compare_to_groundtruth() got an unexpected keyword argument 'vflag'

    @noahcao Hi! When I run: python3 tools/run_ocsort.py -f exps/example/mot/yolox_x_ablation.py -c pretrained/bytetrack_ablation.pth.tar -b 1 -d 1 --fp16 --fuse --expn MOT17

    the error is:

    File "tools/run_ocsort.py", line 32, in compare_dataframes accs.append(mmp.utils.compare_to_groundtruth(gts[k], tsacc, 'iou', distth=0.5,vflag = vflag)) TypeError: compare_to_groundtruth() got an unexpected keyword argument 'vflag'

    opened by Richard1121 4
  • is it possible to provide MOT20 track results?

    I generated tracking results using OC-SORT, but I am facing a problem when submitting them to the MOTChallenge server: the submission succeeds, but I am unable to get evaluation results. If you provide your results, I can check the issue.

    opened by ssbilakeri 0
  • Ablation study

    Hi, thanks for your great work.

    I've tried to conduct an ablation study of OOS/OCM/OCR on MOT17-val. However, in my experiments, OOS doesn't work.

    I turn off "OOS" by setting orig=True in the class KalmanBoxTracker:

    class KalmanBoxTracker(object):
        """
        This class represents the internal state of individual tracked objects observed as bbox.
        """
        count = 0
    
        def __init__(self, bbox, delta_t=3, orig=False):
            """
            Initialises a tracker using initial bounding box.
    
            """
            orig = True  # <= I added this line
            # define constant velocity model
            ...
    

    And my results are (HOTA/MOTA/IDF1):

    • with OOS: 66.341 / 74.504 / 77.874
    • w/o OOS: 66.442 / 74.665 / 77.864

    Could you tell me if this is because of a mistake on my side or some other reason? Thanks!

    opened by dyhBUPT 0
  • Matlab Wrapper Code Available

    Hi,

    I needed to run OC-SORT from MATLAB with a matrix of detections and get a matrix of tracks back. I wrote a simple wrapper function based on your code and it seems to be working. I needed to make a minor change to the init function of the OCSORT class and I changed the result writing portion as well. I seem to be getting the same results as your original script, so I think it is working fine. Would you like me to share the code with you?

    opened by JLJ19 3