This is the repository for our paper SimpleTrack: Understanding and Rethinking 3D Multi-object Tracking

Overview

SimpleTrack

This is the repository for our paper SimpleTrack: Understanding and Rethinking 3D Multi-object Tracking. We are still writing the documentation and cleaning up the code. However, all of our code is already on the dev branch, so feel free to check it out if you need to dig in right away. We will try our best to get everything ready as soon as possible.

If you find our paper or code useful, please consider citing us:

@article{pang2021simpletrack,
    title={{SimpleTrack: Understanding and Rethinking 3D Multi-object Tracking}},
    author={Pang, Ziqi and Li, Zhichao and Wang, Naiyan},
    journal={arXiv preprint arXiv:2111.09621},
    year={2021}
}
Comments
  • Evaluation result

    Hi, thank you for sharing your code. I have run the code on the full Waymo validation set using the detections you sent me (thanks for that), and evaluated with the Waymo evaluation metrics. The result I got is shown below (screenshot: 2022-03-03 09-54-17). But in the paper the L2 vehicle MOTA is around 0.56, which is about 3.5% higher than my run (screenshot: 2022-03-03 09-59-18). Can I ask whether the released code uses the same parameter settings as the paper, and what evaluation method you are using? Thanks again for all your work and help.
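
    For anyone reproducing this: my assumption (not confirmed in this thread) is that Waymo numbers are computed with the official metrics tool shipped in the waymo-open-dataset repo, which compares a prediction .bin against the ground-truth .bin; something along these lines, with hypothetical paths:

        # run from a waymo-open-dataset checkout; ${pred_bin} and ${gt_bin} are placeholders
        bazel run //waymo_open_dataset/metrics/tools:compute_tracking_metrics_main -- ${pred_bin} ${gt_bin}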

    opened by wenyiwangbst 8
  • Run on own data or Kitti

    Hi, thanks for your great work! @winstywang @ziqipang

    I am trying to run the tracker on my own dataset and on the KITTI dataset. My detection results output "X, Y, Z, H, W, L, theta", and so does KITTI.

    As I see in your code, MOTModel consumes the detections and returns tracking results: https://github.com/TuSimple/SimpleTrack/blob/main/mot_3d/mot.py

    How can I run on my own data when the detections contain only "X, Y, Z, H, W, L, theta"?

    Also, how should I understand the use of ego motion in the code?

    Thanks.
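
    For others with the same question, a minimal sketch of the usual missing step: trackers of this kind work in world coordinates, so ego-frame detections are first transformed with the per-frame ego pose. The 4x4 pose matrix and the [x, y, z, h, w, l, theta] layout below are illustrative assumptions, not the repo's verified API:

        import numpy as np

        def ego_to_world(det, ego_pose):
            # det: [x, y, z, h, w, l, theta] in the ego (vehicle) frame
            # ego_pose: 4x4 homogeneous transform from the ego frame to the world frame
            x, y, z, h, w, l, theta = det
            center = ego_pose @ np.array([x, y, z, 1.0])          # rotate + translate the center
            ego_yaw = np.arctan2(ego_pose[1, 0], ego_pose[0, 0])  # ego heading in the world frame
            return np.array([center[0], center[1], center[2], h, w, l, theta + ego_yaw])

        # hypothetical single-frame usage with an identity ego pose
        dets = np.array([[10.0, 2.0, 0.5, 1.6, 1.9, 4.5, 0.3]])
        world_dets = np.array([ego_to_world(d, np.eye(4)) for d in dets])

    This is also why the code needs ego motion at all: without the pose transform, a static object appears to move whenever the ego vehicle moves, which breaks the motion model.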

    opened by russellyq 6
  • Want to share with you the result on the nuScenes validation set here. I have not changed any setting or configuration. CenterPoint input, 2 Hz.

    Per-class results:

                  AMOTA  AMOTP  RECALL  MOTAR  GT     MOTA   MOTP   MT    ML   FAF   TP     FP    FN     IDS   FRAG  TID   LGD
    bicycle       0.414  1.161  0.478   0.846  1993   0.400  0.160  39    67   10.0  943    145   1041   9     8     1.59  2.02
    bus           0.721  0.766  0.755   0.957  2112   0.718  0.388  60    13   4.3   1584   68    518    10    17    1.40  1.91
    car           0.637  0.828  0.700   0.935  58317  0.636  0.241  1714  835  44.6  39656  2564  17504  1157  1019  0.97  1.56
    motorcycle    0.524  1.013  0.583   0.907  1977   0.516  0.243  36    25   7.7   1124   104   824    29    29    1.69  2.35
    pedestrian    0.688  0.737  0.773   0.873  25423  0.654  0.291  983   292  55.4  19049  2424  5778   596   466   0.63  1.14
    trailer       0.384  1.314  0.506   0.711  2425   0.352  0.526  44    56   32.5  1201   347   1199   25    31    1.62  2.45
    truck         0.475  1.104  0.554   0.822  9650   0.446  0.323  149   192  25.1  5238   930   4300   112   120   1.70  2.67

    Aggregated results:

    AMOTA   0.549
    AMOTP   0.989
    RECALL  0.621
    MOTAR   0.865
    GT      14556
    MOTA    0.532
    MOTP    0.310
    MT      3025
    ML      1480
    FAF     25.7
    TP      68795
    FP      6582
    FN      31164
    IDS     1938
    FRAG    1690
    TID     1.37
    LGD     2.01

    opened by BaiLiping 5
  • Error during exporting detection

    Hi, I use CenterPoint to generate a .bin output as well. I tried to use detection.py to export the detection information, but it returns an error while saving. Looking at the code, it seems frame_number is always None, so ts_info[segment_name][_j] is never equal to the current object's time_stamp. Can I ask how to solve this problem? Thank you so much for making the code public and for helping. (screenshot: 2022-02-18 17-43-29)

    opened by wenyiwangbst 4
  • threshold_high is never used in NMS?

    Hi Ziqi,

    I have one simple question regarding the NMS. Because threshold_high is set to 1.0, it seems that the block of code below is never reached: https://github.com/TuSimple/SimpleTrack/blob/3b44d6d197b06501f01b939e2f1da764e44ac5dd/mot_3d/preprocessing/nms.py#L47-L71 Could you please explain a bit more how and where this code is useful?

    Thanks!
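
    For context, a generic sketch of the two-threshold NMS pattern such a parameter usually enables (an illustrative reconstruction under my own assumptions, not the repo's actual logic): overlaps at or above threshold_high are merged into the kept box, while overlaps between threshold_low and threshold_high are suppressed outright. With threshold_high = 1.0, no pair of distinct boxes reaches the merge branch, so that path would indeed be dead:

        import numpy as np

        def two_threshold_nms(boxes, scores, overlap_fn, thres_low=0.1, thres_high=1.0):
            # boxes: (N, D) array of box parameters; scores: (N,) confidences
            # overlap_fn(a, b) -> scalar overlap such as IoU (assumed given)
            boxes = boxes.astype(float).copy()
            alive = np.ones(len(boxes), dtype=bool)
            order = np.argsort(scores)[::-1]  # process highest-confidence boxes first
            keep = []
            for i in order:
                if not alive[i]:
                    continue
                keep.append(i)
                for j in order:
                    if j == i or not alive[j]:
                        continue
                    ov = overlap_fn(boxes[i], boxes[j])
                    if ov >= thres_high:
                        # merge branch: naive confidence-weighted average of the parameters
                        # (unreachable when thres_high == 1.0 and overlaps stay below 1)
                        w_i, w_j = scores[i], scores[j]
                        boxes[i] = (w_i * boxes[i] + w_j * boxes[j]) / (w_i + w_j)
                        alive[j] = False
                    elif ov >= thres_low:
                        alive[j] = False  # ordinary NMS suppression
            return keep, boxes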

    opened by xinshuoweng 3
  • The asso_thres in waymo config file is different to the value in paper

    Hi, in the paper asso_thres is set to -0.5. However, I find the value set to 1.5 in the config file. Is there any information I'm missing?

        asso_thres:
          iou: 0.9
          giou: 1.5
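
    One possible reconciliation (my assumption, not confirmed in this thread): if the code thresholds the association distance d = 1 - GIoU rather than GIoU itself, the two numbers agree:

        d = 1 - GIoU < 1.5   <=>   GIoU > 1 - 1.5 = -0.5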

    opened by sjtuljw520 2
  • constant velocity model

    Thanks for your wonderful work! Do you have the code for the constant velocity model on the nuScenes dataset? I tried to implement the constant velocity model but cannot reproduce the results. Is there any preprocessing of the velocity or the time stamps? Thank you very much!
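
    For reference, a minimal sketch of such a predictor (my assumptions: the detector provides a per-box ground-plane velocity (vx, vy) in m/s, and nuScenes timestamps are in microseconds, so the gap must be converted to seconds):

        import numpy as np

        def predict_constant_velocity(box, velocity, t_prev_us, t_curr_us):
            # box: [x, y, z, theta, l, w, h]; velocity: (vx, vy) in m/s
            dt = (t_curr_us - t_prev_us) * 1e-6  # microseconds -> seconds
            out = np.asarray(box, dtype=float).copy()
            out[0] += velocity[0] * dt
            out[1] += velocity[1] * dt
            return out

        # hypothetical usage across one 2 Hz keyframe gap (nominally 0.5 s)
        box = [10.0, 5.0, 0.8, 0.3, 4.5, 1.9, 1.6]
        print(predict_constant_velocity(box, (2.0, -1.0), 0, 500_000))

    A common pitfall is hard-coding dt = 0.5 s instead of using the true timestamp difference; nuScenes keyframe gaps are only nominally 0.5 s.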

    opened by ifzhang 2
  • Question about config on nuscenes

    Hello, it seems that the data association is "one-stage", using only the high confidence threshold, according to the config file for nuScenes (the redundancy mode is the default). Does this mean "one-stage" data association performs best on nuScenes, rather than the "two-stage" association used on the Waymo dataset? Could you explain why? Is it related to the frame rate of the datasets? Thank you.

    opened by liuchenxv 2
  • Ground Truth Information

    Hi, first of all, thank you for sharing your code.

    I have been trying to preprocess the WOD data, but I am stuck on the second step (decoding the ground truth), where I get the following error:

        Traceback (most recent call last):
          File "gt_bin_decode.py", line 129, in <module>
            main(args.file_path, out_folder, args.data_folder)
          File "gt_bin_decode.py", line 81, in main
            segment_name = segment_name_list[val_index]
        TypeError: list indices must be integers or slices, not NoneType

    Any idea what it could be? I tried to track the error down, and I think it may have to do with the .json files somehow.

    opened by BrunoComCue 2
  • How to get the json format detection files of NuScenes

    nuScenes

    1. Preprocessing (image attached)

    2. Detection:

        python detection.py --raw_data_folder ${raw_data_dir} --data_folder ${data_dir_2hz} --det_name ${name} --file_path ${file_path} --mode 2hz --velo

       How can I get the detection file (in the code, the default is val.json)?

    3. SimpleTrack/docs/nuScenes.md says:

        python tools/nuscenes_result_creation.py \
            --name SimpleTrack2Hz \
            --name result_folder ${nuscenes_result_dir} \
            --data_folder ${nuscenes2hz_data_dir}

       which I think should be:

        python tools/nuscenes_result_creation.py \
            --name SimpleTrack2Hz \
            --result_folder ${nuscenes_result_dir} \
            --data_folder ${nuscenes2hz_data_dir}

    4. How do I evaluate on nuScenes?

    Looking forward to your help!
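
    On question 4: my assumption (based on the nuscenes-devkit, not this repo's docs) is that evaluation goes through the devkit's tracking evaluator, pointed at the tracking-results JSON:

        python -m nuscenes.eval.tracking.evaluate ${results_json} \
            --eval_set val --dataroot ${nuscenes_root} --version v1.0-trainval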
    opened by liangchunlan 2
  • Where to download ground truth .bin file

    Hi, I downloaded the validation_ground_truth_objects_gt.bin file from Waymo, but when I try to decode the ground truth .bin file I get the following error:

        Traceback (most recent call last):
          File "gt_bin_decode.py", line 129, in <module>
            main(args.file_path, out_folder, args.data_folder)
          File "gt_bin_decode.py", line 81, in main
            segment_name = segment_name_list[val_index]
        TypeError: list indices must be integers or slices, not NoneType

    opened by wenyiwangbst 2
Owner

TuSimple
The Future of Trucking

Related projects

Object Detection and Multi-Object Tracking
Bobby Chen 1.6k Jan 4, 2023

TSDF++: A Multi-Object Formulation for Dynamic Object Tracking and Reconstruction
TSDF++ is a novel multi-object TSDF formulation that can encode mult…
ETHZ ASL 130 Dec 29, 2022

Code for our CVPR 2021 paper "Rethinking Style Transfer: From Pixels to Parameterized Brushstrokes"
Project page | Paper | Colab | Colab for Drawing App…
CompVis Heidelberg 153 Jan 4, 2023

Multi-Object Tracking in Satellite Videos with Graph-Based Multi-Task Modeling
TGraM, by Qibin He, Xian Sun, Zhiyuan Yan, Beibei Li, Kun Fu…
Qibin He 6 Nov 25, 2022

Python package for multiple object tracking research with a focus on laboratory animal tracking
motutils: loads MOTChallenge CSV, sleap…
Matěj Šmíd 2 Sep 5, 2022

This repository is an official implementation of the paper MOTR: End-to-End Multiple-Object Tracking with TRansformer.
348 Jan 7, 2023

Code for our TKDE paper "Understanding WeChat User Preferences and "Wow" Diffusion"
wechat-wow-analysis, by Fanjin Zhang, Jie Tang, Xueyi Liu, Zhenyu Hou, Yuxiao Dong, Jing Zhang, …
18 Sep 16, 2022

Code for our CVPR 2022 paper "GEN-VLKT: Simplify Association and Enhance Interaction Understanding for HOI Detection"
Yue Liao 47 Dec 4, 2022

The official PyTorch implementation of our paper "Is Space-Time Attention All You Need for Video Understanding?" (TimeSformer)
Facebook Research 1k Dec 31, 2022

Object tracking and object detection applied to track golf putts in real time and display stats/games
Works best with the Perfect Prac…
Max 1 Dec 29, 2021

Simple Pose: Rethinking and Improving a Bottom-up Approach for Multi-Person Pose Estimation
Code and pre-trained models for our paper…
Jia Li 256 Dec 24, 2022

Rethinking Space-Time Networks with Improved Memory Coverage for Efficient Video Object Segmentation
STCN, by Ho Kei Cheng, Yu-Wing Tai, Chi-Keung Tang…
Rex Cheng 456 Dec 12, 2022

Rethinking Transformer-based Set Prediction for Object Detection
Code for the ICCV paper, adapted from Detectron2 and AdelaiD…
Zhiqing Sun 62 Dec 3, 2022

The official repo for OC-SORT: Observation-Centric SORT on video Multi-Object Tracking. OC-SORT is simple, online and robust to occlusion/non-linear motion.
OC-SORT is a pure motion-model-based multi-object tracker that aims to improve tracking robustness in crowded scenes…
Jinkun Cao 325 Jan 5, 2023

Rethinking the Importance of Implementation Tricks in Multi-Agent Reinforcement Learning
RIIT: our open-source code…
405 Jan 6, 2023

TrackFormer: Multi-Object Tracking with Transformers (official implementation)
Tim Meinhardt 321 Dec 29, 2022

FairMOT - A simple baseline for one-shot multi-object tracking
Yifu Zhang 3.6k Jan 8, 2023

Official code for "EagerMOT: 3D Multi-Object Tracking via Sensor Fusion" [ICRA 2021]
Read our ICRA 2021 paper here. Check out the 3 minute video for the quick intro or the full prese…
Aleksandr Kim 276 Dec 30, 2022