ByteTrack: Multi-Object Tracking by Associating Every Detection Box

Overview


ByteTrack is a simple, fast and strong multi-object tracker.

ByteTrack: Multi-Object Tracking by Associating Every Detection Box
Yifu Zhang, Peize Sun, Yi Jiang, Dongdong Yu, Zehuan Yuan, Ping Luo, Wenyu Liu, Xinggang Wang
arXiv preprint arXiv:2110.06864

Abstract

Multi-object tracking (MOT) aims at estimating bounding boxes and identities of objects in videos. Most methods obtain identities by associating detection boxes whose scores are higher than a threshold. Objects with low detection scores, e.g. occluded objects, are simply thrown away, which brings non-negligible true-object misses and fragmented trajectories. To solve this problem, we present a simple, effective and generic association method that tracks by associating every detection box instead of only the high-score ones. For the low-score detection boxes, we utilize their similarities with tracklets to recover true objects and filter out background detections. When applied to 9 different state-of-the-art trackers, our method achieves consistent improvement on IDF1 score, ranging from 1 to 10 points. To push forward the state-of-the-art performance of MOT, we design a simple and strong tracker, named ByteTrack. For the first time, we achieve 80.3 MOTA, 77.3 IDF1 and 63.1 HOTA on the test set of MOT17, running at 30 FPS on a single V100 GPU.
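
In short, BYTE runs two association passes per frame: tracklets are first matched to high-score detection boxes, and the remaining unmatched tracklets are then matched against the low-score boxes instead of discarding them. Below is a minimal, illustrative sketch of this two-pass logic; it is not the repo's implementation (it uses greedy IoU matching where the paper uses Hungarian assignment, and the dict-based boxes with "box"/"score" keys are assumptions):

def iou(a, b):
    """IoU of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def greedy_match(tracks, dets, iou_thresh=0.3):
    """Greedy IoU matching; a stand-in for the Hungarian assignment in the paper."""
    matches, used = [], set()
    for ti, trk in enumerate(tracks):
        cands = [(iou(trk["box"], det["box"]), di) for di, det in enumerate(dets) if di not in used]
        score, best = max(cands, default=(0.0, None))
        if best is not None and score > iou_thresh:
            used.add(best)
            matches.append((ti, best))
    matched_t = {ti for ti, _ in matches}
    unmatched_tracks = [i for i in range(len(tracks)) if i not in matched_t]
    unmatched_dets = [i for i in range(len(dets)) if i not in used]
    return matches, unmatched_tracks, unmatched_dets

def byte_associate(tracks, dets, high_thresh=0.6, low_thresh=0.1):
    """Two-pass BYTE association sketch."""
    high = [d for d in dets if d["score"] >= high_thresh]
    low = [d for d in dets if low_thresh <= d["score"] < high_thresh]
    # Pass 1: tracklets vs. high-score boxes.
    m1, unmatched, new_track_seeds = greedy_match(tracks, high)
    # Pass 2: leftover tracklets vs. low-score boxes. This is the BYTE idea:
    # occluded objects are recovered, while low boxes that match nothing are
    # dropped as background instead of starting new tracks.
    # Note: track indices in m2 are relative to the unmatched sub-list.
    m2, lost, _ = greedy_match([tracks[i] for i in unmatched], low)
    return m1, m2, lost, new_track_seeds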

Tracking performance

Results on MOT challenge test set

Dataset MOTA IDF1 HOTA MT ML FP FN IDs FPS
MOT17 80.3 77.3 63.1 53.2% 14.5% 25491 83721 2196 29.6
MOT20 77.8 75.2 61.3 69.2% 9.5% 26249 87594 1223 13.7

Visualization results on MOT challenge test set

Installation

Step1. Install ByteTrack.

git clone https://github.com/ifzhang/ByteTrack.git
cd ByteTrack
pip3 install -r requirements.txt
python3 setup.py develop

Step2. Install pycocotools.

pip3 install cython; pip3 install 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'

Step3. Others

pip3 install cython_bbox

Data preparation

Download MOT17, MOT20, CrowdHuman, Cityperson and ETHZ, and put them under <ByteTrack_HOME>/datasets in the following structure:

datasets
   |——————mot
   |        └——————train
   |        └——————test
   └——————crowdhuman
   |         └——————Crowdhuman_train
   |         └——————Crowdhuman_val
   |         └——————annotation_train.odgt
   |         └——————annotation_val.odgt
   └——————MOT20
   |        └——————train
   |        └——————test
   └——————Cityscapes
   |        └——————images
   |        └——————labels_with_ids
   └——————ETHZ
            └——————eth01
            └——————...
            └——————eth07

Then, convert the datasets to COCO format and mix the different training data:

cd <ByteTrack_HOME>
python3 tools/convert_mot17_to_coco.py
python3 tools/convert_mot20_to_coco.py
python3 tools/convert_crowdhuman_to_coco.py
python3 tools/convert_cityperson_to_coco.py
python3 tools/convert_ethz_to_coco.py

Before mixing different datasets, you need to follow the operations in mix_xxx.py to create the data folders and links. Finally, you can mix the training data:

cd <ByteTrack_HOME>
python3 tools/mix_data_ablation.py
python3 tools/mix_data_test_mot17.py
python3 tools/mix_data_test_mot20.py

Model zoo

Ablation model

Train on CrowdHuman and MOT17 half train, evaluate on MOT17 half val

Model MOTA IDF1 IDs FPS
ByteTrack_ablation [google], [baidu(code:eeo8)] 76.6 79.3 159 29.6

MOT17 test model

Train on CrowdHuman, MOT17, Cityperson and ETHZ, evaluate on MOT17 train

Model MOTA IDF1 IDs FPS
bytetrack_x_mot17 [google], [baidu(code:ic0i)] 90.0 83.3 422 29.6
bytetrack_l_mot17 [google], [baidu(code:1cml)] 88.7 80.7 460 43.7
bytetrack_m_mot17 [google], [baidu(code:u3m4)] 87.0 80.1 477 54.1
bytetrack_s_mot17 [google], [baidu(code:qflm)] 79.2 74.3 533 64.5

MOT20 test model

Train on CrowdHuman and MOT20, evaluate on MOT20 train

Model MOTA IDF1 IDs FPS
bytetrack_x_mot20 [google], [baidu(code:3apd)] 93.4 89.3 1057 17.5

Training

The COCO-pretrained YOLOX model can be downloaded from their model zoo. After downloading the pretrained models, put them under <ByteTrack_HOME>/pretrained.

  • Train ablation model (MOT17 half train and CrowdHuman)
cd <ByteTrack_HOME>
python3 tools/train.py -f exps/example/mot/yolox_x_ablation.py -d 8 -b 48 --fp16 -o -c pretrained/yolox_x.pth
  • Train MOT17 test model (MOT17 train, CrowdHuman, Cityperson and ETHZ)
cd <ByteTrack_HOME>
python3 tools/train.py -f exps/example/mot/yolox_x_mix_det.py -d 8 -b 48 --fp16 -o -c pretrained/yolox_x.pth
  • Train MOT20 test model (MOT20 train, CrowdHuman)

For MOT20, you need to clip the bounding boxes inside the image.

Add the clip operation at lines 134-135 in data_augment.py, lines 122-125 and lines 217-225 in mosaicdetection.py, and lines 115-118 in boxes.py.
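
The clip itself is a small change; below is a hedged, generic sketch of what the operation does (illustrative only; clip_boxes is not a function in this repo, edit the files listed above):

import numpy as np

def clip_boxes(boxes: np.ndarray, img_h: int, img_w: int) -> np.ndarray:
    """Clip [x1, y1, x2, y2] boxes in place so they stay inside the image."""
    boxes[:, 0::2] = np.clip(boxes[:, 0::2], 0, img_w - 1)  # x1, x2
    boxes[:, 1::2] = np.clip(boxes[:, 1::2], 0, img_h - 1)  # y1, y2
    return boxes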

cd <ByteTrack_HOME>
python3 tools/train.py -f exps/example/mot/yolox_x_mix_mot20_ch.py -d 8 -b 48 --fp16 -o -c pretrained/yolox_x.pth

Tracking

  • Evaluation on MOT17 half val

Run ByteTrack:

cd <ByteTrack_HOME>
python3 tools/track.py -f exps/example/mot/yolox_x_ablation.py -c pretrained/bytetrack_ablation.pth.tar -b 1 -d 1 --fp16 --fuse

You can get 76.6 MOTA using our pretrained model.

Run other trackers:

python3 tools/track_sort.py -f exps/example/mot/yolox_x_ablation.py -c pretrained/bytetrack_ablation.pth.tar -b 1 -d 1 --fp16 --fuse
python3 tools/track_deepsort.py -f exps/example/mot/yolox_x_ablation.py -c pretrained/bytetrack_ablation.pth.tar -b 1 -d 1 --fp16 --fuse
python3 tools/track_motdt.py -f exps/example/mot/yolox_x_ablation.py -c pretrained/bytetrack_ablation.pth.tar -b 1 -d 1 --fp16 --fuse
  • Test on MOT17

Run ByteTrack:

cd <ByteTrack_HOME>
python3 tools/track.py -f exps/example/mot/yolox_x_mix_det.py -c pretrained/bytetrack_x_mot17.pth.tar -b 1 -d 1 --fp16 --fuse
python3 tools/interpolation.py

Submit the txt files to the MOTChallenge website and you can get 79+ MOTA (for 80+ MOTA, you need to carefully tune the test image size and the high-score detection threshold of each sequence).

  • Test on MOT20

We use input size 1600 x 896 for MOT20-04 and MOT20-07, and 1920 x 736 for MOT20-06 and MOT20-08. You can edit these in yolox_x_mix_mot20_ch.py.

Run ByteTrack:

cd <ByteTrack_HOME>
python3 tools/track.py -f exps/example/mot/yolox_x_mix_mot20_ch.py -c pretrained/bytetrack_x_mot20.pth.tar -b 1 -d 1 --fp16 --fuse --match_thresh 0.7 --mot20
python3 tools/interpolation.py

Submit the txt files to the MOTChallenge website and you can get 77+ MOTA (for higher MOTA, you need to carefully tune the test image size and the high-score detection threshold of each sequence).

Applying BYTE to other trackers

See tutorials.
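
As a quick orientation (and a partial answer to the "Use my own detection model" question in the comments below): per frame, the tracker itself only needs an (N, 5) array of [x1, y1, x2, y2, score] boxes. A minimal sketch follows, where video_frames and my_detector are hypothetical placeholders and the argument names follow the demo flags shown elsewhere on this page:

import numpy as np
from yolox.tracker.byte_tracker import BYTETracker

class TrackerArgs:
    # Minimal stand-in for the argparse namespace BYTETracker expects.
    track_thresh = 0.5   # high-score detection threshold
    track_buffer = 30    # frames a lost track is kept alive
    match_thresh = 0.8   # matching threshold
    mot20 = False

tracker = BYTETracker(TrackerArgs())
for frame in video_frames:               # hypothetical frame source
    boxes, scores = my_detector(frame)   # hypothetical detector: (N, 4), (N,)
    dets = np.concatenate([boxes, scores.reshape(-1, 1)], axis=1)
    h, w = frame.shape[:2]
    online_targets = tracker.update(dets, [h, w], [h, w])
    for t in online_targets:
        print(t.track_id, t.tlwh, t.score)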

Demo

cd <ByteTrack_HOME>
python3 tools/demo_track.py video -f exps/example/mot/yolox_x_mix_det.py -c pretrained/bytetrack_x_mot17.pth.tar --fp16 --fuse --save_result

Deploy

  1. ONNX export and ONNXRuntime
  2. TensorRT in Python
  3. TensorRT in C++
  4. ncnn in C++
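
Each route has its own guide under deploy/. As a rough illustration of the first step, here is a hedged ONNX export sketch (the function name, input size and opset are assumptions; the repo's export script is authoritative):

import torch

def export_to_onnx(model: torch.nn.Module, path: str = "bytetrack_x.onnx") -> None:
    """Export a trained detector to ONNX for ONNXRuntime or TensorRT."""
    model.eval()
    # (800, 1440) mirrors the input_size used by the MOT17 exps (an assumption here).
    dummy = torch.randn(1, 3, 800, 1440)
    torch.onnx.export(model, dummy, path, opset_version=11,
                      input_names=["images"], output_names=["output"])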

Citation

@article{zhang2021bytetrack,
  title={ByteTrack: Multi-Object Tracking by Associating Every Detection Box},
  author={Zhang, Yifu and Sun, Peize and Jiang, Yi and Yu, Dongdong and Yuan, Zehuan and Luo, Ping and Liu, Wenyu and Wang, Xinggang},
  journal={arXiv preprint arXiv:2110.06864},
  year={2021}
}

Acknowledgement

A large part of the code is borrowed from YOLOX, FairMOT, TransTrack and JDE-Cpp. Many thanks for their wonderful work.

Comments
  • Use my own detection model

    Hi, thanks for sharing this project. If I want to use my own detection model with your tracker, what/where is the main entry point for me to adapt your code in order to replace the YOLOX detection model? Do I need to retrain everything, or can I inject the detections into your pretrained model for inference? Thanks

    opened by Tetsujinfr 20
  • Deleting known tracks for new videos

    I am creating a new BYTETracker object for each video, but the tracking ids seem to keep increasing across videos. I suppose there is some sort of module-level cache that I need to clear so the track ids start from 0 for each video?
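
    A minimal workaround sketch, assuming the class-level ID counter discussed in the "Change BaseTrack attributes to Object attributes" PR below (the BaseTrack._count name and module path are assumptions):

    from yolox.tracker.basetrack import BaseTrack

    def reset_track_ids():
        # Assumed: BaseTrack keeps a class-level counter behind next_id(), so
        # new BYTETracker instances keep counting unless it is reset per video.
        BaseTrack._count = 0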

    opened by dumbPy 8
  • Results after the full 80 epochs are very poor and cannot be reproduced; asking for your help.

    I use 2 RTX Titan (24G) with batch_size 8 to train:

    Namespace(batch_size=8, ckpt='YOLOX_outputs/yolox_x_mix_det/last_epoch_ckpt.pth.tar', devices=2, dist_backend='nccl', dist_url=None, exp_file='exps/example/mot/yolox_x_mix_det.py', experiment_name='yolox_x_mix_det', fp16=True, local_rank=0, machine_rank=0, name=None, num_machines=1, occupy=True, opts=[], resume=True, start_epoch=18)

    Experiment config:

    keys               values
    seed               None
    output_dir         './YOLOX_outputs'
    print_interval     20
    eval_interval      5
    num_classes        1
    depth              1.33
    width              1.25
    data_num_workers   4
    input_size         (800, 1440)
    random_size        (18, 32)
    train_ann          'train.json'
    val_ann            'val_half.json'
    degrees            10.0
    translate          0.1
    scale              (0.1, 2)
    mscale             (0.8, 1.6)
    shear              2.0
    perspective        0.0
    enable_mixup       True
    warmup_epochs      1
    max_epoch          80
    warmup_lr          0
    basic_lr_per_img   1.5625e-05
    scheduler          'yoloxwarmcos'
    no_aug_epochs      10
    min_lr_ratio       0.05
    ema                True
    weight_decay       0.0005
    momentum           0.9
    exp_name           'yolox_x_mix_det'
    test_size          (800, 1440)
    test_conf          0.001
    nmsthre            0.7
    data_name          'mix_det'
    name               ''
    val_name           'train'

    After finishing 80 epochs, I get bad results on the training set compared with the results reported in the paper, so I would like to ask for your help on what is wrong. (Screenshots of the paper's results and my results omitted.)

    opened by QiyuLuo 7
  • track_id bug with FP16

    Hi, I found an underlying bug when the model is trained with FP16. In yolox/core/trainer.py, when we get targets with shape [batchsize, 1000, class_id + tlwh + track_id], the track_id is correct. But when targets is converted to FP16, the track_id loses precision, resulting in wrong labels for ReID. Frankly, this bug is not easy to find.

    Although this bug does not affect ByteTrack performance, which uses only detection annotations, it will severely harm ReID performance when trying to combine ByteTrack with a ReID module in the JDE paradigm.

    I can make a PR if you think it is needed :)
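
    A quick illustration of the precision loss (FP16 has a 10-bit mantissa, so integers above 2048 are not all exactly representable):

    import torch

    # Track ids above 2048 get rounded when targets are cast to half precision.
    ids = torch.tensor([2047.0, 2049.0, 4099.0])
    print(ids.half())  # tensor([2047., 2048., 4100.], dtype=torch.float16)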

    opened by HanGuangXin 6
  • ByteTrack Deployment on Deepstream 6.0+ Major Memory Leak

    Hello,

    I am currently testing the deployment guide of ByteTrack in Deepstream on a Jetson AGX Xavier with 8 camera feeds, and it works great apart from a major memory leak. The program uses the entire device memory within 35 minutes of running. If I switch back to NVIDIA's default trackers, no leak occurs.

    I have gone through the numerous issues in this repo that attempt to fix the memory leak, such as:

    https://github.com/ifzhang/ByteTrack/issues/253

    https://github.com/ifzhang/ByteTrack/pull/252

    https://github.com/ifzhang/ByteTrack/pull/249

    https://github.com/ifzhang/ByteTrack/pull/158/files

    Unfortunately, none of the above 'solutions' fix the memory leak. I have run the Deepstream pipeline with valgrind and can verify a leak is occurring, but valgrind does not show exactly where the leak takes place; it just shows that a pointer is continuously being allocated somewhere on the heap.

    I am actively trying to fix this issue, but since I have essentially no experience with the mechanics of ByteTrack, I can only do so much besides poking and prodding around the codebase until a solution presents itself.

    I'm hoping @ifzhang, @chirag4798 or @callmesora might be able to reply to this issue with some assistance and/or insight. Otherwise, if you are reading this, treat it as a warning not to deploy or use the Deepstream version of ByteTrack until this issue has been marked resolved by me or one of the above authors.

    opened by EmpireofKings 5
  • Tracking disappears when detecting vehicles.

    I have a problem using ByteTrack for multi-class multi-object tracking. I am using my trained YOLOv5 as the backbone detector to detect objects on the road, including vehicles, bicycles, persons, and so on. My detector detects objects perfectly, but some vehicle bounding boxes disappear after the matching stage even though they all have high detection scores; I tried modifying the thresholds but still got bad results. I actually encounter the same problem in FairMOT, but not in DeepSORT. I am trying to find which step in the matching stage causes this. Do you have any suggestions? Looking forward to your reply, thanks.

    opened by leisurecodog 5
  • No appearance embedding is used?

    Hi, according to this code example: https://github.com/ifzhang/ByteTrack#combining-byte-with-other-detectors there is no appearance embedding input for the tracker. Could you confirm that your tracking algorithm is better than DeepSORT and TMOT (https://github.com/Zhongdao/Towards-Realtime-MOT), which use appearance embeddings, without using them to compute similarities between tracklets? I have not checked the paper yet, just looking for a quick answer. Many thanks! :)

    opened by JunweiLiang 5
  • Pipeline is very slow

    Check my attached log.txt file: I got 0.9 FPS, so an 11-second demo video (nearly 300 frames) takes around 5-7 minutes to finish.

    2021-11-01 14:01:50.434 | INFO | main:main:290 - Args: Namespace(camid=0, ckpt='pretrained/bytetrack_x_mot17.pth.tar', conf=None, demo='video', device='gpu', exp_file='exps/example/mot/yolox_x_mix_det.py', experiment_name='yolox_x_mix_det', fp16=True, fuse=True, match_thresh=0.8, min_box_area=10, mot20=False, name=None, nms=None, path='./videos/palace.mp4', save_result=True, track_buffer=30, track_thresh=0.5, trt=False, tsize=None)
    /home/mossad/projects/ByteTrack/venv/lib/python3.8/site-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /pytorch/c10/core/TensorImpl.h:1156.) return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
    2021-11-01 14:01:51.169 | INFO | main:main:300 - Model Summary: Params: 99.00M, Gflops: 791.73
    2021-11-01 14:01:53.548 | INFO | main:main:311 - loading checkpoint
    2021-11-01 14:02:00.999 | INFO | main:main:315 - loaded checkpoint done.
    2021-11-01 14:02:00.999 | INFO | main:main:318 - Fusing model...
    /home/mossad/projects/ByteTrack/venv/lib/python3.8/site-packages/torch/nn/modules/module.py:561: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the gradient for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more information. if param.grad is not None:
    2021-11-01 14:02:01.587 | INFO | main:imageflow_demo:236 - video save_path is ./YOLOX_outputs/yolox_x_mix_det/track_vis/2021_11_01_14_02_01/palace.mp4
    2021-11-01 14:02:01.589 | INFO | main:imageflow_demo:246 - Processing frame 0 (100000.00 fps)
    2021-11-01 14:02:24.684 | INFO | main:imageflow_demo:246 - Processing frame 20 (0.92 fps)
    2021-11-01 14:02:47.751 | INFO | main:imageflow_demo:246 - Processing frame 40 (0.92 fps)
    2021-11-01 14:03:11.332 | INFO | main:imageflow_demo:246 - Processing frame 60 (0.91 fps)
    2021-11-01 14:03:35.619 | INFO | main:imageflow_demo:246 - Processing frame 80 (0.90 fps)
    2021-11-01 14:03:58.876 | INFO | main:imageflow_demo:246 - Processing frame 100 (0.90 fps)
    2021-11-01 14:04:21.899 | INFO | main:imageflow_demo:246 - Processing frame 120 (0.90 fps)
    2021-11-01 14:04:44.870 | INFO | main:imageflow_demo:246 - Processing frame 140 (0.91 fps)

    opened by Mohamed209 5
  • [Demo] No bounding box displayed from demo_track.py

    I've followed all the steps mentioned in the README.md of this repo. However, when I tried out the demo using demo_track.py on my machine, no bounding boxes were shown. The output in the YOLOX_outputs folder confirms this (screenshot omitted).

    I have used the pre-trained models (particularly the bytetrack_x_mot17 and bytetrack_nano_mot17 models) provided in this repo, and it still gives me the same result.

    When I tried out the Google Colab demo, even with my own model, there were bounding boxes shown in the output.

    What seems to be the issue here? Could it be the dependencies issue? Should I reinstall all the dependencies from the start?

    opened by CYJGoh 3
  • What is the reason for calculating covariance with mean[3] = height in Kalman filter?

    https://github.com/ifzhang/ByteTrack/blob/8d47ea56f4523276eb46386625471afcb11f09d4/yolox/tracker/kalman_filter.py#L107-L116

    Is there a special reason why the height (mean[3]) is used when modeling the covariance?

    opened by youngjae-avikus 3
  • Change BaseTrack attributes to Object attributes

    All the attributes are used after object initialisation only, so there is no reason to keep them as class attributes.
    Keeping them as class attributes instead causes #79.

    This should fix #79 and should not affect any other part.
    I have checked that we always call self.next_id() and never BaseTrack.next_id(), i.e. there is no need for it to be a static method anymore, since all the attributes are now object attributes.

    Fixes #79

    opened by dumbPy 3
  • Docker-Error: could not select device driver with GPU capabilities

    "docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]]."

    I got the above error when running the below mentioned command

    docker run --gpus all -it --rm \
      -v $PWD/pretrained:/workspace/ByteTrack/pretrained \
      -v $PWD/datasets:/workspace/ByteTrack/datasets \
      -v $PWD/YOLOX_outputs:/workspace/ByteTrack/YOLOX_outputs \
      -v /tmp/.X11-unix/:/tmp/.X11-unix:rw \
      --device /dev/video0:/dev/video0:mwr \
      --net=host \
      -e XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR \
      -e DISPLAY=$DISPLAY \
      --privileged \
      bytetrack:latest

    Please solve this issue

    Here are my device specs (screenshot omitted).

    opened by Bumpeet 0
  • Is there an implementation of ByteTrack with ReID?

    Looking at https://github.com/ifzhang/ByteTrack/blob/main/yolox/tracker/byte_tracker.py, there appears to be no ReID used at all, neither for reactivating tracks nor in the association function. Is there an implementation somewhere that has these ReID capabilities injected into the ByteTracker?

    opened by levinwil 0
  • Keep track of detection index

    Addresses #195 and an unanswered question in #5: currently, after updating the ByteTrack object, some of the bounding boxes are dropped, and it's not possible to figure out the original bounding box index from the tracker. For example:

    boxes, scores, classes, ... = some_detection_model.inference(frame)
    boxes_scores = np.concatenate([boxes, scores.reshape(-1, 1)], axis=1)
    # In this step len(online_targets) <= len(boxes)
    online_targets = tracker.update(boxes_scores, [frame.shape[0], frame.shape[1]], [frame.shape[0], frame.shape[1]])
    # Since some of the boxes are dropped, there is no way to recover per-box
    # information such as the class. I used a bbox-matching method to figure out
    # the original index, but that is inefficient (N^2).

    With the solution in this PR, you can track the index directly using tracker.update(..., track_det_idx=True):

    boxes, scores, classes, ... = some_detection_model.inference(image)
    det_idxs_orig = np.arange(boxes.shape[0]).reshape(-1, 1)
    boxes_scores = np.concatenate([boxes, scores.reshape(-1, 1), det_idxs_orig], axis=1)
    online_targets = tracker.update(boxes_scores, [frame.shape[0], frame.shape[1]], [frame.shape[0], frame.shape[1]], track_det_idx=True)

    for track_i, an_online_target in enumerate(online_targets):
        track_xyxy = utils.tlwh_to_xyxy(an_online_target.tlwh)
        det_i = an_online_target.det_idx
        tracked_obj_class = classes[int(det_i)]
    opened by abaybektursun 0
  • How to fine-tune one of the pre-trained models?

    Hi, thanks for the excellent work. Can you please tell me how I can fine-tune ByteTrack on a custom dataset? The instructions given in the repository train YOLOX from scratch starting from the COCO-pretrained weights. How can I fine-tune one of the pre-trained ByteTrack models instead?

    opened by danial880 0
  • Running ByteTrack separately from the object detector

    Hi,

    I want to run a small YOLO detection model on one device that cannot run both YOLO and tracking (memory restrictions), and run the ByteTrack tracker separately on another device. Is this possible? Thanks!

    opened by PiDGMT 1
  • No such file or directory

    FileNotFoundError: [Errno 2] No such file or directory: 'datasets/crowdhuman/CrowdHuman_val/273271,c9db000d5146c15.jpg'. Is there no c9db000d5146c15.jpg in CrowdHuman_val? I have already checked it.

    opened by gyh420 2