ByteTrack with a ReID module following the paradigm of FairMOT; the tracking strategy is borrowed from FairMOT/JDE.

Overview

ByteTrack_ReID

ByteTrack is the SOTA tracker on MOT benchmarks, combining the strong YOLOX detector with a simple association strategy based only on motion information.

Motion information (IoU distance) is efficient and effective for short-term tracking, but it cannot recover targets after long disappearances or handle conditions such as a moving camera.

It is therefore worthwhile to enhance ByteTrack with a ReID module for long-term tracking and for better performance under challenging conditions such as a moving camera.
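
As a rough illustration of why appearance helps, below is a minimal sketch of fusing the IoU distance with a ReID embedding distance into a single association cost. The function names and the fusion weight w are illustrative assumptions, not this repo's exact code:

import numpy as np

def iou_distance(tracks_tlbr, dets_tlbr):
    """1 - IoU between every track box and every detection box."""
    # boxes are (N, 4) arrays of x1, y1, x2, y2
    tl = np.maximum(tracks_tlbr[:, None, :2], dets_tlbr[None, :, :2])
    br = np.minimum(tracks_tlbr[:, None, 2:], dets_tlbr[None, :, 2:])
    wh = np.clip(br - tl, 0, None)
    inter = wh[..., 0] * wh[..., 1]
    area_t = np.prod(tracks_tlbr[:, 2:] - tracks_tlbr[:, :2], axis=1)
    area_d = np.prod(dets_tlbr[:, 2:] - dets_tlbr[:, :2], axis=1)
    return 1.0 - inter / (area_t[:, None] + area_d[None, :] - inter + 1e-6)

def embedding_distance(track_feats, det_feats):
    """Cosine distance, assuming L2-normalized ReID embeddings."""
    return 1.0 - track_feats @ det_feats.T

def fused_cost(tracks_tlbr, dets_tlbr, track_feats, det_feats, w=0.5):
    # IoU handles short-term continuity; appearance survives occlusion
    # and camera motion. A weighted sum is the simplest possible fusion.
    return w * iou_distance(tracks_tlbr, dets_tlbr) \
        + (1 - w) * embedding_distance(track_feats, det_feats)

The tracking strategy borrowed from FairMOT/JDE fuses motion and appearance more carefully than this, but a weighted sum conveys the idea.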

Some code is borrowed from FairMOT.

For now, models are trained on one half of MOT17 and tested on the other half, and the performance is still lower than that of the original ByteTrack.

Any issues and suggestions are welcome!

Tracking results using the tracking strategy of ByteTrack, with the detection head and ReID head trained together.

Tracking results using the tracking strategy of FairMOT, with the detection head and ReID head trained together.

Modifications, TODOs and Performance

Modifications

  • Enhanced ByteTrack with a ReID module (head) following the paradigm of FairMOT.
  • Added a classifier for supervised training of the ReID head.
  • Used the uncertainty loss from FairMOT to balance the detection and ReID tasks (see the sketch after this list).
  • Borrowed the tracking strategy from FairMOT.
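
As a sketch of the second and third items above: the ReID head's embeddings are supervised by a classifier over all training identities, and a FairMOT-style uncertainty loss (Kendall et al.) balances the detection and ID objectives. The names emb_dim and nID and the initial values of the learnable weights follow FairMOT's convention, but treat the exact code as an assumption:

import torch
import torch.nn as nn

class ReIDLossWithUncertainty(nn.Module):
    def __init__(self, emb_dim=128, nID=1000):
        super().__init__()
        # Classifier over all training identities supervises the embeddings.
        self.classifier = nn.Linear(emb_dim, nID)
        self.id_criterion = nn.CrossEntropyLoss(ignore_index=-1)
        # Learnable log-variances that balance the two tasks.
        self.s_det = nn.Parameter(torch.tensor(-1.85))
        self.s_id = nn.Parameter(torch.tensor(-1.05))

    def forward(self, embeddings, id_targets, det_loss):
        logits = self.classifier(embeddings)             # (N, nID)
        id_loss = self.id_criterion(logits, id_targets)  # scalar
        # total = 0.5 * (exp(-s_det)*L_det + exp(-s_id)*L_id + s_det + s_id)
        return 0.5 * (torch.exp(-self.s_det) * det_loss
                      + torch.exp(-self.s_id) * id_loss
                      + self.s_det + self.s_id)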

TODOs

  • support more datasets
  • single class -> multiple classes
  • other loss functions for better ReID performance
  • other strategies for balancing multiple tasks
  • …

The following content is the original README of ByteTrack.

ByteTrack is a simple, fast and strong multi-object tracker.

ByteTrack: Multi-Object Tracking by Associating Every Detection Box

Yifu Zhang, Peize Sun, Yi Jiang, Dongdong Yu, Zehuan Yuan, Ping Luo, Wenyu Liu, Xinggang Wang

arXiv 2110.06864

Demo Links

  • Google Colab demo (Open In Colab)
  • Hugging Face demo (Hugging Face Spaces)
  • Original paper: arXiv 2110.06864

Abstract

Multi-object tracking (MOT) aims at estimating bounding boxes and identities of objects in videos. Most methods obtain identities by associating detection boxes whose scores are higher than a threshold. The objects with low detection scores, e.g. occluded objects, are simply thrown away, which brings non-negligible true object missing and fragmented trajectories. To solve this problem, we present a simple, effective and generic association method, tracking by associating every detection box instead of only the high score ones. For the low score detection boxes, we utilize their similarities with tracklets to recover true objects and filter out the background detections. When applied to 9 different state-of-the-art trackers, our method achieves consistent improvement on IDF1 scores ranging from 1 to 10 points. To put forwards the state-of-the-art performance of MOT, we design a simple and strong tracker, named ByteTrack. For the first time, we achieve 80.3 MOTA, 77.3 IDF1 and 63.1 HOTA on the test set of MOT17 with 30 FPS running speed on a single V100 GPU.
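
The two-stage association described above is compact enough to sketch. Below is a simplified illustration of BYTE assuming IoU costs and Hungarian matching; it omits the Kalman prediction, track initialization, and lost-track buffering of the full tracker:

import numpy as np
from scipy.optimize import linear_sum_assignment

def match(cost, thresh):
    """Hungarian matching; keep pairs whose cost is below thresh."""
    if cost.size == 0:
        return [], list(range(cost.shape[0]))
    rows, cols = linear_sum_assignment(cost)
    pairs = [(r, c) for r, c in zip(rows, cols) if cost[r, c] < thresh]
    matched_rows = {r for r, _ in pairs}
    return pairs, [r for r in range(cost.shape[0]) if r not in matched_rows]

def byte_associate(track_boxes, det_boxes, det_scores, iou_dist,
                   high_thresh=0.6, match_thresh=0.8):
    """track_boxes: (T, 4); det_boxes: (D, 4); det_scores: (D,)."""
    high = det_scores >= high_thresh
    # Stage 1: all tracks vs. high-score detections.
    pairs_hi, unmatched = match(iou_dist(track_boxes, det_boxes[high]),
                                match_thresh)
    # Stage 2: leftover tracks vs. low-score detections, which are often
    # occluded true objects rather than background.
    pairs_lo, lost = match(iou_dist(track_boxes[unmatched], det_boxes[~high]),
                           match_thresh)
    # Indices in pairs_lo/lost are relative to the unmatched/low subsets.
    return pairs_hi, pairs_lo, lost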

Tracking performance

Results on MOT challenge test set

Dataset MOTA IDF1 HOTA MT ML FP FN IDs FPS
MOT17 80.3 77.3 63.1 53.2% 14.5% 25491 83721 2196 29.6
MOT20 77.8 75.2 61.3 69.2% 9.5% 26249 87594 1223 13.7

Visualization results on MOT challenge test set

Installation

1. Installing on the host machine

Step1. Install ByteTrack.

git clone https://github.com/ifzhang/ByteTrack.git
cd ByteTrack
pip3 install -r requirements.txt
python3 setup.py develop

Step2. Install pycocotools.

pip3 install cython; pip3 install 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'

Step3. Others

pip3 install cython_bbox

2. Docker build

docker build -t bytetrack:latest .

# Startup sample
mkdir -p pretrained && \
mkdir -p YOLOX_outputs && \
xhost +local: && \
docker run --gpus all -it --rm \
-v $PWD/pretrained:/workspace/ByteTrack/pretrained \
-v $PWD/datasets:/workspace/ByteTrack/datasets \
-v $PWD/YOLOX_outputs:/workspace/ByteTrack/YOLOX_outputs \
-v /tmp/.X11-unix/:/tmp/.X11-unix:rw \
--device /dev/video0:/dev/video0:mwr \
--net=host \
-e XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR \
-e DISPLAY=$DISPLAY \
--privileged \
bytetrack:latest

Data preparation

Download MOT17, MOT20, CrowdHuman, Cityperson, and ETHZ, and put them under <ByteTrack_HOME>/datasets in the following structure:

datasets
   |——————mot
   |        └——————train
   |        └——————test
   └——————crowdhuman
   |         └——————Crowdhuman_train
   |         └——————Crowdhuman_val
   |         └——————annotation_train.odgt
   |         └——————annotation_val.odgt
   └——————MOT20
   |        └——————train
   |        └——————test
   └——————Cityscapes
   |        └——————images
   |        └——————labels_with_ids
   └——————ETHZ
            └——————eth01
            └——————...
            └——————eth07

Then, you need to convert the datasets to COCO format and mix the different training data:

cd <ByteTrack_HOME>
python3 tools/convert_mot17_to_coco.py
python3 tools/convert_mot20_to_coco.py
python3 tools/convert_crowdhuman_to_coco.py
python3 tools/convert_cityperson_to_coco.py
python3 tools/convert_ethz_to_coco.py

Before mixing different datasets, you need to follow the operations in mix_xxx.py to create the data folders and symlinks. Finally, you can mix the training data:

cd <ByteTrack_HOME>
python3 tools/mix_data_ablation.py
python3 tools/mix_data_test_mot17.py
python3 tools/mix_data_test_mot20.py

Model zoo

Ablation model

Train on CrowdHuman and MOT17 half train, evaluate on MOT17 half val

Model MOTA IDF1 IDs FPS
ByteTrack_ablation [google], [baidu(code:eeo8)] 76.6 79.3 159 29.6

MOT17 test model

Train on CrowdHuman, MOT17, Cityperson and ETHZ, evaluate on MOT17 train.

  • Standard models
Model MOTA IDF1 IDs FPS
bytetrack_x_mot17 [google], [baidu(code:ic0i)] 90.0 83.3 422 29.6
bytetrack_l_mot17 [google], [baidu(code:1cml)] 88.7 80.7 460 43.7
bytetrack_m_mot17 [google], [baidu(code:u3m4)] 87.0 80.1 477 54.1
bytetrack_s_mot17 [google], [baidu(code:qflm)] 79.2 74.3 533 64.5
  • Light models
Model MOTA IDF1 IDs Params(M) FLOPs(G)
bytetrack_nano_mot17 [google], [baidu(code:1ub8)] 69.0 66.3 531 0.90 3.99
bytetrack_tiny_mot17 [google], [baidu(code:cr8i)] 77.1 71.5 519 5.03 24.45

MOT20 test model

Train on CrowdHuman and MOT20, evaluate on MOT20 train.

Model MOTA IDF1 IDs FPS
bytetrack_x_mot20 [google], [baidu(code:3apd)] 93.4 89.3 1057 17.5

Training

The COCO-pretrained YOLOX model can be downloaded from their model zoo. After downloading the pretrained models, you can put them under <ByteTrack_HOME>/pretrained.

  • Train ablation model (MOT17 half train and CrowdHuman)
cd <ByteTrack_HOME>
python3 tools/train.py -f exps/example/mot/yolox_x_ablation.py -d 8 -b 48 --fp16 -o -c pretrained/yolox_x.pth
  • Train MOT17 test model (MOT17 train, CrowdHuman, Cityperson and ETHZ)
cd <ByteTrack_HOME>
python3 tools/train.py -f exps/example/mot/yolox_x_mix_det.py -d 8 -b 48 --fp16 -o -c pretrained/yolox_x.pth
  • Train MOT20 test model (MOT20 train, CrowdHuman)

For MOT20, you need to clip the bounding boxes inside the image.

Add the clip operation at lines 134-135 in data_augment.py, lines 122-125 and 217-225 in mosaicdetection.py, and lines 115-118 in boxes.py.
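
The clip operation itself is a small helper; a hedged sketch of what it looks like (array and variable names differ across those files):

import numpy as np

def clip_boxes(boxes, img_h, img_w):
    # boxes: (N, 4) array of x1, y1, x2, y2 in pixel coordinates.
    # MOT20 annotations can extend outside the frame, which breaks training.
    boxes[:, 0::2] = np.clip(boxes[:, 0::2], 0, img_w - 1)
    boxes[:, 1::2] = np.clip(boxes[:, 1::2], 0, img_h - 1)
    return boxes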

cd <ByteTrack_HOME>
python3 tools/train.py -f exps/example/mot/yolox_x_mix_mot20_ch.py -d 8 -b 48 --fp16 -o -c pretrained/yolox_x.pth
  • Train custom dataset

First, you need to prepare your dataset in COCO format. You can refer to MOT-to-COCO or CrowdHuman-to-COCO. Then, you need to create an Exp file for your dataset. You can refer to the CrowdHuman training Exp file. Don't forget to modify get_data_loader() and get_eval_loader() in your Exp file (a sketch follows the command below). Finally, you can train ByteTrack on your dataset by running:

cd <ByteTrack_HOME>
python3 tools/train.py -f exps/example/mot/your_exp_file.py -d 8 -b 48 --fp16 -o -c pretrained/yolox_x.pth
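
For orientation, a minimal Exp file might look like the sketch below. The attribute names follow the YOLOX Exp convention used by this repo's example files, but treat the exact fields and values as assumptions and copy from the CrowdHuman training Exp file in practice:

# exps/example/mot/your_exp_file.py (hypothetical)
import os
from yolox.exp import Exp as MyExp

class Exp(MyExp):
    def __init__(self):
        super().__init__()
        self.num_classes = 1    # single class: person
        self.depth = 1.33       # YOLOX-X scaling factors
        self.width = 1.25
        self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]
        self.train_ann = "train.json"   # your COCO-format annotation files
        self.val_ann = "val.json"
        self.input_size = (800, 1440)
        self.test_size = (800, 1440)
        self.max_epoch = 80

    # Also override get_data_loader() and get_eval_loader() here so they
    # build datasets from your own data directory, as noted above.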

Tracking

  • Evaluation on MOT17 half val

Run ByteTrack:

cd <ByteTrack_HOME>
python3 tools/track.py -f exps/example/mot/yolox_x_ablation.py -c pretrained/bytetrack_ablation.pth.tar -b 1 -d 1 --fp16 --fuse

You can get 76.6 MOTA using our pretrained model.

Run other trackers:

python3 tools/track_sort.py -f exps/example/mot/yolox_x_ablation.py -c pretrained/bytetrack_ablation.pth.tar -b 1 -d 1 --fp16 --fuse
python3 tools/track_deepsort.py -f exps/example/mot/yolox_x_ablation.py -c pretrained/bytetrack_ablation.pth.tar -b 1 -d 1 --fp16 --fuse
python3 tools/track_motdt.py -f exps/example/mot/yolox_x_ablation.py -c pretrained/bytetrack_ablation.pth.tar -b 1 -d 1 --fp16 --fuse
  • Test on MOT17

Run ByteTrack:

cd <ByteTrack_HOME>
python3 tools/track.py -f exps/example/mot/yolox_x_mix_det.py -c pretrained/bytetrack_x_mot17.pth.tar -b 1 -d 1 --fp16 --fuse
python3 tools/interpolation.py

Submit the txt files to MOTChallenge website and you can get 79+ MOTA (For 80+ MOTA, you need to carefully tune the test image size and high score detection threshold of each sequence).

  • Test on MOT20

We use input size 1600 x 896 for MOT20-04 and MOT20-07, and 1920 x 736 for MOT20-06 and MOT20-08. You can edit it in yolox_x_mix_mot20_ch.py.

Run ByteTrack:

cd <ByteTrack_HOME>
python3 tools/track.py -f exps/example/mot/yolox_x_mix_mot20_ch.py -c pretrained/bytetrack_x_mot20.pth.tar -b 1 -d 1 --fp16 --fuse --match_thresh 0.7 --mot20
python3 tools/interpolation.py

Submit the txt files to MOTChallenge website and you can get 77+ MOTA (For higher MOTA, you need to carefully tune the test image size and high score detection threshold of each sequence).

Applying BYTE to other trackers

See tutorials.

Combining BYTE with other detectors

Suppose you already have detection results 'dets' (x1, y1, x2, y2, score) from another detector; you can simply pass them to BYTETracker (you first need to modify some post-processing code in byte_tracker.py to match the format of your detection results):

from yolox.tracker.byte_tracker import BYTETracker

# args carries the tracking hyper-parameters, e.g. track_thresh,
# match_thresh, track_buffer and mot20.
tracker = BYTETracker(args)
for image in images:
   dets = detector(image)  # (N, 5) array: x1, y1, x2, y2, score
   # info_imgs holds the original image height/width; img_size is the
   # inference input size used to rescale boxes back.
   online_targets = tracker.update(dets, info_imgs, img_size)

You can get the tracking results of each frame from 'online_targets'. You can refer to mot_evaluators.py to pass the detection results to BYTETracker. A sketch of saving these results follows.
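
Continuing the snippet above, here is a hedged sketch of saving the per-frame results in the standard MOT txt format. The tlwh, track_id, and score attribute names mirror ByteTrack's STrack objects, but verify them against your version:

results = []
for frame_id, image in enumerate(images, 1):
    dets = detector(image)
    online_targets = tracker.update(dets, info_imgs, img_size)
    for t in online_targets:
        x, y, w, h = t.tlwh  # top-left x/y, width, height
        # MOT format: frame, id, x, y, w, h, score, -1, -1, -1
        results.append(f"{frame_id},{t.track_id},{x:.1f},{y:.1f},{w:.1f},"
                       f"{h:.1f},{t.score:.2f},-1,-1,-1\n")

with open("results.txt", "w") as f:
    f.writelines(results)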

Demo

cd <ByteTrack_HOME>
python3 tools/demo_track.py video -f exps/example/mot/yolox_x_mix_det.py -c pretrained/bytetrack_x_mot17.pth.tar --fp16 --fuse --save_result

Deploy

  1. ONNX export and ONNXRuntime
  2. TensorRT in Python
  3. TensorRT in C++
  4. ncnn in C++

Citation

@article{zhang2021bytetrack,
  title={ByteTrack: Multi-Object Tracking by Associating Every Detection Box},
  author={Zhang, Yifu and Sun, Peize and Jiang, Yi and Yu, Dongdong and Yuan, Zehuan and Luo, Ping and Liu, Wenyu and Wang, Xinggang},
  journal={arXiv preprint arXiv:2110.06864},
  year={2021}
}

@article{zhang2021fairmot,
  title={Fairmot: On the fairness of detection and re-identification in multiple object tracking},
  author={Zhang, Yifu and Wang, Chunyu and Wang, Xinggang and Zeng, Wenjun and Liu, Wenyu},
  journal={International Journal of Computer Vision},
  volume={129},
  pages={3069--3087},
  year={2021},
  publisher={Springer}
}

Acknowledgement

A large part of the code is borrowed from YOLOX, FairMOT, TransTrack and JDE-Cpp. Many thanks for their wonderful works.

Comments
  • The tracked metric is too low

    First of all, thank you for adding ReID to ByteTrack and making it open source. However, I have a few confusions when reproducing your experimental results. For now, the results are trained on half of MOT17 and tested on the other half of MOT17. I ran the following command: python3 tools/train.py -f exps/example/mot/yolox_x_mot17_half.py -d 1 -b 2 --fp16 -o -c pretrained/yolox_x.pth.tar. This is the result after 80 epochs of training (image). I also compared against the original ByteTrack trained on mot17_train_half and found that the ByteTrack_ReID result is not as high as the original; may I ask why that is (image)? Then I used these two weight files to evaluate on MOT17_val_half and got the following results (images). I did not get the same experimental results as you; may I ask where I went wrong? Looking forward to your reply.

    opened by yanghaibin-cool 14
  • track_id bug with fp16

    Hi. When targets is converted to FP16, the track_id loses precision, resulting in wrong labels for ReID. How can the track_id annotations be separated from the targets variable, keeping targets as torch.float16 (as in the current code) but track_id as torch.float32? I tried to modify it myself, but it didn't work. Looking forward to your update on this bug. Thank you.

    opened by yanghaibin-cool 7
  • Data Association

    @HanGuangXin
    Hello, sorry to bother you again! During data association, why are the high-score detection boxes that fail the first (motion + ReID) matching, and the tracks that fail to match any high-score detection, matched again using IoU (motion) only? What is the benefit of doing this? Looking forward to your reply.

    opened by yanghaibin-cool 6
  • Training Error

    @HanGuangXin Hi, thanks for sharing your codebase. I am having issues while training on MOT17 and MOT20; below is the error:

    Traceback (most recent call last):
      File "tools/train.py", line 122, in <module>
        args=(exp, args)
      File "/home/anaconda3/envs/p37/lib/python3.7/site-packages/yolox-0.1.0-py3.7-linux-x86_64.egg/yolox/core/launch.py", line 90, in launch
        main_func(*args)
      File "tools/train.py", line 100, in main
        trainer.train()
      File ".../yolox/core/trainer.py", line 77, in train
        self.train_in_epoch()
      File ".../yolox/core/trainer.py", line 86, in train_in_epoch
        self.train_in_iter()
      File ".../yolox/core/trainer.py", line 92, in train_in_iter
        self.train_one_iter()
      File ".../yolox/core/trainer.py", line 112, in train_one_iter
        self.scaler.scale(loss).backward()  # loss.backward
      File ".../torch/_tensor.py", line 307, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
      File ".../torch/autograd/__init__.py", line 156, in backward
        allow_unreachable=True, accumulate_grad=True)

    RuntimeError: CUDA error: device-side assert triggered
    CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1.

    Thanks in advance

    opened by abhigoku10 6
  • Got an error when evaluating and testing the model trained with this code

    Got an error when evaluating and testing the model trained with this code

    Thanks for this work. The training was successful; however, I got an error when I tried to run the demo (on video) and the evaluation (for performance metrics like MOTA, IDs, etc.).

    1. The following is an error when I do demo (on video)
    [warning] No nID got!!!
    2022-01-24 10:28:34.520 | INFO     | __main__:main:326 - Model Summary: Params: 104.65M, Gflops: 880.83
    2022-01-24 10:28:34.524 | INFO     | __main__:main:334 - loading checkpoint
    Traceback (most recent call last):
      File "tools/demo_track.py", line 372, in <module>
        main(exp, args)
      File "tools/demo_track.py", line 337, in main
        model.load_state_dict(ckpt["model"])
      File "C:\Users\admin\anaconda3\envs\bytetrack_reid\lib\site-packages\torch\nn\modules\module.py", line 1051, in load_state_dict
        raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
    RuntimeError: Error(s) in loading state_dict for YOLOX:
            size mismatch for head.reid_classifier.weight: copying a param with shape torch.Size([40, 128]) from checkpoint, the shape in current model is torch.Size([2, 128]).
            size mismatch for head.reid_classifier.bias: copying a param with shape torch.Size([40]) from checkpoint, the shape in current model is torch.Size([2]).
    
    

    Is any modification needed in the code? In [40, 128], 40 is the number of IDs trained in my model.

    2. The following is an error when I do evaluation
    File "c:\users\admin\desktop\bytetrack_Reid\yolox\core\launch.py", line 90, in launch
      main_func(*args)

    File "tools\track.py", line 220, in main
      *_, summary = evaluator.evaluate(

    File "c:\users\admin\desktop\bytetrack_Reid\yolox\evaluators\mot_evaluator.py", line 137, in evaluate
      frame_id = info_imgs[2].item()
                 └ [tensor([1080]), tensor([1545]), ['MOT20-04/img1/000001.jpg']]

    AttributeError: 'list' object has no attribute 'item'
    
    

    Thank you.

    opened by NaifahNurya 6
  • Training Issues

    @HanGuangXin I was able to train the model on one system; when I moved to another system and tried to set it up from the beginning, I am facing issues with data loading: it is not able to read the files from the dataset folder, and if I try to add them manually, self.nID is 0. Any idea how to solve this and how to make the training steps more elaborate?

    Thanks in advance

    opened by abhigoku10 5
  • Training MOT17 and MOT20 together

    @HanGuangXin thanks for sharing the codebase. I have a couple of queries: when I am training on a mixed dataset of MOT17 and MOT20, I get an error. Should I use the same script you provided, or should changes be made in the code?

    Thanks in advance

    opened by abhigoku10 4
  • Pretrained Model

    @HanGuangXin thanks for your work and for sharing it as open source.

    1. Can you please share the pretrained model for person ReID?
    2. Is there an inference pipeline to check the shared pretrained model?
    3. Can we extend this work to vehicle ReID as well? If so, what changes have to be made to the current source code?

    Thanks in advance

    opened by abhigoku10 3
  • Runtime error

    Hello, a quick question: I run python3 tools/train.py -f exps/example/mot/yolox_x_ablation.py -d 3 -b 4 --fp16 -o -c pretrained/yolox_x.pth, the same command as for ByteTrack. ByteTrack runs normally, but ByteTrack_ReID reports an error, even after re-running setup. Is this version of the code expected to run correctly? (image)

    opened by YmanChris 3
  • Training model

    I admire your work. Can you provide the complete mixed-training model with ReID? Not the mot17_half training model, but the training model used for the MOT17 test set. Thank you very much!

    opened by Gaoxinyue423 2
  • reproducing with yolox_s_mot17_half.py

    @HanGuangXin Sorry to bother you again! I have finished training with yolox_s_mot17_half (self.train_ann=train_half.json; self.val_ann=val_half.json). However, I didn't get the desired result (YOLOX_S with mAP 55.5, MOTA 71.8 and IDF1 73.8). Below are my results. Bytetrack-Reid with train.py: (image). Bytetrack-Reid with track.py, COCO indicator: (image), MOT indicator: (image). Bytetrack with train.py: (image). Bytetrack with track.py, COCO indicator: (image), MOT indicator: (image). Why is the tracking performance worse after adding ReID? Where am I going wrong? Looking forward to your reply.

    opened by yanghaibin-cool 2
  • About training with the ByteTrack_ReID model

    Thank you for the awesome contributions. Here are some questions about the repository:

    1. Is it reasonable to add a ReID branch to the YOLOX model? YOLOX has 3 hierarchies with downsample ratios of 8, 16, 32. From my understanding, the larger the downsample ratio, the more uncertain the ID features we get. ~~About nID when training the CrowdHuman dataset: the original FairMOT makes the output classes equal to the total number of IDs in the dataset, which is a large number, so the ReID training is hard to control. Furthermore, the performance of matching with ID features is worse than using detection results alone. Can you share some evaluation results?~~
    2. I am training the model on the MOT20 dataset generated by convert_mot20_to_coco.py. When I started training, I met an error in the loss backward. I finally resolved it by increasing the total ID count as follows: total_ids = max(max_id_each_img) + 1 + 1 # TODO Need Check: ids start with 0. Though it now runs successfully, it is curious why the ID count should be increased by 2 instead of 1. The original FairMOT does the same.
    3. Not using ID features in demo_track.py. ~~After training the model on the CrowdHuman dataset, I tested a video with demo_track.py and found that the tracker is ByteTracker, which is not related to the ID features. So what is the right way to test a custom video with the trained model and ID features?~~ Just replace Bytetrack with Bytetrack_fairmot and modify some code, but the results I got are not as good as I expected. I would appreciate it if anyone can help me. Thank you in advance.
    opened by zengjie617789 0
Owner
Han GuangXin, Master student in IIAU lab of DLUT.