Real-time multi-object tracker using YOLOv5 and DeepSort

Overview

Yolov5 + Deep Sort with PyTorch



Introduction

This repository contains a two-stage tracker. The detections generated by YOLOv5, a family of object detection architectures and models pretrained on the COCO dataset, are passed to a Deep Sort algorithm which tracks the objects. It can track any object that your Yolov5 model was trained to detect.

Tutorials

Before you run the tracker

  1. Clone the repository recursively:

git clone --recurse-submodules https://github.com/mikel-brostrom/Yolov5_DeepSort_Pytorch.git

If you already cloned and forgot to use --recurse-submodules, you can run git submodule update --init

  2. Make sure that you fulfill all the requirements: Python 3.8 or later with all requirements.txt dependencies installed, including torch>=1.7. To install, run:

pip install -r requirements.txt

Tracking sources

Tracking can be run on most video formats

python3 track.py --source ... --show-vid  # show live inference results as well
  • Video: --source file.mp4
  • Webcam: --source 0
  • RTSP stream: --source rtsp://170.93.143.139/rtplive/470011e600ef003a004ee33696235daa
  • HTTP stream: --source http://wmccpinetop.axiscam.net/mjpg/video.mjpg

Select a Yolov5 family model

There is a clear trade-off between model inference speed and accuracy. To meet your inference speed/accuracy needs, you can select any Yolov5 family model; it will be downloaded automatically:

python3 track.py --source 0 --yolo_weights yolov5s.pt --img 640  # smallest yolov5 family model
python3 track.py --source 0 --yolo_weights yolov5x6.pt --img 1280  # largest yolov5 family model

Filter tracked classes

By default the tracker tracks all MS COCO classes.

If you only want to track persons, I recommend using these weights for increased performance:

python3 track.py --source 0 --yolo_weights yolov5/weights/crowdhuman_yolov5m.pt --classes 0  # tracks persons, only

If you want to track a subset of the MS COCO classes, add their corresponding indices after the classes flag:

python3 track.py --source 0 --yolo_weights yolov5s.pt --classes 15 16  # tracks cats and dogs, only

Here is a list of all the possible objects that a Yolov5 model trained on MS COCO can detect. Notice that the indexing for the classes in this repo starts at zero.

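If you are unsure which index corresponds to which class, you can query the class names from a Yolov5 model loaded through torch hub. A minimal sketch (assumes an internet connection for the automatic model download):

import torch

# Load a COCO-pretrained Yolov5 model and build a name -> index lookup.
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')
names = model.names  # dict or list of class names, depending on the yolov5 version
names = names if isinstance(names, dict) else dict(enumerate(names))
name_to_idx = {name: idx for idx, name in names.items()}
print(name_to_idx['person'], name_to_idx['cat'], name_to_idx['dog'])  # indices to pass to --classes
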
MOT compliant results

Can be saved to inference/output by

python3 track.py --source ... --save-txt

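Each line of the saved txt follows the space-separated MOT layout written by track.py (frame, id, bbox left, bbox top, width, height, followed by placeholder values). A minimal parsing sketch, assuming that layout:

from collections import defaultdict

def load_mot_txt(path):
    """Group the space-separated MOT results by track id."""
    tracks = defaultdict(list)  # id -> list of (frame, left, top, width, height)
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) < 6:
                continue
            frame, tid, left, top, w, h = (float(v) for v in fields[:6])
            tracks[int(tid)].append((int(frame), left, top, w, h))
    return tracks
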
Cite

If you find this project useful in your research, please consider citing:

@misc{yolov5deepsort2020,
    title={Real-time multi-object tracker using YOLOv5 and deep sort},
    author={Mikel Broström},
    howpublished = {\url{https://github.com/mikel-brostrom/Yolov5_DeepSort_Pytorch}},
    year={2020}
}
Comments
  • How to increase DeepSort speed on embedded device?

    I am trying to implement this algorithm on an embedded system, and the DeepSort component is much slower than the YOLO detector. Is it possible to run DeepSort at regular intervals instead of on every frame?

    question Stale 
    opened by HeChengHui 26
  • MOT16_eval

    Thank you very much for your reply. I would like to know how to run the multi-object evaluation metrics to get MOT16 evaluation results (such as MOTA).

    opened by fengshuaibo 26
  • I used two different model weights, and I ran the eval.sh file. I got the same two evaluations

    Evaluating ch_yolov5m_deep_sort

    MotChallenge2DBox.get_raw_seq_data(ch_yolov5m_deep_sort, MOT16-02)     0.3986 sec
    MotChallenge2DBox.get_preprocessed_seq_data(pedestrian)                0.4142 sec
    CLEAR.eval_sequence()                                                  0.1181 sec
    Identity.eval_sequence()                                               0.0189 sec
    Count.eval_sequence()                                                  0.0000 sec
    

    1 eval_sequence(MOT16-02, ch_yolov5m_deep_sort)                        0.9544 sec
    MotChallenge2DBox.get_raw_seq_data(ch_yolov5m_deep_sort, MOT16-04)     1.4155 sec
    MotChallenge2DBox.get_preprocessed_seq_data(pedestrian)                0.8502 sec
    CLEAR.eval_sequence()                                                  0.2638 sec
    Identity.eval_sequence()                                               0.0354 sec
    Count.eval_sequence()                                                  0.0000 sec
    2 eval_sequence(MOT16-04, ch_yolov5m_deep_sort)                        2.5745 sec
    MotChallenge2DBox.get_raw_seq_data(ch_yolov5m_deep_sort, MOT16-05)     0.1946 sec
    MotChallenge2DBox.get_preprocessed_seq_data(pedestrian)                0.4567 sec
    CLEAR.eval_sequence()                                                  0.1318 sec
    Identity.eval_sequence()                                               0.0214 sec
    Count.eval_sequence()                                                  0.0000 sec
    3 eval_sequence(MOT16-05, ch_yolov5m_deep_sort)                        0.8120 sec
    MotChallenge2DBox.get_raw_seq_data(ch_yolov5m_deep_sort, MOT16-09)     0.1831 sec
    MotChallenge2DBox.get_preprocessed_seq_data(pedestrian)                0.3084 sec
    CLEAR.eval_sequence()                                                  0.0786 sec
    Identity.eval_sequence()                                               0.0093 sec
    Count.eval_sequence()                                                  0.0000 sec
    4 eval_sequence(MOT16-09, ch_yolov5m_deep_sort)                        0.5832 sec
    MotChallenge2DBox.get_raw_seq_data(ch_yolov5m_deep_sort, MOT16-10)     0.2463 sec
    MotChallenge2DBox.get_preprocessed_seq_data(pedestrian)                0.3749 sec
    CLEAR.eval_sequence()                                                  0.1235 sec
    Identity.eval_sequence()                                               0.0108 sec
    Count.eval_sequence()                                                  0.0000 sec
    5 eval_sequence(MOT16-10, ch_yolov5m_deep_sort)                        0.7594 sec
    MotChallenge2DBox.get_raw_seq_data(ch_yolov5m_deep_sort, MOT16-11)     0.2213 sec
    MotChallenge2DBox.get_preprocessed_seq_data(pedestrian)                0.4973 sec
    CLEAR.eval_sequence()                                                  0.1329 sec
    Identity.eval_sequence()                                               0.0162 sec
    Count.eval_sequence()                                                  0.0000 sec
    6 eval_sequence(MOT16-11, ch_yolov5m_deep_sort)                        0.8738 sec
    MotChallenge2DBox.get_raw_seq_data(ch_yolov5m_deep_sort, MOT16-13)     0.2492 sec
    MotChallenge2DBox.get_preprocessed_seq_data(pedestrian)                0.4239 sec
    CLEAR.eval_sequence()                                                  0.1154 sec
    Identity.eval_sequence()                                               0.0150 sec
    Count.eval_sequence()                                                  0.0000 sec
    7 eval_sequence(MOT16-13, ch_yolov5m_deep_sort)                        0.8082 sec

    All sequences for ch_yolov5m_deep_sort finished in 7.37 seconds

    CLEAR: ch_yolov5m_deep_sort-pedestrian    MOTA MOTP MODA CLR_Re CLR_Pr MTR PTR MLR sMOTA CLR_TP CLR_FN CLR_FP IDSW MT PT ML Frag
    MOT16-02 40.677 91.743 40.778 41.317 98.714 20.37 40.741 38.889 37.266 7368 10465 96 18 11 22 21 19
    MOT16-04 65.656 90.874 65.685 65.797 99.831 34.94 36.145 28.916 59.651 31291 16266 53 14 29 30 24 18
    MOT16-05 55.471 85.627 55.749 62.027 90.81 27.2 49.6 23.2 46.556 4229 2589 428 19 34 62 29 30
    MOT16-09 74.035 89.755 74.13 76.622 96.85 56 40 4 66.185 4028 1229 131 5 14 10 1 6
    MOT16-10 62.088 85.56 62.34 66.935 93.576 40.741 46.296 12.963 52.422 8245 4073 566 31 22 25 7 78
    MOT16-11 64.214 92.027 64.312 66.961 96.195 27.536 49.275 23.188 58.876 6143 3031 243 9 19 34 16 14
    MOT16-13 51.729 87.662 51.956 53.878 96.557 28.972 40.187 30.841 45.082 6169 5281 220 26 31 43 33 34
    COMBINED 59.429 89.735 59.54 61.113 97.49 30.948 43.714 25.338 53.156 67473 42934 1737 122 160 226 131 199

    Identity: ch_yolov5m_deep_sort-pedestrian    IDF1 IDR IDP IDTP IDFN IDFP
    MOT16-02 50.045 35.496 84.807 6330 11503 1134
    MOT16-04 75.348 62.504 94.835 29725 17832 1619
    MOT16-05 63.808 53.696 78.613 3661 3157 996
    MOT16-09 77.889 69.755 88.17 3667 1590 492
    MOT16-10 69.052 59.222 82.794 7295 5023 1516
    MOT16-11 70.334 59.647 85.687 5472 3702 914
    MOT16-13 58.972 45.939 82.329 5260 6190 1129
    COMBINED 68.379 55.621 88.73 61410 48997 7800

    Count: ch_yolov5m_deep_sort-pedestrian    Dets GT_Dets IDs GT_IDs
    MOT16-02 7464 17833 47 54
    MOT16-04 31344 47557 72 83
    MOT16-05 4657 6818 76 125
    MOT16-09 4159 5257 22 25
    MOT16-10 8811 12318 58 54
    MOT16-11 6386 9174 56 69
    MOT16-13 6389 11450 73 107
    COMBINED 69210 110407 404 517

    Timing analysis:
    MotChallenge2DBox.get_raw_seq_data              2.9086 sec
    MotChallenge2DBox.get_preprocessed_seq_data     3.3256 sec
    CLEAR.eval_sequence                             0.9641 sec
    Identity.eval_sequence                          0.1270 sec
    Count.eval_sequence                             0.0000 sec
    eval_sequence                                   7.3654 sec
    Evaluator.evaluate                              7.3683 sec

    opened by liang-jingyi 22
  • Fixes to kalman filter and implementation for adaptive Q and R noise covariance estimation

    Reopening since it seemed to get some attention. Rebased onto the latest master; I am not aware of any other changes to the repo. Please let me know.

    Fixed the noise covariance matrices so that they no longer vary with bounding box location. Changed the prediction delta time from a constant 1 second to a value based on the actual prediction frequency, which should increase performance.

    Implemented an adaptive Kalman filter for Q and R estimation, based on this article.

    In my experiments, I found that these additions gave better results in a practical scenario. When tracking something, you usually want to take the delta time between Kalman updates into account. The changes also remove the need to tune the filter for the optimal Q and R noise matrix parameters, which should hopefully give better results in the end.

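    For reference, innovation-based adaptation of Q and R typically looks like the sketch below (a simplified, hypothetical version with a forgetting factor alpha; the actual PR follows the linked article and may differ in detail):

    import numpy as np

    def adaptive_kf_step(x, P, z, F, H, Q, R, alpha=0.3):
        """One Kalman predict/update step with innovation-based adaptation of Q and R."""
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Update
        d = z - H @ x                      # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ d
        P = (np.eye(len(x)) - K @ H) @ P
        # Adapt the noise covariances from the residual and the innovation
        eps = z - H @ x                    # post-fit residual
        R = alpha * R + (1 - alpha) * (np.outer(eps, eps) + H @ P @ H.T)
        Q = alpha * Q + (1 - alpha) * (K @ np.outer(d, d) @ K.T)
        return x, P, Q, R
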
    opened by henriksod 20
  • Deepsort tracking almost uses the entire CPU memory

    Hey, a clarification: while running detection over a video, I see that my entire CPU memory is being used. I am not able to run it on multiple threads as that leads to slowness. Did anyone face this issue? Any help would be appreciated.

    bug help wanted 
    opened by fareed945 20
  • Appearance cost has no effect

    In the DeepSort tracker:

    # Now Compute the Appearance-based Cost Matrix
    app_cost = self.metric.distance(
        np.array([dets[i].feature for i in detection_indices]),
        np.array([tracks[i].track_id for i in track_indices]),
    )

    Why does line 121 fetch the track_id instead of the feature of the track?

    It seems to be wrong, since the appearance cost always comes out way higher than the threshold. Edit: the problem is not in the track_id, but in the distance function. See the comments below.

    bug 
    opened by mpandoko 19
  • Numbers skip frequently and don't follow an order.

    Thank you for your great work. I have a question: why are the IDs assigned without any order? I think some objects are detected and given an ID but are not tracked, so some IDs are not displayed. I am not sure why this is happening. Any help is much appreciated. Thanks.

    opened by tiancola 19
  • WINDOWS: No URL associated to the chosen DeepSort weights.

    Greetings. I encountered a small error (rather, my knowledge is not enough). When starting the program, the following error occurs: "No URL associated to the chosen DeepSort weights. Choose between:". How do I choose, and where? Thanks a lot in advance!

    question 
    opened by M-Key4151 18
  • How to evaluate on custom tracking dataset?

    Hi,

    I have a YOLOv5 model, a video and its ground truth. I would like to evaluate YOLOv5 StrongSORT on this video, but I do not know how to do it. Is there a tutorial, or can someone explain it to me?

    Thank you in advance.

    enhancement question 
    opened by dariogonle 17
  • strong_sort weight file read problem

    I was unable to download the strong_sort weights file online, so I downloaded the weights manually from the model zoo. However, after placing the downloaded weights in the corresponding folder, the code cannot read them and still performs the online download. Is it because the weight format is .pth? How can I solve this?

    question 
    opened by jklkid 17
  • How to eval on MOT16

             IDF1 IDP  IDR Rcll Prcn GT MT PT ML FP FN   IDs FM MOTA MOTP IDt IDa IDm
    MOT16-09 0.0% NaN 0.0% 0.0% NaN  25 0  0  25 0  5257 0   0  0.0% NaN  0   0   0
    OVERALL  0.0% NaN 0.0% 0.0% NaN  25 0  0  25 0  5257 0   0  0.0% NaN  0   0   0

    I wrote the code following yolov3_deepsort; that code can produce results, but here I get the results above. Did you run this code on MOT16?

    opened by TaylorWillber 17
  • Output track.py different from detect.py yolov5 runs

    Hello,

    I want to detect floating plastics on the water surface. Detection with yolov5 alone works quite well. The problem is that I want to count the passing objects and remove the duplicate counts. When running the track.py script, there are many detections missing that were detected by the detect.py script of the regular yolov5:

    image

    Any idea what might cause this difference & how to fix it?

    question 
    opened by jurvanwijk 50
  • Tracker losing tracked object after collision

    Hello! I need help understanding why StrongSORT loses an object. Any suggestion would be appreciated. Thank you!

    image

    1. Reassigning a tracked object's current object_id to a newly detected object

    image

    2. Losing tracks after collision with another object

    3. Also, I have tried hiding behind a wall for a couple of seconds, and after every reappearance on camera the tracker assigns a new object_id, so person re-id is not working well. (I was facing the camera both before hiding behind the wall and when reappearing.)
    question 
    opened by aqua1907 2
  • Adding Bot sort Tracker

    As discussed in the previous PR, the BoT-SORT tracker is implemented using the ReID architectures already used in the repo. I have also made some changes to the README.md file for the installation of cython_bbox.

    Demo video:

    https://user-images.githubusercontent.com/82194525/209459332-e1c74fca-25b6-4b1f-8cdb-be8338b40777.mp4

    opened by Mohit-robo 20
  • Save segments in save_text output

    Hello, I found this repo super helpful and very straightforward to use! Thank you for the amazing work on this repo!

    My Question:

    I want to get the segmentation masks of the tracked objects in the output text file. After examining your code for a while, I came to the conclusion that track.py draws the boxes and populates the save_text file using the boxes from the tracker output, while it draws the masks from the yolo model output. Lines that draw the masks: here.

    # Mask plotting
    annotator.masks(
        masks,
        colors=[colors(x, True) for x in det[:, 5]],
        im_gpu=torch.as_tensor(im0, dtype=torch.float16).to(device).permute(2, 0, 1).flip(0).contiguous() /
        255 if retina_masks else im[i]
    )
    

    lines to draw boxes: here.

    if save_txt:
        # to MOT format
        bbox_left = output[0]
        bbox_top = output[1]
        bbox_w = output[2] - output[0]
        bbox_h = output[3] - output[1]
        # Write MOT compliant results to file
        with open(txt_path + '.txt', 'a') as f:
            f.write(('%g ' * 10 + '\n') % (frame_idx + 1, id, bbox_left,  # MOT format
                                           bbox_top, bbox_w, bbox_h, -1, -1, -1, i))
    
    if save_vid or save_crop or show_vid:  # Add bbox to image
        c = int(cls)  # integer class
        id = int(id)  # integer id
        label = None if hide_labels else (f'{id} {names[c]}' if hide_conf else \
            (f'{id} {conf:.2f}' if hide_class else f'{id} {names[c]} {conf:.2f}'))
        color = colors(c, True)
        annotator.box_label(bboxes, label, color=color)
    
        if save_trajectories and tracking_method == 'strongsort':
            q = output[7]
            tracker_list[i].trajectory(im0, q, color=color)
        if save_crop:
            txt_file_name = txt_file_name if (isinstance(path, list) and len(path) > 1) else ''
            save_one_box(bboxes, imc, file=save_dir / 'crops' / txt_file_name / names[c] / f'{id}' / f'{p.stem}.jpg', BGR=True)
    

    If my understanding is correct, the output of the tracker is used to:

    1. draw bboxes, and
    2. save the text file when --save_text is added,

    while the yolo model output is used to draw the masks.

    Since my goal is to get an output text file that contains the segments of the tracked objects, I thought of trying to match the yolo segments to the tracker's output and then writing the matched results to an output text file.

    But I've also come to the conclusion that not every object from the detector has to be tracked, so for a single video frame not all detected objects will necessarily have an associated tracking output. (Kindly correct me if I'm wrong about this.)

    Do you have any suggestions for how to output a text file that contains the segments of the tracked objects?

    thanks in advance!

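    For what it is worth, the matching idea described above could be sketched roughly like this (hypothetical variable names; in practice the boxes come from the tracker output and the boxes/masks from the yolo detections in track.py):

    import numpy as np

    def box_iou(a, b):
        """IoU of two boxes in (x1, y1, x2, y2) format."""
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter + 1e-9)

    def match_masks_to_tracks(track_boxes, track_ids, det_boxes, det_masks, iou_thres=0.5):
        """Attach each tracked id to the mask of its best-overlapping detection."""
        matched = {}
        for t_box, t_id in zip(track_boxes, track_ids):
            ious = [box_iou(t_box, d_box) for d_box in det_boxes]
            if ious and max(ious) >= iou_thres:
                matched[t_id] = det_masks[int(np.argmax(ious))]
        return matched
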
    enhancement question 
    opened by Abd-elr4hman 3
  • Strong-OCSort

    I have implemented Strong-OCSort, which is a combination between StrongSort and OCSort.

    StrongOCSort performs association in 3 steps.

    For all detections with confidence above a detection threshold:

    1. Associate using feature matching (StrongSort)
    2. Associate using trajectory matching (OCSort)

    (Optional) For all detections with confidence below the detection threshold:

    3. Associate using byte association

    I have also implemented a resurrection system. This is my attempt to cache features of tracks that have "died". In case a detection with a similar feature shows up again in a sequence (lady with a red shirt in MOT17-05 for example), it should create a track of the same ID as the one that previously "died". This system is not solving this issue at the moment, but I left the code in together with a parameter to toggle the system on and off (default off).

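    A stripped-down, hypothetical sketch of what such a resurrection cache could look like (cosine distance on unit-normalised features; the actual code in this PR differs):

    import numpy as np

    class ResurrectionCache:
        """Remember appearance features of dead tracks and hand their ids back."""

        def __init__(self, max_cos_dist=0.25):
            self.max_cos_dist = max_cos_dist
            self.dead = {}  # track id -> last appearance feature (unit norm)

        def bury(self, track_id, feature):
            self.dead[track_id] = feature / (np.linalg.norm(feature) + 1e-9)

        def try_resurrect(self, feature):
            """Return a dead track id whose feature is close enough, else None."""
            feature = feature / (np.linalg.norm(feature) + 1e-9)
            best_id, best_dist = None, self.max_cos_dist
            for tid, f in self.dead.items():
                dist = 1.0 - float(feature @ f)
                if dist < best_dist:
                    best_id, best_dist = tid, dist
            if best_id is not None:
                del self.dead[best_id]
            return best_id
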
    Eval results (can be tuned for better results), based on detector weights crowdhuman_yolov5m.pt

    HOTA: exp15-pedestrian             HOTA      DetA      AssA      DetRe     DetPr     AssRe     AssPr     LocA      RHOTA     HOTA(0)   LocA(0)   HOTALocA(0)
    MOT17-04-FRCNN                     60.825    59.138    63.116    64.129    76.426    67.275    80.735    81.156    63.564    80.714    75.474    60.918
    MOT17-05-FRCNN                     39.572    38.377    40.885    40.461    77.312    50.886    65.91     82.316    40.667    50.308    78.129    39.305
    MOT17-09-FRCNN                     57.6      61.734    53.789    66.385    82.209    57.95     82.977    86.121    59.744    70.633    82.976    58.609
    MOT17-10-FRCNN                     50.489    50.549    50.58     53.689    76.855    54.478    79.142    81.023    52.106    66.309    76.676    50.843
    MOT17-11-FRCNN                     63.41     60.258    66.942    70.522    75.302    73.393    83.735    86.99     68.697    75.341    83.895    63.207
    MOT17-13-FRCNN                     46.94     42.853    51.795    46.001    74.34     56.438    77.363    80.765    48.761    60.994    75.77     46.216
    COMBINED                           56.889    54.669    59.764    59.472    76.522    64.677    80.779    82.155    59.525    73.537    77.147    56.731
    
    CLEAR: exp15-pedestrian            MOTA      MOTP      MODA      CLR_Re    CLR_Pr    MTR       PTR       MLR       sMOTA     CLR_TP    CLR_FN    CLR_FP    IDSW      MT        PT        ML        Frag
    MOT17-04-FRCNN                     70.095    78.644    70.263    77.086    91.868    50.602    36.145    13.253    53.632    36660     10897     3245      80        42        30        11        441
    MOT17-05-FRCNN                     45.829    79.61     46.697    49.516    94.613    16.541    57.895    25.564    35.733    3425      3492      195       60        22        77        34        168
    MOT17-09-FRCNN                     69.164    84.578    69.746    75.249    93.186    46.154    53.846    0         57.559    4007      1318      293       31        12        14        0         67
    MOT17-10-FRCNN                     63.066    77.822    63.595    66.726    95.518    33.333    56.14     10.526    48.267    8567      4272      402       68        19        32        6         558
    MOT17-11-FRCNN                     65.112    85.561    65.441    79.546    84.938    45.333    40        14.667    53.627    7506      1930      1331      31        34        30        11        104
    MOT17-13-FRCNN                     52.843    77.367    53.548    57.713    93.268    30.909    40.909    28.182    39.781    6719      4923      485       82        34        45        31        278
    COMBINED                           64.643    79.592    65.019    71.369    91.829    33.678    47.107    19.215    50.078    66884     26832     5951      352       163       228       93        1616
    
    Identity: exp15-pedestrian         IDF1      IDR       IDP       IDTP      IDFN      IDFP
    MOT17-04-FRCNN                     76.335    70.194    83.654    33382     14175     6523
    MOT17-05-FRCNN                     53.753    40.943    78.232    2832      4085      788
    MOT17-09-FRCNN                     70.919    64.094    79.372    3413      1912      887
    MOT17-10-FRCNN                     68.204    57.925    82.919    7437      5402      1532
    MOT17-11-FRCNN                     74.449    72.086    76.972    6802      2634      2035
    MOT17-13-FRCNN                     62.602    50.67     81.885    5899      5743      1305
    COMBINED                           71.768    63.772    82.055    59765     33951     13070
    
    Count: exp15-pedestrian            Dets      GT_Dets   IDs       GT_IDs
    MOT17-04-FRCNN                     39905     47557     140       83
    MOT17-05-FRCNN                     3620      6917      117       133
    MOT17-09-FRCNN                     4300      5325      54        26
    MOT17-10-FRCNN                     8969      12839     105       57
    MOT17-11-FRCNN                     8837      9436      153       75
    MOT17-13-FRCNN                     7204      11642     141       110
    COMBINED                           72835     93716     710       484
    

    Previews (MOT17-05):

    Without trajectories: strong_ocsort

    With trajectories : strong_ocsort_trajectories

    Blue circles are feature-matched tracks, green circles are trajectory-matched tracks, and white circles are unmatched tracks. No byte association was seen in this sequence (it should seldom happen for a large enough detector model and will happen more often for smaller detection models).

    If the resurrection system is turned on, you will see resurrected tracks as purple circles in the trajectory plot.

    opened by henriksod 2
  • Implemented prediction if no detections present

    Implemented predictions if no detections come from yolo

    HOTA: exp-pedestrian               HOTA      DetA      AssA      DetRe     DetPr     AssRe     AssPr     LocA      RHOTA     HOTA(0)   LocA(0)   HOTALocA(0)
    MOT17-04-FRCNN                     61.223    59.34     63.701    64.651    75.783    68.253    80.509    80.925    64.126    81.961    75.043    61.506
    MOT17-05-FRCNN                     42.367    40.82     44.104    44.135    73.81     54.626    65.858    81.502    44.109    54.935    76.472    42.01
    MOT17-09-FRCNN                     58.03     61.61     54.702    67.461    80.114    60.43     79.769    85.656    60.737    72.13     81.815    59.013
    MOT17-10-FRCNN                     51.628    52.627    50.836    56.976    74.417    55.733    76.026    80.314    53.812    69.267    75.347    52.19
    MOT17-11-FRCNN                     62.068    58.834    65.678    71.629    71.774    72.308    83.066    86.635    68.58     74.617    83.102    62.008
    MOT17-13-FRCNN                     47.269    44.033    51.147    48.209    71.661    57.14     74.165    80.062    49.603    62.66     74.508    46.687
    COMBINED                           57.254    55.179    59.953    60.905    74.836    65.477    79.587    81.75     60.347    74.957    76.376    57.249
    
    CLEAR: exp-pedestrian              MOTA      MOTP      MODA      CLR_Re    CLR_Pr    MTR       PTR       MLR       sMOTA     CLR_TP    CLR_FN    CLR_FP    IDSW      MT        PT        ML        Frag
    MOT17-04-FRCNN                     69.87     78.439    70.002    77.656    91.028    50.602    33.735    15.663    53.126    36931     10626     3640      63        42        28        13        376
    MOT17-05-FRCNN                     47.636    78.725    48.605    54.2      90.643    24.06     60.15     15.789    36.105    3749      3168      387       67        32        80        21        153
    MOT17-09-FRCNN                     69.446    84.359    70.197    77.202    91.682    53.846    46.154    0         57.371    4111      1214      373       40        14        12        0         70
    MOT17-10-FRCNN                     65.558    76.879    66.189    71.376    93.225    31.579    61.404    7.0175    49.055    9164      3675      666       81        18        35        4         331
    MOT17-11-FRCNN                     61.615    85.232    61.944    80.871    81.034    49.333    37.333    13.333    49.672    7631      1805      1786      31        37        28        10        96
    MOT17-13-FRCNN                     53.659    76.647    54.338    60.806    90.386    31.818    40.909    27.273    39.459    7079      4563      753       79        35        45        30        221
    COMBINED                           64.769    79.171    65.154    73.269    90.029    36.777    47.107    16.116    49.508    68665     25051     7605      361       178       228       78        1247
    
    Identity: exp-pedestrian           IDF1      IDR       IDP       IDTP      IDFN      IDFP
    MOT17-04-FRCNN                     77.367    71.685    84.028    34091     13466     6480
    MOT17-05-FRCNN                     57.251    45.742    76.499    3164      3753      972
    MOT17-09-FRCNN                     71.506    65.859    78.211    3507      1818      977
    MOT17-10-FRCNN                     69.249    61.134    79.847    7849      4990      1981
    MOT17-11-FRCNN                     73.24     73.167    73.314    6904      2532      2513
    MOT17-13-FRCNN                     63.695    53.273    79.188    6202      5440      1630
    COMBINED                           72.614    65.855    80.919    61717     31999     14553
    
    Count: exp-pedestrian              Dets      GT_Dets   IDs       GT_IDs
    MOT17-04-FRCNN                     40571     47557     144       83
    MOT17-05-FRCNN                     4136      6917      117       133
    MOT17-09-FRCNN                     4484      5325      53        26
    MOT17-10-FRCNN                     9830      12839     105       57
    MOT17-11-FRCNN                     9417      9436      161       75
    MOT17-13-FRCNN                     7832      11642     134       110
    COMBINED                           76270     93716     714       484
    
    Stale 
    opened by henriksod 1
Releases(v8.0)
  • v8.0(Nov 30, 2022)

    The goal of this release is to transform the repo into a user-friendly tracking experiment platform by adding different tracking methods and evaluation support for different MOT datasets. I will continue adding SOTA tracking methods as they come out.

    Important updates

    • Tracking method selection
      • OCSORT added as a tracking option
      • ByteTrack added as a tracking option
    • Added evaluation support for MOT17 & MOT20
    • Added custom tracking dataset evaluation tutorial
    • Added to CI:
      • Model export testing
      • Tracking with exported model testing
      • Tracking with OCSORT testing
      • Tracking with ByteTrack testing
    • Less bloated README
    • Evaluation on specific GPUs and CPU
    • Update to Yolov5 v7
    • --vid-stride to process every nth frame now available
    • Experiment results:

    MOT17-train evaluation results

    The hyperparameters used for evaluation can be found under val.py. Notice that none of the models used during the evaluation has ever seen any of the MOT17 data and that our object detection model is a modest Yolov5m.

    HOTA: exp105-pedestrian            HOTA      DetA      AssA      DetRe     DetPr     AssRe     AssPr     LocA      RHOTA     HOTA(0)   LocA(0)   HOTALocA(0)
    MOT17-04-ss                        60.908    59        63.405    64.121    76.02     68.898    78.923    80.981    63.714    81.25     75.158    61.066    
    MOT17-05-ss                        40.252    39.213    41.436    42.163    74.171    52.461    63.544    81.546    41.789    51.945    76.574    39.777    
    MOT17-09-ss                        56.907    60.309    53.739    65.712    80.477    59.512    79.408    85.724    59.415    70.643    81.881    57.843    
    MOT17-10-ss                        50.853    50.201    51.696    54.009    74.875    56.683    76.058    80.464    52.834    67.831    75.599    51.28     
    MOT17-11-ss                        62.797    58.699    67.396    70.483    72.798    74.55     82.031    86.688    68.912    75.266    83.145    62.58     
    MOT17-13-ss                        46.722    43.064    51.08     46.897    72.133    56.866    74.246    80.146    48.893    61.802    74.62     46.116    
    COMBINED                           56.852    54.364    60.008    59.706    75.249    66.14     78.487    81.823    59.774    74.144    76.496    56.717    
    
    CLEAR: exp105-pedestrian           MOTA      MOTP      MODA      CLR_Re    CLR_Pr    MTR       PTR       MLR       sMOTA     CLR_TP    CLR_FN    CLR_FP    IDSW      MT        PT        ML        Frag      
    MOT17-04-ss                        69.718    78.487    69.847    77.097    91.404    50.602    33.735    15.663    53.133    36665     10892     3448      61        42        28        13        549       
    MOT17-05-ss                        45.757    78.793    46.668    51.757    91.048    19.549    63.158    17.293    34.781    3580      3337      352       63        26        84        23        249       
    MOT17-09-ss                        67.925    84.484    68.732    75.192    92.088    50        50        0         56.258    4004      1321      344       43        13        13        0         141       
    MOT17-10-ss                        62.575    77.025    63.206    67.669    93.813    28.07     61.404    10.526    47.028    8688      4151      573       81        16        35        6         663       
    MOT17-11-ss                        61.753    85.318    62.081    79.451    82.06     46.667    40        13.333    50.088    7497      1939      1639      31        35        30        10        200       
    MOT17-13-ss                        52.534    76.716    53.247    59.131    90.95     30        41.818    28.182    38.766    6884      4758      685       83        33        46        31        321       
    COMBINED                           63.933    79.251    64.319    71.832    90.531    34.091    48.76     17.149    49.028    67318     26398     7041      362       165       236       83        2123      
    
    Identity: exp105-pedestrian        IDF1      IDR       IDP       IDTP      IDFN      IDFP      
    MOT17-04-ss                        77.326    71.274    84.501    33896     13661     6217      
    MOT17-05-ss                        54.125    42.446    74.669    2936      3981      996       
    MOT17-09-ss                        70.485    64.019    78.404    3409      1916      939       
    MOT17-10-ss                        69.701    59.989    83.166    7702      5137      1559      
    MOT17-11-ss                        74.445    73.262    75.668    6913      2523      2223      
    MOT17-13-ss                        63.099    52.062    80.077    6061      5581      1508      
    COMBINED                           72.488    65.002    81.923    60917     32799     13442     
    
    Count: exp105-pedestrian           Dets      GT_Dets   IDs       GT_IDs    
    MOT17-04-ss                        40113     47557     131       83        
    MOT17-05-ss                        3932      6917      103       133       
    MOT17-09-ss                        4348      5325      52        26        
    MOT17-10-ss                        9261      12839     96        57        
    MOT17-11-ss                        9136      9436      152       75        
    MOT17-13-ss                        7569      11642     133       110       
    COMBINED                           74359     93716     667       484
    

    No performance boost this time, only that we started evaluating on MOT17

  • v7.0(Sep 17, 2022)

    The goal of this release is to increase the deployment possibilities by enabling ReID model export to different frameworks. The ReID part of the project was enhanced by batched inference for all the supported export frameworks. This results in big speedups, especially when the number of detected objects is large. I also started looking into more tracking methods with the idea of transforming the repo into a tracking experiment platform.

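    To illustrate what batched appearance inference means in practice: the crops for all detections in a frame are stacked into one tensor and passed through the ReID network in a single forward pass, instead of one call per detection. A minimal sketch (hypothetical reid_model and preprocessing; the multi-backend engine in this repo differs in detail):

    import torch

    def extract_features_batched(reid_model, frame, boxes, size=(256, 128), device='cuda'):
        """One ReID forward pass for all detections of a frame.

        frame: HxWx3 uint8 array, boxes: iterable of (x1, y1, x2, y2) ints,
        reid_model: hypothetical torch module returning one embedding per crop.
        """
        crops = []
        for x1, y1, x2, y2 in boxes:
            crop = torch.from_numpy(frame[y1:y2, x1:x2]).permute(2, 0, 1).float() / 255.0
            crop = torch.nn.functional.interpolate(
                crop.unsqueeze(0), size=size, mode='bilinear', align_corners=False)
            crops.append(crop)
        if not crops:
            return torch.empty(0)
        batch = torch.cat(crops).to(device)       # N x 3 x H x W
        with torch.no_grad():
            return reid_model(batch)              # N x feature_dim
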
    Important updates

    • Added Windows testing to CI
    • Added Python eval script/deleted Bash eval script
    • Increased StrongSORT inference speed by batchifying the visual appearance inferences in the ReID multi-backend engine
    • Warmup added to ReID models
    • Publish StrongSORT vs BoTSORT vs OCSORT comparison
    • New best MOT16 performing ReID model in val.py (osnet_x1_0_dukemtmcreid.pt)
    • ReID model exports and batched inference support for the following frameworks:
    • ONNX
    • TensorRT
    • TorchScript
    • OpenVINO

    MOT16 Train evaluation results

    Relevant changed/used hparams: imgz 1280, crowdhuman_yolov5m, osnet_x1_0_dukemtmcreid, StrongSORT. Notice that none of the models used during the evaluation has ever seen any of the MOT16 data and that our object detection model is a modest Yolov5m.

    HOTA: StrongSORT                   HOTA      DetA      AssA      DetRe     DetPr     AssRe     AssPr     LocA      RHOTA     HOTA(0)   LocA(0)   HOTALocA(0)
    MOT16-02                           38.665    36.637    40.944    38.537    76.614    44.547    74.684    81.639    39.705    49.256    77.503    38.175    
    MOT16-04                           60.283    58.906    62.211    63.644    76.869    66.581    80.092    81.303    62.872    79.943    75.772    60.574    
    MOT16-05                           38.966    39.128    38.924    42.247    74.007    50.935    61.954    82.021    40.537    50.046    77.021    38.546    
    MOT16-09                           54.379    57.758    51.227    67.172    73.999    62.039    71.768    85.541    58.648    67.299    81.843    55.08     
    MOT16-10                           51.081    51.565    50.736    55.758    74.916    56.163    74.611    80.794    53.183    68.005    76.058    51.723    
    MOT16-11                           63.351    60.472    66.585    72.742    73.149    74.295    79.782    86.921    69.584    75.419    83.756    63.169    
    MOT16-13                           47.139    43.125    51.925    46.561    73.727    58.249    74.375    80.826    49.113    60.92     76.073    46.343    
    COMBINED                           54.087    51.797    56.978    56.54     75.637    62.799    77.756    82.107    56.675    69.878    77.185    53.935    
    
    CLEAR: exp306-pedestrian           MOTA      MOTP      MODA      CLR_Re    CLR_Pr    MTR       PTR       MLR       sMOTA     CLR_TP    CLR_FN    CLR_FP    IDSW      MT        PT        ML        Frag      
    MOT16-02                           44.569    78.978    45.006    47.653    94.738    14.815    55.556    29.63     34.551    8498      9335      472       78        8         30        16        366       
    MOT16-04                           70.316    78.861    70.478    76.636    92.561    48.193    37.349    14.458    54.115    36446     11111     2929      77        40        31        12        485       
    MOT16-05                           45.321    79.635    46.436    51.76     90.673    27.2      54.4      18.4      34.78     3529      3289      363       76        34        68        23        205       
    MOT16-09                           62.792    84.37     63.192    76.983    84.807    52        48        0         50.76     4047      1210      725       21        13        12        0         72        
    MOT16-10                           63.817    77.592    64.377    69.403    93.248    37.037    57.407    5.5556    48.265    8549      3769      619       69        20        31        3         588       
    MOT16-11                           64.672    85.465    64.89     82.167    82.626    52.174    37.681    10.145    52.729    7538      1636      1585      20        36        26        7         109       
    MOT16-13                           53.022    77.397    53.528    58.341    92.38     28.972    43.925    27.103    39.835    6680      4770      551       58        31        47        29        305       
    COMBINED                           61.268    79.594    61.629    68.19     91.223    35.203    47.389    17.408    47.353    75287     35120     7244      399       182       245       90        2130      
    
    Identity: exp306-pedestrian        IDF1      IDR       IDP       IDTP      IDFN      IDFP      
    MOT16-02                           50.77     38.154    75.853    6804      11029     2166      
    MOT16-04                           76.092    69.546    83.997    33074     14483     6301      
    MOT16-05                           51.055    40.1      70.247    2734      4084      1158      
    MOT16-09                           66.946    63.858    70.348    3357      1900      1415      
    MOT16-10                           68.361    59.62     80.105    7344      4974      1824      
    MOT16-11                           74.887    74.678    75.096    6851      2323      2272      
    MOT16-13                           64.001    52.21     82.672    5978      5472      1253      
    COMBINED                           68.563    59.907    80.142    66142     44265     16389
    

    Performance boost coming from new ReID model

  • v6.0(Jun 9, 2022)

    Important updates

    StrongSORT implemented (https://arxiv.org/pdf/2202.13514.pdf)

    • stronger appearance descriptor (OSNet)
    • camera motion compensation (ECC)
    • NSA Kalman filter (NSA)
    • EMA feature updating mechanism (EMA)
    • matching with motion cost (MC)
    • abandon matching cascade in favor of a vanilla global linear matching (woC)

    Distance metric changed to cosine according to https://github.com/KaiyangZhou/deep-person-reid/issues/502

    Complete track.py arguments refactor

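    The cosine distance metric mentioned above and the EMA feature updating mechanism (EMA) listed under the StrongSORT changes boil down to a few lines. A simplified sketch, assuming unit-normalised appearance embeddings and a smoothing factor alpha as used in the StrongSORT paper:

    import numpy as np

    def cosine_distance(a, b):
        """Cosine distance between two unit-normalised embeddings (0 = identical)."""
        return 1.0 - float(a @ b)

    def ema_update(track_feature, det_feature, alpha=0.9):
        """EMA appearance update: blend the new detection embedding into the track's."""
        f = alpha * track_feature + (1 - alpha) * det_feature
        return f / (np.linalg.norm(f) + 1e-9)
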
    MOT16 Train evaluation results

    Relevant changed/used hparams: imgz 1280, crowdhuman_yolov5m, OSNet_x0_25_msmt17, StrongSORT. Notice that none of the models used during the evaluation has ever seen any of the MOT16 data and that our object detection model is a modest Yolov5m.

    HOTA: kf-pedestrian                HOTA      DetA      AssA      DetRe     DetPr     AssRe     AssPr     LocA      RHOTA     HOTA(0)   LocA(0)   HOTALocA(0)
    MOT16-02                           36.417    36.273    36.741    38.196    76.251    39.184    77.59     81.577    37.433    46.154    77.361    35.705    
    MOT16-04                           59.324    58.508    60.726    63.282    76.509    64.924    79.5      81.075    61.927    78.835    75.37     59.418    
    MOT16-05                           40.098    37.281    43.23     39.956    74.391    51.86     69.098    81.876    41.554    51.282    77.049    39.512    
    MOT16-09                           53.181    57.122    49.57     65.872    74.664    54.868    78.804    85.721    57.137    65.772    82.193    54.06     
    MOT16-10                           49.705    49.813    49.762    53.696    74.831    53.942    77.666    80.652    51.686    65.955    75.773    49.976    
    MOT16-11                           62.896    59.81     66.362    71.318    73.696    72.965    82.899    86.879    68.784    74.954    83.531    62.61     
    MOT16-13                           45.203    41.504    49.567    44.959    72.74     54.969    74.378    80.493    47.163    59.49     75.279    44.784    
    COMBINED                           53.005    50.992    55.641    55.611    75.463    60.424    79.256    81.937    55.531    68.581    76.857    52.709    
    
    CLEAR: kf-pedestrian               MOTA      MOTP      MODA      CLR_Re    CLR_Pr    MTR       PTR       MLR       sMOTA     CLR_TP    CLR_FN    CLR_FP    IDSW      MT        PT        ML        Frag      
    MOT16-02                           43.56     78.641    44.451    47.272    94.369    16.667    53.704    29.63     33.463    8430      9403      503       159       9         29        16        377       
    MOT16-04                           69.786    78.669    69.956    76.334    92.289    49.398    34.94     15.663    53.503    36302     11255     3033      81        41        29        13        582       
    MOT16-05                           43.018    79.285    43.825    48.768    90.797    17.6      62.4      20        32.916    3325      3493      337       55        22        78        25        210       
    MOT16-09                           61.29     84.193    62.165    75.195    85.231    52        48        0         49.404    3953      1304      685       46        13        12        0         108       
    MOT16-10                           61.154    77.435    61.788    66.772    93.054    29.63     55.556    14.815    46.087    8225      4093      614       78        16        30        8         558       
    MOT16-11                           63.56     85.508    63.942    80.358    83.037    53.623    36.232    10.145    51.915    7372      1802      1506      35        37        25        7         145       
    MOT16-13                           50.629    77.098    51.397    56.603    91.578    26.168    44.86     28.972    37.666    6481      4969      596       88        28        48        31        291       
    COMBINED                           60.025    79.394    60.516    67.104    91.06     32.108    48.549    19.342    46.198    74088     36319     7274      542       166       251       100       2271      
    
    Identity: kf-pedestrian            IDF1      IDR       IDP       IDTP      IDFN      IDFP      
    MOT16-02                           47.717    35.81     71.488    6386      11447     2547      
    MOT16-04                           75.047    68.56     82.891    32605     14952     6730      
    MOT16-05                           54.752    42.08     78.345    2869      3949      793       
    MOT16-09                           65.124    61.29     69.47     3222      2035      1416      
    MOT16-10                           66.758    57.331    79.896    7062      5256      1777      
    MOT16-11                           74.053    72.858    75.287    6684      2490      2194      
    MOT16-13                           60.474    48.926    79.158    5602      5848      1475      
    COMBINED                           67.195    58.357    79.189    64430     45977     16932     
    
    Count: kf-pedestrian               Dets      GT_Dets   IDs       GT_IDs    
    MOT16-02                           8933      17833     146       54        
    MOT16-04                           39335     47557     139       83        
    MOT16-05                           3662      6818      136       125       
    MOT16-09                           4638      5257      57        25        
    MOT16-10                           8839      12318     122       54        
    MOT16-11                           8878      9174      164       69        
    MOT16-13                           7077      11450     135       107       
    COMBINED                           81362     110407    899       517
    

    Performance boosts from updating DeepSORT to StrongSORT

    Bug fixes

    Confidences are now extracted correctly: https://github.com/mikel-brostrom/Yolov5_DeepSort_OSNet/issues/375

  • v5.0(Apr 6, 2022)

    The goal with this release is to automate the whole process of fetching and loading ReID models. Multi-camera tracking was added by @hdnh2006 in https://github.com/mikel-brostrom/Yolov5_StrongSORT_OSNet/pull/284

    Important updates

    • Multiple object tracking on multiple simultaneous streams
    • Tracking with the --save-vid flag on folders containing images now generates an .mp4
    • Automatic download of ReID models trained on different datasets
    • Easier experimentation by setting the result folder name to the yolo and deep sort models used. The following can now be saved under this folder:

    • Tracking with --save-crop flag saves crops associated to each class and ID for each stream
    • Tracking with the --save-vid generates an .mp4 for each stream
    • Tracking with the --save-txt flag saves a txt file for each stream

    Bug fixes

    Loading of custom ReID model by specifying the path to it

    MOT16 Train evaluation results

    Relevant changed/used hparams: imgz 1280, crowdhuman_yolov5m, OSNet_x0_25_msmt17, alpha=0. Notice that none of the models used during the evaluation has ever seen any of the MOT16 data.

    HOTA: rep_1280-pedestrian          HOTA      DetA      AssA      DetRe     DetPr     AssRe     AssPr     LocA      RHOTA     HOTA(0)   LocA(0)   HOTALocA(0)
    MOT16-02                           33.472    36.215    31.143    38.083    76.454    33.609    74.943    81.576    34.395    42.156    77.46     32.654    
    MOT16-04                           58.508    58.422    59.167    63.228    76.489    63.097    79.334    81.143    61.096    77.915    75.329    58.693    
    MOT16-05                           39.479    36.937    42.304    39.621    73.848    49.756    70.33     81.457    40.932    51.081    76.254    38.952    
    MOT16-09                           51.897    56.914    47.372    65.839    74.434    51.6      79.236    85.818    55.841    64.133    82.232    52.737    
    MOT16-10                           46.407    49.419    43.764    53.425    74.242    48.223    74.634    80.347    48.344    62.128    75.134    46.679    
    MOT16-11                           58.33     59.366    57.502    70.96     73.111    65.957    77.009    86.464    63.869    69.923    82.949    58        
    MOT16-13                           44.229    41.439    47.558    44.837    72.748    51.187    77.022    80.409    46.127    57.94     75.221    43.583    
    COMBINED                           51.286    50.839    52.219    55.475    75.31     56.777    78.178    81.874    53.742    66.59     76.683    51.063    
    
    CLEAR: rep_1280-pedestrian         MOTA      MOTP      MODA      CLR_Re    CLR_Pr    MTR       PTR       MLR       sMOTA     CLR_TP    CLR_FN    CLR_FP    IDSW      MT        PT        ML        Frag      
    MOT16-02                           43.773    78.642    44.44     47.126    94.608    16.667    51.852    31.481    33.707    8404      9429      479       119       9         28        17        366       
    MOT16-04                           69.634    78.654    69.803    76.233    92.221    48.193    36.145    15.663    53.362    36254     11303     3058      80        40        30        13        536       
    MOT16-05                           42.754    78.816    43.532    48.592    90.569    17.6      58.4      24        32.461    3313      3505      345       53        22        73        30        229       
    MOT16-09                           61.119    84.468    61.746    75.1      84.903    52        48        0         49.454    3948      1309      702       33        13        12        0         107       
    MOT16-10                           60.643    77.175    61.244    66.602    92.554    31.481    55.556    12.963    45.441    8204      4114      660       74        17        30        7         580       
    MOT16-11                           63.571    84.937    63.898    80.477    82.918    53.623    36.232    10.145    51.449    7383      1791      1521      30        37        25        7         148       
    MOT16-13                           51.022    76.912    51.572    56.603    91.838    28.037    42.056    29.907    37.954    6481      4969      576       63        30        45        32        277       
    COMBINED                           59.955    79.281    60.364    67.013    90.974    32.495    47.002    20.503    46.07     73987     36420     7341      452       168       243       106       2243      
    
    Identity: rep_1280-pedestrian      IDF1      IDR       IDP       IDTP      IDFN      IDFP      
    MOT16-02                           43.742    32.765    65.777    5843      11990     3040      
    MOT16-04                           73.02     66.69     80.678    31716     15841     7596      
    MOT16-05                           55.116    42.344    78.923    2887      3931      771       
    MOT16-09                           62.764    59.14     66.86     3109      2148      1541      
    MOT16-10                           61.175    52.598    73.093    6479      5839      2385      
    MOT16-11                           66.777    65.795    67.79     6036      3138      2868      
    MOT16-13                           60.323    48.751    79.099    5582      5868      1475      
    COMBINED                           64.31     55.841    75.807    61652     48755     19676     
    
    Count: rep_1280-pedestrian         Dets      GT_Dets   IDs       GT_IDs    
    MOT16-02                           8883      17833     142       54        
    MOT16-04                           39312     47557     138       83        
    MOT16-05                           3658      6818      134       125       
    MOT16-09                           4650      5257      57        25        
    MOT16-10                           8864      12318     118       54        
    MOT16-11                           8904      9174      157       69        
    MOT16-13                           7057      11450     131       107       
    COMBINED                           81328     110407    877       517
    

    Performance boost coming from evaluating on 1280 image size

  • v4.0(Dec 22, 2021)

    The goal with this release is to add different possibilities for ReID models. A lot has happened in the field since DeepSORT was first released; this is an attempt to keep up with the latest advancements in ReID methods.

    Important updates

    • Enable Yolov5 model ensembling
    • Update track.py to comply with the new yolov5 standards
    • Implementation of Lambda as per Eq(5) in the paper, based on https://github.com/michael-camilleri/deep_sort
    • Added different ReID model options (https://kaiyangzhou.github.io/deep-person-reid/MODEL_ZOO)

    Bug fixes

    • Limit high-performance library threads to 1 so that the tracker does not use all the CPUs (https://github.com/mikel-brostrom/Yolov5_DeepSort_Pytorch/issues/48)
    • Default half precision inference to false for visualization on windows (https://github.com/mikel-brostrom/Yolov5_DeepSort_Pytorch/issues/206)
    • Fix MOT index off by one in txt files (https://github.com/mikel-brostrom/Yolov5_DeepSort_Pytorch/issues/217)

    MOT16 Train evaluation results

    Relevant changed/used hparams: imgz 640, standard DeepSORT ReID model. Notice that none of the models used during the evaluation has ever seen any of the MOT16 data.

    CLEAR: osnet_ain_x1_0_yolov5_lambda02-pedestrian   MOTA      MOTP      MODA      CLR_Re    CLR_Pr    MTR       PTR       MLR       sMOTA     CLR_TP    CLR_FN    CLR_FP    IDSW      MT        PT        ML        Frag      
    MOT16-02                           33.472    78.434    33.943    36.191    94.15     16.667    37.037    46.296    25.666    6454      11379     401       84        9         20        25        265       
    MOT16-04                           63.852    76.516    64.054    71.359    90.714    40.964    42.169    16.867    47.094    33936     13621     3474      96        34        35        14        534       
    MOT16-05                           58.199    78.433    59.226    68.568    88.008    27.2      57.6      15.2      43.411    4675      2143      637       70        34        72        19        182       
    MOT16-09                           62.203    83.786    63.002    74.986    86.22     48        48        4         50.045    3942      1315      630       42        12        12        1         115       
    MOT16-10                           53.613    77.071    54.092    57.761    94.027    25.926    48.148    25.926    40.369    7115      5203      452       59        14        26        14        386       
    MOT16-11                           66.045    85.201    66.318    77.556    87.343    50.725    36.232    13.043    54.568    7115      2059      1031      25        35        25        9         126       
    MOT16-13                           40.367    75.18     40.795    44.734    91.907    16.822    45.794    37.383    29.264    5122      6328      451       49        18        49        40        282       
    COMBINED                           55.122    78.109    55.506    61.915    90.62     30.174    46.228    23.598    41.567    68359     42048     7076      425       156       239       122       1890      
    
    Identity: osnet_ain_x1_0_yolov5_lambda02-pedestrian   IDF1      IDR       IDP       IDTP      IDFN      IDFP      
    MOT16-02                           39.015    27.006    70.255    4816      13017     2039      
    MOT16-04                           65.92     58.887    74.86     28005     19552     9405      
    MOT16-05                           68.887    61.279    78.652    4178      2640      1134      
    MOT16-09                           57.422    53.681    61.724    2822      2435      1750      
    MOT16-10                           58.597    47.297    76.992    5826      6492      1741      
    MOT16-11                           62.009    58.535    65.922    5370      3804      2776      
    MOT16-13                           52.611    39.109    80.352    4478      6972      1095      
    COMBINED                           59.723    50.264    73.567    55495     54912     19940     
    
    Count: osnet_ain_x1_0_yolov5_lambda02-pedestrian   Dets      GT_Dets   IDs       GT_IDs    
    MOT16-02                           6855      17833     110       54        
    MOT16-04                           37410     47557     155       83        
    MOT16-05                           5312      6818      158       125       
    MOT16-09                           4572      5257      50        25        
    MOT16-10                           7567      12318     93        54        
    MOT16-11                           8146      9174      125       69        
    MOT16-13                           5573      11450     98        107       
    COMBINED                           75435     110407    789       517
    
  • v3.0(Sep 15, 2021)

    The goal with this release is to automate the whole evaluation process using the official MOTXX evaluation data and tools.

    Major changes

    • Added colab notebook
    • Added bash script for automatically handling the whole MOT16 evaluation process (data download, weights download, video generation and placing in the right folder...)
    • Update track.py to comply with the new yolov5 standards
    • Added id, class and confidence to plotted bboxes
    • Added LICENSE
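    The bash script itself is not reproduced here; as a rough illustration of that kind of automation, a Python sketch could look like the one below. The MOT16 paths, the sequence list and the TrackEval invocation are assumptions rather than the repository's actual files; only the track.py flags come from the README.

    # Hypothetical sketch of an automated MOT16-train evaluation run.
    # Assumes MOT16/train is already downloaded and a TrackEval checkout is available.
    import subprocess
    from pathlib import Path

    SEQUENCES = ["MOT16-02", "MOT16-04", "MOT16-05", "MOT16-09",
                 "MOT16-10", "MOT16-11", "MOT16-13"]
    MOT16_ROOT = Path("MOT16/train")  # assumed location of the MOT16 train split

    def run_tracker() -> None:
        """Generate MOT-compliant result files (track.py --save-txt) per sequence."""
        for seq in SEQUENCES:
            subprocess.run(
                ["python3", "track.py",
                 "--source", str(MOT16_ROOT / seq / "img1"),
                 "--yolo_weights", "yolov5/weights/crowdhuman_yolov5m.pt",
                 "--classes", "0",  # pedestrians only
                 "--save-txt"],
                check=True)

    def run_trackeval() -> None:
        """Score the generated results with TrackEval's MOT Challenge runner."""
        subprocess.run(
            ["python3", "TrackEval/scripts/run_mot_challenge.py",
             "--BENCHMARK", "MOT16", "--SPLIT_TO_EVAL", "train",
             "--METRICS", "CLEAR", "Identity", "Count"],
            check=True)

    if __name__ == "__main__":
        run_tracker()
        run_trackeval()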

    Bug fix

    • Fixed bad initial Kalman filter predictions for new objects entering the field of view, as reported in https://github.com/mikel-brostrom/Yolov5_DeepSort_Pytorch/issues/166
    • Fixed the image input size bug reported in https://github.com/mikel-brostrom/Yolov5_DeepSort_Pytorch/issues/174
    • The class of a track is now updated after every matched detection instead of only in the initialization phase; previously the wrong class ID could be displayed for a bbox (a minimal sketch of this fix follows after this list)
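    A minimal sketch of the class-update fix; the Detection and Track classes below are simplified stand-ins, not the actual DeepSORT implementation in this repository:

    from dataclasses import dataclass

    @dataclass
    class Detection:
        bbox: tuple        # (x_center, y_center, w, h)
        confidence: float
        class_id: int

    class Track:
        """Simplified track that keeps its class in sync with the latest detection."""

        def __init__(self, track_id: int, detection: Detection):
            self.track_id = track_id
            self.bbox = detection.bbox
            self.class_id = detection.class_id  # previously the class was only set here

        def update(self, detection: Detection) -> None:
            self.bbox = detection.bbox
            # The fix: refresh the class on every matched detection, so a wrong
            # initial classification is not shown for the track's whole lifetime.
            self.class_id = detection.class_id

    # A track initialized with class 2 is corrected once later detections say class 0:
    t = Track(1, Detection((0, 0, 10, 20), 0.90, class_id=2))
    t.update(Detection((1, 1, 10, 20), 0.95, class_id=0))
    assert t.class_id == 0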

    MOT16 train evaluation

    CLEAR: ch_yolov5m_deep_sort-pedestrian    MOTA      MOTP      MODA      CLR_Re    CLR_Pr    MTR       PTR       MLR       sMOTA     CLR_TP    CLR_FN    CLR_FP    IDSW      MT        PT        ML        Frag
    MOT16-02                           33.887    77.114    34.397    38.109    91.124    16.667    38.889    44.444    25.165    6796      11037     662       91        9         21        24        198       
    MOT16-04                           63.831    76.326    63.997    72.149    89.848    40.964    39.759    19.277    46.75     34312     13245     3877      79        34        33        16        369       
    MOT16-05                           56.307    76.523    57.334    71.414    83.531    40        49.6      10.4      39.541    4869      1949      960       70        50        62        13        150       
    MOT16-09                           62.507    81.999    63.401    77.268    84.784    52        40        8         48.598    4062      1195      729       47        13        10        2         70        
    MOT16-10                           52.703    75.324    53.239    59.685    90.253    27.778    46.296    25.926    37.975    7352      4966      794       66        15        25        14        272       
    MOT16-11                           64.138    84.251    64.465    79.191    84.32     50.725    34.783    14.493    51.666    7265      1909      1351      30        35        24        10        83        
    MOT16-13                           30.332    68.89     30.847    41.956    79.065    9.3458    52.336    38.318    17.279    4804      6646      1272      59        10        56        41        281       
    COMBINED                           53.776    76.957    54.177    62.913    87.807    32.108    44.681    23.211    39.28     69460     40947     9645      442       166       231       120       1423      
    
    Identity: ch_yolov5m_deep_sort-pedestrian    IDF1      IDR       IDP       IDTP      IDFN      IDFP
    MOT16-02                           36.361    25.784    61.652    4598      13235     2860      
    MOT16-04                           67.341    60.708    75.6      28871     18686     9318      
    MOT16-05                           39.583    36.712    42.94     2503      4315      3326      
    MOT16-09                           50.378    48.145    52.828    2531      2726      2260      
    MOT16-10                           54.251    45.064    68.144    5551      6767      2595      
    MOT16-11                           47.768    46.316    49.315    4249      4925      4367      
    MOT16-13                           39.393    30.148    56.814    3452      7998      2624      
    COMBINED                           54.619    46.877    65.426    51755     58652     27350     
    
    Count: ch_yolov5m_deep_sort-pedestrian    Dets      GT_Dets   IDs       GT_IDs
    MOT16-02                           7458      17833     50        54        
    MOT16-04                           38189     47557     99        83        
    MOT16-05                           5829      6818      42        125       
    MOT16-09                           4791      5257      21        25        
    MOT16-10                           8146      12318     46        54        
    MOT16-11                           8616      9174      49        69        
    MOT16-13                           6076      11450     54        107       
    COMBINED                           79105     110407    361       517
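    The Count table ties directly into the CLEAR detection recall and precision; for the COMBINED row of this release:

    # CLR_Re = CLR_TP / GT_Dets, CLR_Pr = CLR_TP / Dets
    clr_tp = 69460                 # CLR_TP (CLEAR table)
    dets, gt_dets = 79105, 110407  # Dets, GT_Dets (Count table)
    print(f"CLR_Re = {100 * clr_tp / gt_dets:.3f}")  # 62.913, matching the CLEAR table
    print(f"CLR_Pr = {100 * clr_tp / dets:.3f}")     # 87.807, matching the CLEAR table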
    
    
    
    ckpt.t7(43.90 MB)
  • v2.0(Jun 17, 2021)

    The goal with this release is to create a CI pipeline for track.py and to add automatic weight download for DeepSORT.

    Important updates

    • MOT16 evaluation based on #73
    • Adapted the track script to the new yolov5 v5.0 standards
    • README update explaining how to track different classes
    • CI pipeline for testing CPU inference added
    • Automatic weight downloading for DeepSORT (see the sketch below)
    ckpt.t7(43.90 MB)
    test.avi(4.85 MB)
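    The automatic DeepSORT weight download can be sketched roughly as follows; the checkpoint path and URL are placeholders, not the repository's actual download source:

    # Hypothetical "download the re-ID checkpoint once, then reuse it" helper.
    from pathlib import Path
    import urllib.request

    CKPT_PATH = Path("deep_sort/deep/checkpoint/ckpt.t7")  # assumed target location
    CKPT_URL = "https://example.com/ckpt.t7"               # placeholder URL

    def ensure_deepsort_weights() -> Path:
        """Fetch the DeepSORT appearance-descriptor weights if they are missing."""
        if not CKPT_PATH.exists():
            CKPT_PATH.parent.mkdir(parents=True, exist_ok=True)
            print(f"Downloading DeepSORT weights to {CKPT_PATH} ...")
            urllib.request.urlretrieve(CKPT_URL, str(CKPT_PATH))
        return CKPT_PATH

    if __name__ == "__main__":
        ensure_deepsort_weights()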
  • v1.0(Dec 4, 2020)

    The goal with this release is to make a two-stage tracker based on Yolov5 publicly available for the first time ever (according to my personal search on GitHub 😅)

    Major updates

    • Basic tracking working: Yolov5 passes its detections to DeepSORT, which handles the tracking (see the loop sketch after this list)
    • The tracker is now updated even when there are no detections, based on https://github.com/mikel-brostrom/Yolov5_DeepSort_Pytorch/issues/21
    • Adapted the track script to the new yolov5 v4.0 standards
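    A rough sketch of that two-stage loop; the method names are illustrative and the repository's track.py differs in detail:

    def track_video(frames, detector, deepsort):
        """Stage 1: YOLOv5 detections per frame. Stage 2: DeepSORT association."""
        for frame in frames:
            det = detector(frame)             # e.g. an (N, 6) array: box, conf, class
            if det is not None and len(det):
                xywhs = det[:, 0:4]           # boxes as (x_center, y_center, w, h)
                confs = det[:, 4]             # detection confidences
                tracks = deepsort.update(xywhs, confs, frame)
            else:
                # Keep the Kalman filters predicting when nothing is detected,
                # so existing tracks age out instead of freezing in place.
                deepsort.increment_ages()
                tracks = []
            yield tracks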

    Bug fixes

    • PyTorch 1.7 compatibility update
