ByteTrack In-Depth Tutorial: Train on Your Own Dataset & Real-Time Webcam Detection and Tracking

Overview

A detailed walkthrough of training ByteTrack on your own dataset.

一、Environment Setup

1. Installing on the host machine

Step1. Install ByteTrack.

git clone https://github.com/Double-zh/ByteTrack.git
cd ByteTrack
pip3 install -r requirements.txt
python3 setup.py develop

Step2. Install pycocotools.

pip3 install cython; pip3 install 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'

Step3. Others

pip3 install cython_bbox
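After the three install steps above, a quick sanity check can confirm that the key packages are importable before you move on. This is a minimal sketch; the module list is an assumption based on ByteTrack's typical requirements.

```python
# Quick sanity check: report which of ByteTrack's key dependencies are missing.
import importlib.util

def missing_modules(names):
    """Return the subset of module names that cannot be found by the import system."""
    return [m for m in names if importlib.util.find_spec(m) is None]

if __name__ == "__main__":
    required = ["torch", "torchvision", "cv2", "cython_bbox", "yolox"]
    print("missing:", missing_modules(required) or "none")
```

If anything is listed as missing, re-run the corresponding install step before training.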

2. Docker build

docker build -t bytetrack:latest .

# Startup sample
mkdir -p pretrained && \
mkdir -p YOLOX_outputs && \
xhost +local: && \
docker run --gpus all -it --rm \
-v $PWD/pretrained:/workspace/ByteTrack/pretrained \
-v $PWD/datasets:/workspace/ByteTrack/datasets \
-v $PWD/YOLOX_outputs:/workspace/ByteTrack/YOLOX_outputs \
-v /tmp/.X11-unix/:/tmp/.X11-unix:rw \
--device /dev/video0:/dev/video0:mwr \
--net=host \
-e XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR \
-e DISPLAY=$DISPLAY \
--privileged \
bytetrack:latest

二、Prepare the VOC Dataset and Download the Pretrained Model

1. Dataset layout

    datasets
    └──VOCdevkit
       └──VOC2012
          ├──Annotations
          ├──ImageSets
          │  └──Main
          ├──JPEGImages
          └──divide_dataset.py
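The divide_dataset.py in the tree above presumably splits the images under JPEGImages into train/val lists in ImageSets/Main. A minimal sketch of such a script follows; the 90/10 split ratio and the output file names are assumptions, not taken from the repository.

```python
# Sketch of a VOC train/val split script (hypothetical divide_dataset.py).
import os
import random

def split_dataset(image_ids, train_ratio=0.9, seed=0):
    """Shuffle image IDs deterministically and split them into train/val lists."""
    ids = list(image_ids)
    random.Random(seed).shuffle(ids)
    n_train = int(len(ids) * train_ratio)
    return ids[:n_train], ids[n_train:]

def write_voc_split(voc_root, train_ratio=0.9):
    """Write ImageSets/Main/train.txt and val.txt for a VOC-style folder."""
    jpeg_dir = os.path.join(voc_root, "JPEGImages")
    ids = [os.path.splitext(f)[0] for f in sorted(os.listdir(jpeg_dir))
           if f.lower().endswith((".jpg", ".jpeg", ".png"))]
    train_ids, val_ids = split_dataset(ids, train_ratio)
    main_dir = os.path.join(voc_root, "ImageSets", "Main")
    os.makedirs(main_dir, exist_ok=True)
    for name, subset in (("train.txt", train_ids), ("val.txt", val_ids)):
        with open(os.path.join(main_dir, name), "w") as f:
            f.write("\n".join(subset) + "\n")

if __name__ == "__main__":
    write_voc_split("datasets/VOCdevkit/VOC2012")
```

A fixed random seed keeps the split reproducible across runs, which matters when comparing training configurations.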

2. Download pretrained model

The COCO-pretrained YOLOX models can be downloaded from the YOLOX [model zoo](https://github.com/Megvii-BaseDetection/YOLOX/tree/0.1.0). After downloading, put the weights under `pretrained/`.

三、Prepare the Exp Configuration File (create an Exp file for your dataset and modify get_data_loader and get_eval_loader in it)

Edit the number of classes in yolox_voc_s_ZZH.py, located under exps/example/custom/, to match your dataset:

class Exp(MyExp):
    def __init__(self):
        super(Exp, self).__init__()
        self.num_classes = 2  # change this to match the number of classes in your dataset
        self.depth = 0.33
        self.width = 0.50
        self.warmup_epochs = 1
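To double-check that num_classes matches your annotations, you can scan the VOC XML files and list the class names they actually contain. This is a hedged stdlib-only helper; the Annotations path passed in at the bottom is an assumption based on the dataset layout above.

```python
# List the distinct object class names found in a folder of VOC-style XML files.
import os
import xml.etree.ElementTree as ET

def collect_voc_classes(annotations_dir):
    """Return the sorted set of <object><name> values across all VOC XML files."""
    classes = set()
    for fname in os.listdir(annotations_dir):
        if not fname.endswith(".xml"):
            continue
        root = ET.parse(os.path.join(annotations_dir, fname)).getroot()
        for obj in root.iter("object"):
            name = obj.find("name")
            if name is not None and name.text:
                classes.add(name.text.strip())
    return sorted(classes)

if __name__ == "__main__":
    names = collect_voc_classes("datasets/VOCdevkit/VOC2012/Annotations")
    print(f"{len(names)} classes:", names)
```

Set `self.num_classes` in the Exp file to the count this prints.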

四、Training

Train on your custom dataset. Here `-d` is the number of GPUs, `-b` the total batch size, `--fp16` enables mixed-precision training, `-o` occupies GPU memory up front, and `-c` points to the pretrained checkpoint.

cd <ByteTrack_HOME>
python3 train.py -f exps/example/custom/yolox_voc_s_ZZH.py -d 1 -b 1 --fp16 -o -c pretrained/yolox_s.pth

五、Demo

1. Real-time detection and tracking from a webcam, saving the results

cd <ByteTrack_HOME>

python3 ZZH_track.py webcam -f exps/example/custom/yolox_voc_s_ZZH.py -c YOLOX_outputs/yolox_voc_s_ZZH/latest_ckpt.pth.tar --fp16 --fuse --save_result

2. Detection and tracking on a video file, saving the results

Uncomment line 227 of ZZH_track.py and comment out line 228.

```shell
cd <ByteTrack_HOME>

python3 ZZH_track.py video -f exps/example/custom/yolox_voc_s_ZZH.py -c YOLOX_outputs/yolox_voc_s_ZZH/latest_ckpt.pth.tar --fp16 --fuse --save_result
```
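Besides the rendered video, ByteTrack-style demos typically also write tracking results as MOT-challenge text lines (frame,id,x,y,w,h,score,...). A small stdlib-only parser for inspecting such a file is sketched below; the exact column layout and output path are assumptions.

```python
# Parse a MOT-format results file into per-track box histories.
import csv
from collections import defaultdict

def load_mot_results(path):
    """Return {track_id: [(frame, x, y, w, h), ...]} from a MOT-format text file."""
    tracks = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if len(row) < 6:
                continue  # skip blank or malformed lines
            frame, tid = int(row[0]), int(float(row[1]))
            x, y, w, h = (float(v) for v in row[2:6])
            tracks[tid].append((frame, x, y, w, h))
    return dict(tracks)
```

This makes it easy to count tracks, measure track lengths, or plot trajectories after a run.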

   

六、Deploy

  1. ONNX export and ONNXRuntime
  2. TensorRT in Python
  3. TensorRT in C++
  4. ncnn in C++

七、Citation

@article{zhang2021bytetrack,
  title={ByteTrack: Multi-Object Tracking by Associating Every Detection Box},
  author={Zhang, Yifu and Sun, Peize and Jiang, Yi and Yu, Dongdong and Yuan, Zehuan and Luo, Ping and Liu, Wenyu and Wang, Xinggang},
  journal={arXiv preprint arXiv:2110.06864},
  year={2021}
}

八、Acknowledgement

A large part of the code is borrowed from YOLOX, FairMOT, TransTrack and JDE-Cpp. Many thanks for their wonderful works.
