YOLOv4 / Scaled-YOLOv4 / YOLO - Neural Networks for Object Detection (Windows and Linux version of Darknet)

Overview

Yolo v4, v3 and v2 for Windows and Linux

(neural networks for object detection)

Paper YOLO v4: https://arxiv.org/abs/2004.10934

Paper Scaled YOLO v4 (CVPR 2021): use ScaledYOLOv4 to reproduce results

More details in articles on medium:

Manual: https://github.com/AlexeyAB/darknet/wiki

Discussion:

About Darknet framework: http://pjreddie.com/darknet/



Chart (Scaled-YOLOv4): AP50:95 vs FPS (Tesla V100) - Paper: https://arxiv.org/abs/2011.08036


Chart (modern GPUs): AP50:95 / AP50 vs FPS (Tesla V100) - Paper: https://arxiv.org/abs/2004.10934

tkDNN-TensorRT accelerates YOLOv4 ~2x for batch=1 and 3x-4x for batch=4.

GeForce RTX 2080 Ti

| Network Size | Darknet, FPS (avg) | tkDNN TensorRT FP32, FPS | tkDNN TensorRT FP16, FPS | OpenCV FP16, FPS | tkDNN TensorRT FP16 batch=4, FPS | OpenCV FP16 batch=4, FPS | tkDNN Speedup |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| 320 | 100 | 116 | 202 | 183 | 423 | 430 | 4.3x |
| 416 | 82 | 103 | 162 | 159 | 284 | 294 | 3.6x |
| 512 | 69 | 91 | 134 | 138 | 206 | 216 | 3.1x |
| 608 | 53 | 62 | 103 | 115 | 150 | 150 | 2.8x |
| Tiny 416 | 443 | 609 | 790 | 773 | 1774 | 1353 | 3.5x |
| Tiny 416 CPU Core i7 7700HQ | 3.4 | - | - | 42 | - | 39 | 12x |

YouTube videos of results: Yolo v4, Scaled Yolo v4

Others: https://www.youtube.com/user/pjreddie/videos

How to evaluate AP of YOLOv4 on the MS COCO evaluation server

  1. Download and unzip test-dev2017 dataset from MS COCO server: http://images.cocodataset.org/zips/test2017.zip
  2. Download list of images for Detection tasks and replace the paths with yours: https://raw.githubusercontent.com/AlexeyAB/darknet/master/scripts/testdev2017.txt
  3. Download yolov4.weights file 245 MB: yolov4.weights (Google-drive mirror yolov4.weights )
  4. Content of the file cfg/coco.data should be
classes= 80
train  = <replace with your path>/trainvalno5k.txt
valid = <replace with your path>/testdev2017.txt
names = data/coco.names
backup = backup
eval=coco
  5. Create a /results/ folder next to the ./darknet executable file
  6. Run validation: ./darknet detector valid cfg/coco.data cfg/yolov4.cfg yolov4.weights
  7. Rename the file /results/coco_results.json to detections_test-dev2017_yolov4_results.json and compress it to detections_test-dev2017_yolov4_results.zip
  8. Submit the file detections_test-dev2017_yolov4_results.zip to the MS COCO evaluation server for the test-dev2019 (bbox)

How to evaluate FPS of YOLOv4 on GPU

  1. Compile Darknet with GPU=1 CUDNN=1 CUDNN_HALF=1 OPENCV=1 in the Makefile
  2. Download yolov4.weights file 245 MB: yolov4.weights (Google-drive mirror yolov4.weights )
  3. Get any .avi/.mp4 video file (preferably not more than 1920x1080 to avoid bottlenecks in CPU performance)
  4. Run one of two commands and look at the AVG FPS:
  • include video_capturing + NMS + drawing_bboxes: ./darknet detector demo cfg/coco.data cfg/yolov4.cfg yolov4.weights test.mp4 -dont_show -ext_output
  • exclude video_capturing + NMS + drawing_bboxes: ./darknet detector demo cfg/coco.data cfg/yolov4.cfg yolov4.weights test.mp4 -benchmark

Pre-trained models

There are weights-files for different cfg-files (trained on the MS COCO dataset):

FPS on RTX 2070 (R) and Tesla V100 (V):

Yolo v3 models
Yolo v2 models

Put the weights-file near the compiled darknet.exe

You can get cfg-files by path: darknet/cfg/

Requirements for Windows, Linux and macOS

Yolo v4 in other frameworks

Datasets

  • MS COCO: use ./scripts/get_coco_dataset.sh to get labeled MS COCO detection dataset
  • OpenImages: use python ./scripts/get_openimages_dataset.py for labeling train detection dataset
  • Pascal VOC: use python ./scripts/voc_label.py for labeling Train/Test/Val detection datasets
  • ILSVRC2012 (ImageNet classification): use ./scripts/get_imagenet_train.sh (also imagenet_label.sh for labeling valid set)
  • German/Belgium/Russian/LISA/MASTIF Traffic Sign Datasets for Detection - use these parsers: https://github.com/angeligareta/Datasets2Darknet#detection-task
  • List of other datasets: https://github.com/AlexeyAB/darknet/tree/master/scripts#datasets

Improvements in this repository

  • developed State-of-the-Art object detector YOLOv4
  • added State-of-the-Art models: CSP, PRN, EfficientNet
  • added layers: [conv_lstm], [scale_channels] SE/ASFF/BiFPN, [local_avgpool], [sam], [Gaussian_yolo], [reorg3d] (fixed [reorg]), fixed [batchnorm]
  • added the ability for training recurrent models (with layers conv-lstm[conv_lstm]/conv-rnn[crnn]) for accurate detection on video
  • added data augmentation: [net] mixup=1 cutmix=1 mosaic=1 blur=1. Added activations: SWISH, MISH, NORM_CHAN, NORM_CHAN_SOFTMAX
  • added the ability for training with GPU-processing using CPU-RAM to increase the mini_batch_size and increase accuracy (instead of batch-norm sync)
  • improved binary neural network performance 2x-4x for Detection on CPU and GPU if you trained your own weights by using this XNOR-net model (bit-1 inference): https://github.com/AlexeyAB/darknet/blob/master/cfg/yolov3-tiny_xnor.cfg
  • improved neural network performance ~7% by fusing 2 layers into 1: Convolutional + Batch-norm
  • improved performance: Detection 2x on GPU Volta/Turing (Tesla V100, GeForce RTX, ...) using Tensor Cores if CUDNN_HALF is defined in the Makefile or darknet.sln
  • improved performance ~1.2x on FullHD and ~2x on 4K for detection on video (file/stream) using darknet detector demo...
  • improved data augmentation performance ~3.5x for training (using OpenCV SSE/AVX functions instead of hand-written functions) - removes the bottleneck for training on multi-GPU or GPU Volta
  • improved performance of detection and training on Intel CPU with AVX (Yolo v3 ~85%)
  • optimized memory allocation during network resizing when random=1
  • optimized GPU initialization for detection - we use batch=1 initially instead of re-init with batch=1
  • added correct calculation of mAP, F1, IoU, Precision-Recall using command darknet detector map...
  • added drawing of chart of average-Loss and accuracy-mAP (-map flag) during training
  • run ./darknet detector demo ... -json_port 8070 -mjpeg_port 8090 as a JSON and MJPEG server to get results online over the network by using your software or a Web-browser
  • added calculation of anchors for training
  • added example of Detection and Tracking objects: https://github.com/AlexeyAB/darknet/blob/master/src/yolo_console_dll.cpp
  • added run-time tips and warnings if you use an incorrect cfg-file or dataset
  • added support for Windows
  • many other fixes of code...

And added manual - How to train Yolo v4-v2 (to detect your custom objects)

Also, you might be interested in using a simplified repository where INT8-quantization is implemented (+30% speedup and -1% mAP): https://github.com/AlexeyAB/yolo2_light

How to use on the command line

If you use build.ps1 script or the makefile (Linux only) you will find darknet in the root directory.

If you use the deprecated Visual Studio solutions, you will find darknet in the directory \build\darknet\x64.

If you customize build with CMake GUI, darknet executable will be installed in your preferred folder.

  • Yolo v4 COCO - image: ./darknet detector test cfg/coco.data cfg/yolov4.cfg yolov4.weights -thresh 0.25
  • Output coordinates of objects: ./darknet detector test cfg/coco.data yolov4.cfg yolov4.weights -ext_output dog.jpg
  • Yolo v4 COCO - video: ./darknet detector demo cfg/coco.data cfg/yolov4.cfg yolov4.weights -ext_output test.mp4
  • Yolo v4 COCO - WebCam 0: ./darknet detector demo cfg/coco.data cfg/yolov4.cfg yolov4.weights -c 0
  • Yolo v4 COCO for net-videocam - Smart WebCam: ./darknet detector demo cfg/coco.data cfg/yolov4.cfg yolov4.weights http://192.168.0.80:8080/video?dummy=param.mjpg
  • Yolo v4 - save result videofile res.avi: ./darknet detector demo cfg/coco.data cfg/yolov4.cfg yolov4.weights test.mp4 -out_filename res.avi
  • Yolo v3 Tiny COCO - video: ./darknet detector demo cfg/coco.data cfg/yolov3-tiny.cfg yolov3-tiny.weights test.mp4
  • JSON and MJPEG server that allows multiple connections from your software or Web-browser at ip-address:8070 and 8090: ./darknet detector demo ./cfg/coco.data ./cfg/yolov3.cfg ./yolov3.weights test50.mp4 -json_port 8070 -mjpeg_port 8090 -ext_output
  • Yolo v3 Tiny on GPU #1: ./darknet detector demo cfg/coco.data cfg/yolov3-tiny.cfg yolov3-tiny.weights -i 1 test.mp4
  • Alternative method Yolo v4 COCO - image: ./darknet detect cfg/yolov4.cfg yolov4.weights -i 0 -thresh 0.25
  • Train on Amazon EC2, to see mAP & Loss-chart using URL like: http://ec2-35-160-228-91.us-west-2.compute.amazonaws.com:8090 in the Chrome/Firefox (Darknet should be compiled with OpenCV): ./darknet detector train cfg/coco.data yolov4.cfg yolov4.conv.137 -dont_show -mjpeg_port 8090 -map
  • 186 MB Yolo9000 - image: ./darknet detector test cfg/combine9k.data cfg/yolo9000.cfg yolo9000.weights
  • Remember to put data/9k.tree and data/coco9k.map in the same folder as your app if you use the C++ API to build an app
  • To process a list of images data/train.txt and save results of detection to result.json file use: ./darknet detector test cfg/coco.data cfg/yolov4.cfg yolov4.weights -ext_output -dont_show -out result.json < data/train.txt
  • To process a list of images data/train.txt and save results of detection to result.txt use: ./darknet detector test cfg/coco.data cfg/yolov4.cfg yolov4.weights -dont_show -ext_output < data/train.txt > result.txt
  • Pseudo-labelling - to process a list of images data/new_train.txt and save results of detection in Yolo training format for each image as label <image_name>.txt (in this way you can increase the amount of training data) use: ./darknet detector test cfg/coco.data cfg/yolov4.cfg yolov4.weights -thresh 0.25 -dont_show -save_labels < data/new_train.txt
  • To calculate anchors: ./darknet detector calc_anchors data/obj.data -num_of_clusters 9 -width 416 -height 416
  • To check accuracy mAP@IoU=50: ./darknet detector map data/obj.data yolo-obj.cfg backup\yolo-obj_7000.weights
  • To check accuracy mAP@IoU=75: ./darknet detector map data/obj.data yolo-obj.cfg backup\yolo-obj_7000.weights -iou_thresh 0.75
For using network video-camera mjpeg-stream with any Android smartphone
  1. Download mjpeg-stream software for your Android phone: IP Webcam / Smart WebCam

  2. Connect your Android phone to computer by WiFi (through a WiFi-router) or USB

  3. Start Smart WebCam on your phone

  4. Replace the address below with the one shown in the phone application (Smart WebCam) and launch:

  • Yolo v4 COCO-model: ./darknet detector demo data/coco.data yolov4.cfg yolov4.weights http://192.168.0.80:8080/video?dummy=param.mjpg -i 0

How to compile on Linux/macOS (using CMake)

The CMakeLists.txt will attempt to find installed optional dependencies like CUDA, cuDNN, ZED and build against them. It will also create a shared object library file so darknet can be used for code development.

To update CMake on Ubuntu, it's better to follow guide here: https://apt.kitware.com/ or https://cmake.org/download/

git clone https://github.com/AlexeyAB/darknet
cd darknet
mkdir build_release
cd build_release
cmake ..
cmake --build . --target install --parallel 8

Using also PowerShell

Install CMake, CUDA and cuDNN (see How to install dependencies).

Install PowerShell for your OS (Linux or macOS) (guide here).

Open PowerShell and type these commands:

git clone https://github.com/AlexeyAB/darknet
cd darknet
./build.ps1 -UseVCPKG -EnableOPENCV -EnableCUDA -EnableCUDNN
  • remove options like -EnableCUDA or -EnableCUDNN if you are not interested in them
  • remove option -UseVCPKG if you plan to manually provide OpenCV library to darknet or if you do not want to enable OpenCV integration
  • add option -EnableOPENCV_CUDA if you want to build OpenCV with CUDA support - very slow to build! (requires -UseVCPKG)

If you open the build.ps1 script, you will find all available switches at the beginning.

How to compile on Linux (using make)

Just do make in the darknet directory. (You can try to compile and run it on Google Colab in the cloud link (press the «Open in Playground» button at the top-left corner) and watch the video link.) Before running make, you can set such options in the Makefile: link

  • GPU=1 to build with CUDA to accelerate by using GPU (CUDA should be in /usr/local/cuda)
  • CUDNN=1 to build with cuDNN v5-v7 to accelerate training by using GPU (cuDNN should be in /usr/local/cudnn)
  • CUDNN_HALF=1 to build for Tensor Cores (on Titan V / Tesla V100 / DGX-2 and later) speedup Detection 3x, Training 2x
  • OPENCV=1 to build with OpenCV 4.x/3.x/2.4.x - allows to detect on video files and video streams from network cameras or web-cams
  • DEBUG=1 to build debug version of Yolo
  • OPENMP=1 to build with OpenMP support to accelerate Yolo by using multi-core CPU
  • LIBSO=1 to build the library darknet.so and the binary runnable file uselib that uses this library. You can try to run it with LD_LIBRARY_PATH=./:$LD_LIBRARY_PATH ./uselib test.mp4. To see how to use this SO-library from your own code, look at the C++ example: https://github.com/AlexeyAB/darknet/blob/master/src/yolo_console_dll.cpp or use it in such a way: LD_LIBRARY_PATH=./:$LD_LIBRARY_PATH ./uselib data/coco.names cfg/yolov4.cfg yolov4.weights test.mp4
  • ZED_CAMERA=1 to build a library with ZED-3D-camera support (should be ZED SDK installed), then run LD_LIBRARY_PATH=./:$LD_LIBRARY_PATH ./uselib data/coco.names cfg/yolov4.cfg yolov4.weights zed_camera
  • You also need to specify for which graphics card the code is generated. This is done by setting ARCH=. If you use a newer version than CUDA 11, you further need to edit line 20 of the Makefile and remove -gencode arch=compute_30,code=sm_30 \ as Kepler GPU support was dropped in CUDA 11. You can also drop the general ARCH= and just uncomment the ARCH= for your graphics card.

How to compile on Windows (using CMake)

Requires:

In Windows:

  • Start (button) -> All programs -> CMake -> CMake (gui) ->

  • In CMake: enter the input path to the darknet Source, and the output path for the Binaries -> Configure (button) -> Optional platform for generator: x64 -> Finish -> Generate -> Open Project ->

  • in MS Visual Studio: Select: x64 and Release -> Build -> Build solution

  • find the executable file darknet.exe in the output path to the binaries you specified


How to compile on Windows (using vcpkg)

This is the recommended approach to build Darknet on Windows.

  1. Install Visual Studio 2017 or 2019. In case you need to download it, please go here: Visual Studio Community. Remember to install English language pack, this is mandatory for vcpkg!

  2. Install CUDA enabling VS Integration during installation.

  3. Open Powershell (Start -> All programs -> Windows Powershell) and type these commands:

Set-ExecutionPolicy unrestricted -Scope CurrentUser -Force
git clone https://github.com/AlexeyAB/darknet
cd darknet
.\build.ps1 -UseVCPKG -EnableOPENCV -EnableCUDA -EnableCUDNN

(add option -EnableOPENCV_CUDA if you want to build OpenCV with CUDA support - very slow to build! - or remove options like -EnableCUDA or -EnableCUDNN if you are not interested in them). If you open the build.ps1 script at the beginning you will find all available switches.

How to train with multi-GPU

  1. Train it first on 1 GPU for like 1000 iterations: darknet.exe detector train cfg/coco.data cfg/yolov4.cfg yolov4.conv.137

  2. Then stop and, using the partially-trained model /backup/yolov4_1000.weights, run training with multiple GPUs (up to 4 GPUs): darknet.exe detector train cfg/coco.data cfg/yolov4.cfg /backup/yolov4_1000.weights -gpus 0,1,2,3

If you get NaN values, then for some datasets it is better to decrease the learning rate; for 4 GPUs set learning_rate = 0.00065 (i.e. learning_rate = 0.00261 / GPUs). In this case also increase burn_in 4x in your cfg-file, i.e. use burn_in = 4000 instead of 1000.

https://groups.google.com/d/msg/darknet/NbJqonJBTSY/Te5PfIpuCAAJ

How to train (to detect your custom objects)

(to train old Yolo v2 yolov2-voc.cfg, yolov2-tiny-voc.cfg, yolo-voc.cfg, yolo-voc.2.0.cfg, ... click the link)

Training Yolo v4 (and v3):

  0. For training cfg/yolov4-custom.cfg download the pre-trained weights-file (162 MB): yolov4.conv.137 (Google drive mirror yolov4.conv.137)
  1. Create file yolo-obj.cfg with the same content as in yolov4-custom.cfg (or copy yolov4-custom.cfg to yolo-obj.cfg) and: change classes=80 to your number of objects in each of the 3 [yolo]-layers, and change filters=255 to filters=(classes + 5)x3 in the 3 [convolutional] layers immediately before each [yolo]-layer.

So if classes=1 then it should be filters=18. If classes=2 then write filters=21. (Do not write the expression filters=(classes + 5)x3 literally in the cfg-file)

(Generally filters depends on the classes, coords and number of masks, i.e. filters=(classes + coords + 1)*<number of mask>, where mask is the indices of anchors. If mask is absent, then filters=(classes + coords + 1)*num; see the sketch after the example below)

So for example, for 2 objects, your file yolo-obj.cfg should differ from yolov4-custom.cfg in such lines in each of 3 [yolo]-layers:

[convolutional]
filters=21

[yolo]
classes=2
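As a sanity check, here is a minimal sketch (not part of Darknet) that computes the filters value from the rule above; it assumes coords=4 and 3 masks per [yolo]-layer, as in the standard yolov4-custom.cfg:

#include <cstdio>

// filters = (classes + coords + 1) * <number of masks>
// For the standard yolov4-custom.cfg: coords = 4 and 3 masks per [yolo]-layer.
int yolo_conv_filters(int classes, int coords = 4, int num_masks = 3) {
    return (classes + coords + 1) * num_masks;
}

int main() {
    printf("classes=1  -> filters=%d\n", yolo_conv_filters(1));   // 18
    printf("classes=2  -> filters=%d\n", yolo_conv_filters(2));   // 21
    printf("classes=80 -> filters=%d\n", yolo_conv_filters(80));  // 255 (MS COCO)
    return 0;
}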
  2. Create file obj.names in the directory build\darknet\x64\data\, with object names - each in a new line
  3. Create file obj.data in the directory build\darknet\x64\data\, containing (where classes = number of objects):
classes = 2
train  = data/train.txt
valid  = data/test.txt
names = data/obj.names
backup = backup/
  4. Put image-files (.jpg) of your objects in the directory build\darknet\x64\data\obj\
  5. You should label each object on images from your dataset. Use this visual GUI-software for marking bounded boxes of objects and generating annotation files for Yolo v2 & v3: https://github.com/AlexeyAB/Yolo_mark

It will create a .txt-file for each .jpg-image-file - in the same directory and with the same name, but with .txt-extension - and put into that file the object number and object coordinates on this image, one line per object:

<object-class> <x_center> <y_center> <width> <height>

Where:

  • <object-class> - integer object number from 0 to (classes-1)

  • <x_center> <y_center> <width> <height> - float values relative to the width and height of the image; they can be in the range (0.0 to 1.0]

  • for example: <x> = <absolute_x> / <image_width> or <height> = <absolute_height> / <image_height>

  • attention: <x_center> <y_center> are the center of the rectangle (not the top-left corner)

    For example, for img1.jpg an img1.txt will be created containing:

    1 0.716797 0.395833 0.216406 0.147222
    0 0.687109 0.379167 0.255469 0.158333
    1 0.420312 0.395833 0.140625 0.166667
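A minimal sketch (not part of Darknet) of how such a label line can be produced from an absolute pixel box; the struct and function names are illustrative, only the normalization rule comes from the format described above:

#include <cstdio>

// Hypothetical absolute box: top-left corner plus size, in pixels.
struct PixelBox { int object_class; int left, top, width, height; };

// Print a YOLO label line: <object-class> <x_center> <y_center> <width> <height>,
// where the last four values are relative to the image width and height.
void print_yolo_label(const PixelBox &b, int image_width, int image_height) {
    float x_center = (b.left + b.width  / 2.0f) / image_width;
    float y_center = (b.top  + b.height / 2.0f) / image_height;
    float w = (float)b.width  / image_width;
    float h = (float)b.height / image_height;
    printf("%d %.6f %.6f %.6f %.6f\n", b.object_class, x_center, y_center, w, h);
}

int main() {
    // e.g. a 100x120 px object of class 1 with top-left corner (300, 200) in a 640x480 image
    print_yolo_label({1, 300, 200, 100, 120}, 640, 480);
    return 0;
}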
    
  6. Create file train.txt in directory build\darknet\x64\data\, with the filenames of your images, each filename in a new line, with path relative to darknet.exe, for example containing:
data/obj/img1.jpg
data/obj/img2.jpg
data/obj/img3.jpg
  7. Download pre-trained weights for the convolutional layers and put them in the directory build\darknet\x64

  8. Start training by using the command line: darknet.exe detector train data/obj.data yolo-obj.cfg yolov4.conv.137

    To train on Linux use command: ./darknet detector train data/obj.data yolo-obj.cfg yolov4.conv.137 (just use ./darknet instead of darknet.exe)

    • (file yolo-obj_last.weights will be saved to build\darknet\x64\backup\ every 100 iterations)
    • (file yolo-obj_xxxx.weights will be saved to build\darknet\x64\backup\ every 1000 iterations)
    • (to disable the Loss-Window use darknet.exe detector train data/obj.data yolo-obj.cfg yolov4.conv.137 -dont_show, if you train on a computer without a monitor, e.g. a cloud Amazon EC2 instance)
    • (to see the mAP & Loss-chart during training on a remote server without GUI, use the command darknet.exe detector train data/obj.data yolo-obj.cfg yolov4.conv.137 -dont_show -mjpeg_port 8090 -map then open the URL http://ip-address:8090 in a Chrome/Firefox browser)

8.1. For training with mAP (mean average precision) calculation every 4 epochs, set valid=valid.txt (or train.txt) in the obj.data file and run: darknet.exe detector train data/obj.data yolo-obj.cfg yolov4.conv.137 -map

  9. After training is complete - get the result yolo-obj_final.weights from the path build\darknet\x64\backup\

    • After each 100 iterations you can stop and later start training from this point. For example, after 2000 iterations you can stop training, and later just start training using: darknet.exe detector train data/obj.data yolo-obj.cfg backup\yolo-obj_2000.weights

    (in the original repository https://github.com/pjreddie/darknet the weights-file is saved only once every 10 000 iterations if(iterations > 1000))

    • Also you can get result earlier than all 45000 iterations.

Note: If during training you see nan values for avg (loss) field - then training goes wrong, but if nan is in some other lines - then training goes well.

Note: If you changed width= or height= in your cfg-file, then new width and height must be divisible by 32.

Note: After training use such command for detection: darknet.exe detector test data/obj.data yolo-obj.cfg yolo-obj_8000.weights

Note: if error Out of memory occurs then in .cfg-file you should increase subdivisions=16, 32 or 64: link

How to train tiny-yolo (to detect your custom objects)

Do all the same steps as for the full yolo model as described above. With the exception of:

  • Download the file with the first 29 convolutional layers of yolov4-tiny: https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v4_pre/yolov4-tiny.conv.29 (Or get this file from the yolov4-tiny.weights file by using the command: darknet.exe partial cfg/yolov4-tiny-custom.cfg yolov4-tiny.weights yolov4-tiny.conv.29 29)
  • Make your custom model yolov4-tiny-obj.cfg based on cfg/yolov4-tiny-custom.cfg instead of yolov4.cfg
  • Start training: darknet.exe detector train data/obj.data yolov4-tiny-obj.cfg yolov4-tiny.conv.29

For training Yolo based on other models (DenseNet201-Yolo or ResNet50-Yolo), you can download and get pre-trained weights as shown in this file: https://github.com/AlexeyAB/darknet/blob/master/build/darknet/x64/partial.cmd If you made your own custom model that isn't based on other models, then you can train it without pre-trained weights; random initial weights will be used.

When should I stop training

Usually 2000 iterations are sufficient for each class (object), but not less than the number of training images and not less than 6000 iterations in total. For a more precise definition of when you should stop training, use the following manual:

  1. During training, you will see varying indicators of error, and you should stop when 0.XXXXXXX avg no longer decreases:

Region Avg IOU: 0.798363, Class: 0.893232, Obj: 0.700808, No Obj: 0.004567, Avg Recall: 1.000000, count: 8
Region Avg IOU: 0.800677, Class: 0.892181, Obj: 0.701590, No Obj: 0.004574, Avg Recall: 1.000000, count: 8

9002: 0.211667, 0.60730 avg, 0.001000 rate, 3.868000 seconds, 576128 images Loaded: 0.000000 seconds

  • 9002 - iteration number (number of batch)

  • 0.60730 avg - average loss (error) - the lower, the better

    When you see that the average loss 0.xxxxxx avg no longer decreases over many iterations, you should stop training. The final average loss can be from 0.05 (for a small model and an easy dataset) to 3.0 (for a big model and a difficult dataset). (A small sketch for extracting the avg loss from the training log follows at the end of this section.)

    Or if you train with the flag -map you will see an mAP indicator Last accuracy mAP@0.5 = 18.50% in the console - this indicator is better than Loss, so train while mAP increases.

  2. Once training is stopped, you should take some of the last .weights-files from darknet\build\darknet\x64\backup and choose the best of them:

For example, you stopped training after 9000 iterations, but the best result may be given by one of the previous weights (7000, 8000, 9000). This can happen due to over-fitting. Over-fitting is the case when you can detect objects on images from the training dataset, but can't detect objects on any other images. You should get the weights from the Early Stopping Point.


To get weights from Early Stopping Point:

2.1. At first, in your file obj.data you must specify the path to the validation dataset valid = valid.txt (format of valid.txt as in train.txt), and if you haven't validation images, just copy data\train.txt to data\valid.txt.

2.2 If training is stopped after 9000 iterations, to validate some of the previous weights use these commands:

(If you use another GitHub repository, then use darknet.exe detector recall... instead of darknet.exe detector map...)

  • darknet.exe detector map data/obj.data yolo-obj.cfg backup\yolo-obj_7000.weights
  • darknet.exe detector map data/obj.data yolo-obj.cfg backup\yolo-obj_8000.weights
  • darknet.exe detector map data/obj.data yolo-obj.cfg backup\yolo-obj_9000.weights

And compare last output lines for each weights (7000, 8000, 9000):

Choose weights-file with the highest mAP (mean average precision) or IoU (intersect over union)

For example, if yolo-obj_8000.weights gives the highest mAP, then use these weights for detection.

Or just train with -map flag:

darknet.exe detector train data/obj.data yolo-obj.cfg yolov4.conv.137 -map

So you will see the mAP-chart (red line) in the Loss-chart window. mAP will be calculated every 4 epochs using the valid=valid.txt file that is specified in the obj.data file (1 epoch = images_in_train_txt / batch iterations; e.g. with 8000 training images and batch=64, 1 epoch = 125 iterations, so mAP is calculated every 500 iterations)

(to change the max x-axis value - change max_batches= parameter to 2000*classes, f.e. max_batches=6000 for 3 classes)
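A small sketch (not part of Darknet) for pulling the iteration number and avg loss out of the training log, if you want to plot them yourself; it assumes the log line format shown earlier in this section ("9002: 0.211667, 0.60730 avg, ..."):

#include <cstdio>
#include <iostream>
#include <string>

// Extract the iteration number and average loss from a darknet training log line such as
// "9002: 0.211667, 0.60730 avg, 0.001000 rate, 3.868000 seconds, 576128 images"
// Other log lines are ignored.
bool parse_train_line(const std::string &line, int &iteration, float &avg_loss) {
    float total_loss = 0.0f;
    return line.find(" avg,") != std::string::npos &&
           std::sscanf(line.c_str(), "%d: %f, %f avg", &iteration, &total_loss, &avg_loss) == 3;
}

int main() {
    std::string line;
    int iteration; float avg_loss;
    while (std::getline(std::cin, line))            // e.g. ./darknet detector train ... | ./parse_log
        if (parse_train_line(line, iteration, avg_loss))
            printf("%d %f\n", iteration, avg_loss); // iteration and avg loss, ready for plotting
    return 0;
}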


Example of custom object detection: darknet.exe detector test data/obj.data yolo-obj.cfg yolo-obj_8000.weights

  • IoU (intersect over union) - average intersect over union of objects and detections for a certain threshold = 0.24

  • mAP (mean average precision) - mean value of average precisions for each class, where average precision is average value of 11 points on PR-curve for each possible threshold (each probability of detection) for the same class (Precision-Recall in terms of PascalVOC, where Precision=TP/(TP+FP) and Recall=TP/(TP+FN) ), page-11: http://homepages.inf.ed.ac.uk/ckiw/postscript/ijcv_voc09.pdf

mAP is the default metric of precision in the PascalVOC competition and is the same as the AP50 metric in the MS COCO competition. In terms of Wiki, the indicators Precision and Recall have a slightly different meaning than in the PascalVOC competition, but IoU always has the same meaning.
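For reference, a minimal sketch of the 11-point interpolated average precision described above (this is not Darknet's own implementation); it assumes you already have precision/recall pairs for one class, computed at descending detection thresholds:

#include <algorithm>
#include <cstdio>
#include <vector>

struct PRPoint { float recall; float precision; };

// PascalVOC-style 11-point interpolated AP: average, over the recall thresholds
// 0.0, 0.1, ..., 1.0, of the maximum precision achieved at recall >= threshold.
float average_precision_11pt(const std::vector<PRPoint> &curve) {
    float sum = 0.0f;
    for (int i = 0; i <= 10; ++i) {
        float r = i / 10.0f;
        float best = 0.0f;
        for (const PRPoint &p : curve)
            if (p.recall >= r) best = std::max(best, p.precision);
        sum += best;
    }
    return sum / 11.0f;
}

int main() {
    // Toy PR curve for one class (values are illustrative only).
    std::vector<PRPoint> curve = {{0.1f, 1.0f}, {0.4f, 0.8f}, {0.7f, 0.6f}, {0.9f, 0.4f}};
    printf("AP (11-point) = %f\n", average_precision_11pt(curve));
    return 0;
}

mAP is then the mean of this AP over all classes.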


Custom object detection

Example of custom object detection: darknet.exe detector test data/obj.data yolo-obj.cfg yolo-obj_8000.weights


How to improve object detection

  1. Before training:
  • set flag random=1 in your .cfg-file - it will increase precision by training Yolo for different resolutions: link

  • increase network resolution in your .cfg-file (height=608, width=608 or any value multiple of 32) - it will increase precision

  • check that each object that you want to detect is labeled in your dataset - no object in your dataset should be without a label. In most training issues there are wrong labels in the dataset (labels were obtained by using some conversion script, marked with a third-party tool, ...). Always check your dataset by using: https://github.com/AlexeyAB/Yolo_mark

  • my Loss is very high and mAP is very low - is training wrong? Run training with the -show_imgs flag at the end of the training command; do you see correct bounded boxes of objects (in windows or in files aug_...jpg)? If not - your training dataset is wrong.

  • for each object which you want to detect - there must be at least 1 similar object in the Training dataset with about the same: shape, side of object, relative size, angle of rotation, tilt, illumination. It is desirable that your training dataset includes images with objects at different: scales, rotations, lightings, from different sides, on different backgrounds - you should preferably have 2000 different images for each class or more, and you should train for 2000*classes iterations or more

  • it is desirable that your training dataset includes images with non-labeled objects that you do not want to detect - negative samples without bounded boxes (empty .txt files) - use as many images of negative samples as there are images with objects

  • What is the best way to mark objects: label only the visible part of the object, or label the visible and overlapped part of the object, or label a little more than the entire object (with a little gap)? Mark as you like - how would you like it to be detected.

  • for training with a large number of objects in each image, add the parameter max=200 or a higher value in the last [yolo]-layer or [region]-layer in your cfg-file (the global maximum number of objects that can be detected by YoloV3 is 0.0615234375*(width*height), where width and height are parameters from the [net] section of the cfg-file; e.g. for width=height=416 this gives 0.0615234375*416*416 = 10647 objects)

  • for training for small objects (smaller than 16x16 after the image is resized to 416x416) - set layers = 23 instead of https://github.com/AlexeyAB/darknet/blob/6f718c257815a984253346bba8fb7aa756c55090/cfg/yolov4.cfg#L895

  • for training for both small and large objects use modified models:

  • If you train the model to distinguish Left and Right objects as separate classes (left/right hand, left/right-turn on road signs, ...) then for disabling flip data augmentation - add flip=0 here: https://github.com/AlexeyAB/darknet/blob/3d2d0a7c98dbc8923d9ff705b81ff4f7940ea6ff/cfg/yolov3.cfg#L17

  • General rule - your training dataset should include such a set of relative sizes of objects that you want to detect (a small sketch of this check follows after this list):

    • train_network_width * train_obj_width / train_image_width ~= detection_network_width * detection_obj_width / detection_image_width
    • train_network_height * train_obj_height / train_image_height ~= detection_network_height * detection_obj_height / detection_image_height

    I.e. for each object from Test dataset there must be at least 1 object in the Training dataset with the same class_id and about the same relative size:

    object width in percent from Training dataset ~= object width in percent from Test dataset

    That is, if only objects that occupied 80-90% of the image were present in the training set, then the trained network will not be able to detect objects that occupy 1-10% of the image.

  • to speedup training (with decreasing detection accuracy) set param stopbackward=1 for layer-136 in cfg-file

  • each: model of object, side, illumination, scale, each 30 degrees of turn and inclination angle - these are different objects from the internal perspective of the neural network. So the more different objects you want to detect, the more complex a network model should be used.

  • to make the detected bounded boxes more accurate, you can add 3 parameters ignore_thresh = .9 iou_normalizer=0.5 iou_loss=giou to each [yolo] layer and train; it will increase mAP@0.9, but decrease mAP@0.5.

  • Only if you are an expert in neural detection networks - recalculate anchors for your dataset for width and height from cfg-file: darknet.exe detector calc_anchors data/obj.data -num_of_clusters 9 -width 416 -height 416 then set the same 9 anchors in each of 3 [yolo]-layers in your cfg-file. But you should change indexes of anchors masks= for each [yolo]-layer, so for YOLOv4 the 1st-[yolo]-layer has anchors smaller than 30x30, 2nd smaller than 60x60, 3rd remaining, and vice versa for YOLOv3. Also you should change the filters=(classes + 5)*<number of mask> before each [yolo]-layer. If many of the calculated anchors do not fit under the appropriate layers - then just try using all the default anchors.

  2. After training - for detection:
  • Increase the network resolution by setting in your .cfg-file (height=608 and width=608) or (height=832 and width=832) or (any value multiple of 32) - this increases the precision and makes it possible to detect small objects: link

  • it is not necessary to train the network again, just use .weights-file already trained for 416x416 resolution

  • to get even greater accuracy you should train with higher resolution 608x608 or 832x832, note: if error Out of memory occurs then in .cfg-file you should increase subdivisions=16, 32 or 64: link
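A small sketch (all numbers and function names are illustrative) of the relative-size check from the General rule above:

#include <cstdio>

// Relative object size as used in the General rule:
// network_width * object_width_px / image_width_px (and similarly for height).
float relative_size(float network_dim, float obj_px, float image_px) {
    return network_dim * obj_px / image_px;
}

int main() {
    // Training image: 416-wide network, a 200 px wide object in a 1000 px wide image.
    printf("train  relative width: %.1f\n", relative_size(416, 200, 1000));  // 83.2
    // Detection frame: same network, a 50 px wide object in a 1920 px wide frame.
    printf("detect relative width: %.1f\n", relative_size(416, 50, 1920));   // ~10.8
    // These two values should be about the same; here they differ a lot,
    // so such small objects would likely be missed by the trained network.
    return 0;
}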

How to mark bounded boxes of objects and create annotation files

Here you can find repository with GUI-software for marking bounded boxes of objects and generating annotation files for Yolo v2 - v4: https://github.com/AlexeyAB/Yolo_mark

With examples of: train.txt, obj.names, obj.data, yolo-obj.cfg, air1-6.txt, bird1-4.txt for 2 classes of objects (air, bird) and train_obj.cmd with an example of how to train this image-set with Yolo v2 - v4

Different tools for marking objects in images:

  1. in C++: https://github.com/AlexeyAB/Yolo_mark
  2. in Python: https://github.com/tzutalin/labelImg
  3. in Python: https://github.com/Cartucho/OpenLabeling
  4. in C++: https://www.ccoderun.ca/darkmark/
  5. in JavaScript: https://github.com/opencv/cvat
  6. in C++: https://github.com/jveitchmichaelis/deeplabel
  7. in C#: https://github.com/BMW-InnovationLab/BMW-Labeltool-Lite
  8. DL-Annotator for Windows ($30): url
  9. v7labs - the greatest cloud labeling tool ($1.5 per hour): https://www.v7labs.com/

How to use Yolo as DLL and SO libraries

  • on Linux
    • using build.sh or
    • build darknet using cmake or
    • set LIBSO=1 in the Makefile and do make
  • on Windows
    • using build.ps1 or
    • build darknet using cmake or
    • compile build\darknet\yolo_cpp_dll.sln solution or build\darknet\yolo_cpp_dll_no_gpu.sln solution

There are 2 APIs:


  1. To compile Yolo as C++ DLL-file yolo_cpp_dll.dll - open the solution build\darknet\yolo_cpp_dll.sln, set x64 and Release, and do the: Build -> Build yolo_cpp_dll

    • You should have installed CUDA 10.2
    • To use cuDNN do: (right click on project) -> properties -> C/C++ -> Preprocessor -> Preprocessor Definitions, and add at the beginning of line: CUDNN;
  2. To use Yolo as DLL-file in your C++ console application - open the solution build\darknet\yolo_console_dll.sln, set x64 and Release, and do the: Build -> Build yolo_console_dll

    • you can run your console application from Windows Explorer build\darknet\x64\yolo_console_dll.exe or use this command: yolo_console_dll.exe data/coco.names yolov4.cfg yolov4.weights test.mp4

    • after launching your console application and entering the image file name - you will see info for each object: <obj_id> <left_x> <top_y> <width> <height> <probability>

    • to use simple OpenCV-GUI you should uncomment line //#define OPENCV in yolo_console_dll.cpp-file: link

    • you can see source code of simple example for detection on the video file: link

yolo_cpp_dll.dll-API: link

struct bbox_t {
    unsigned int x, y, w, h;    // (x,y) - top-left corner, (w, h) - width & height of bounded box
    float prob;                    // confidence - probability that the object was found correctly
    unsigned int obj_id;        // class of object - from range [0, classes-1]
    unsigned int track_id;        // tracking id for video (0 - untracked, 1 - inf - tracked object)
    unsigned int frames_counter;// counter of frames on which the object was detected
};

class Detector {
public:
        Detector(std::string cfg_filename, std::string weight_filename, int gpu_id = 0);
        ~Detector();

        std::vector<bbox_t> detect(std::string image_filename, float thresh = 0.2, bool use_mean = false);
        std::vector<bbox_t> detect(image_t img, float thresh = 0.2, bool use_mean = false);
        static image_t load_image(std::string image_filename);
        static void free_image(image_t m);

#ifdef OPENCV
        std::vector<bbox_t> detect(cv::Mat mat, float thresh = 0.2, bool use_mean = false);
        std::shared_ptr<image_t> mat_to_image_resize(cv::Mat mat) const;
#endif
};
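A minimal usage sketch for the Detector class declared above (a simplified variant of what yolo_console_dll.cpp does); it assumes the yolo_v2_class.hpp header from this repository is on the include path, that you link against the built library, and that the cfg/weights/names files exist:

#include <fstream>
#include <iostream>
#include <string>
#include <vector>

#include "yolo_v2_class.hpp"   // declares bbox_t and Detector (header from this repository)

int main() {
    Detector detector("cfg/yolov4.cfg", "yolov4.weights");   // gpu_id = 0 by default

    // Class names, one per line, in the same order as used during training.
    std::vector<std::string> names;
    std::ifstream f("data/coco.names");
    for (std::string line; std::getline(f, line); ) names.push_back(line);

    // Detect objects on a single image with a 0.25 confidence threshold.
    std::vector<bbox_t> result = detector.detect("dog.jpg", 0.25f);

    for (const bbox_t &b : result) {
        std::cout << (b.obj_id < names.size() ? names[b.obj_id] : "unknown")
                  << "  prob=" << b.prob
                  << "  box=(" << b.x << ", " << b.y << ", " << b.w << ", " << b.h << ")\n";
    }
    return 0;
}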

Citation

@misc{bochkovskiy2020yolov4,
      title={YOLOv4: Optimal Speed and Accuracy of Object Detection}, 
      author={Alexey Bochkovskiy and Chien-Yao Wang and Hong-Yuan Mark Liao},
      year={2020},
      eprint={2004.10934},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
@InProceedings{Wang_2021_CVPR,
    author    = {Wang, Chien-Yao and Bochkovskiy, Alexey and Liao, Hong-Yuan Mark},
    title     = {{Scaled-YOLOv4}: Scaling Cross Stage Partial Network},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2021},
    pages     = {13029-13038}
}
Comments
  • EfficientNet | Implementation ?

    https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet https://ai.googleblog.com/2019/05/efficientnet-improving-accuracy-and.html https://www.youtube.com/watch?v=3svIm5UC94I

    This is good.

    enhancement 
    opened by dexception 218
  • ASFF - Learning Spatial Fusion for Single-Shot Object Detection - 63% mAP@0.5 with 45.5FPS

    opened by Kyuuki93 140
  • CSPNet - New models and the most comprehensive comparison of detection models

    CSPNet: A New Backbone that can Enhance Learning Capability of CNN. The most comprehensive comparison of detection models:

    • paper: https://arxiv.org/abs/1911.11929v1
    • models: https://github.com/WongKinYiu/CrossStagePartialNetworks

    New model - this model can be used with this Darknet repository:

    • CSPResNeXt50-PANet-SPP - 512x512 - 38.0% mAP@0.5:0.95, 60.0% mAP@0.5 - 1.5x faster than Yolov3-SPP with the same mAP@0.5 accuracy and higher mAP@0.5:0.95 accuracy:
      • cfg: https://raw.githubusercontent.com/WongKinYiu/CrossStagePartialNetworks/master/cfg/csresnext50-panet-spp.cfg
      • weights: https://drive.google.com/open?id=1Y6vJQf-Vu9O0tB10IUYNttktA-DLp5T1

    It's interesting, that:

    • old Yolov3 416x416 (without SPP) has higher mAP@0.5 and is faster than ResNet101-CenterNet 512x512

    • private model CSPPeleeNet - EFM (SAM) 512x512 2x faster with approximately the same accuracy as Yolo v3 320x320

    • our classifier EfficientNet B0 (224x224) 0.9 BFLOPS - 0.45 B_FMA (16ms / RTX 2070), 4.9M params, 71.3% Top1 | 90.4% Top5 has higher accuracy than official EfficientNet B0(official) (224x224) 0.78 BFLOPS - 0.39 B_FMA, 5.3M params, 70.0% Top1 | 88.9% Top5 https://github.com/WongKinYiu/CrossStagePartialNetworks#small-models






    FPS is measured by using commands like this: classifier_bench_models.cmd.txt ./darknet classifier demo cfg/imagenet1k_c.data models/csdarknet53.cfg models/csdarknet53.weights test.mp4 -benchmark

    • GPU FPS - Darknet (GPU=1 CUDNN=1 CUDNN_HALF=1) - GeForce RTX 2070 (Tensor Cores)
    • CPU FPS - Darknet (AVX=1 OPENMP=1) - Intel Core i7 6700K (4 Cores / 8 Logical-cores)

    FPS values are given:

    • for original network resolution from cfg-file
    • in parentheses (FPS for network resolution 512x512)
    • in square brackets [FPS for network resolution 608x608]

    Conclusion:

    • use filters=256 groups=2 and filters=512 groups=8 for 256x256 network resolution
    • use groups=8 for 512x512 network resolution
    • use groups=32 for 608x608 and higher network resolution

    Big Models

    | Model | #Parameter | BFLOPs | Top-1 | Top-5 | cfg | weight | GPU FPS orig (512) [608] | CPU FPS orig (512) [608] |
    | :---- | :--------: | :----: | :---: | :---: | :-: | :----: | :--: | :--: |
    | CSPDarknet19-fast | - | 2.5 | - | - | cfg | - | 213 (149) [116] | 19.7 (4.6) [3.2] |
    | Spinenet49 | - | 8.5 | - | - | cfg | - | 49 (44) [43] | 7.6 (2.0) [1.4] |
    | DarkNet-53 [1] | 41.57M | 18.57 | 77.2 | 93.8 | cfg | weight | 113 (56) [38] | 4.9 (1.1) |
    | CSPDarkNet-53g | - | 11.03 (-40%) | - | - | cfg | - | 122 (64) [46] | 6.6 (1.5) [1.1] |
    | CSPDarkNet-53ghr | 9.74M (-76%) | 5.67 (-70%) | - | - | cfg | - | 100 (75) [57] | 8.5 (2.0) [1.5] |
    | CSPDarkNet-53 | 27.61M (-34%) | 13.07 (-30%) | 77.2 (=) | 93.6 (-0.2) | cfg | weight | 101 (57) [41] | 6.0 (1.3) [1.0] |
    | CSPDarkNet-53-Elastic | - | 7.74 (-58%) | 76.1 (-1.1) | 93.3 (-0.5) | cfg | weight | 66 | 7.5 |
    | ResNet-50 [2] | 22.73M | 9.74 | 75.8 | 92.9 | cfg | weight | 135 (73) | 8.6 (2.0) |
    | CSPResNet-50 | 21.57M (-5%) | 8.97 (-8%) | 76.6 (+0.8) | 93.3 (+0.4) | cfg | weight | 131 (81) | 9.2 (2.2) |
    | CSPResNet-50-Elastic | - | 9.36 (-4%) | 76.8 (+1.0) | 93.5 (+0.6) | cfg | weight | 77 | 7.8 |
    | ResNeXt-50 [3] | 22.19M | 10.11 | 77.8 | 94.2 | cfg | weight | 71 (60) | 6.2 (1.5) |
    | CSPResNeXt-50 | 20.50M (-8%) | 7.93 (-22%) | 77.9 (+0.1) | 94.0 (-0.2) | cfg | weight | 67 (58) [50] | 6.9 (1.8) [1.28] |
    | CSPResNeXt-50-gpu | 24.93M (+12%) | 9.89 (-2%) | - | - | cfg | - | 102 (66) [49] | 6.9 (1.7) [1.25] |
    | CSPResNeXt-50-fast | 21.73 (-2%) | 8.81 (-13%) | - | - | cfg | - | 85 (67) [51] | 7.8 (1.9) [1.4] |
    | CSPResNeXt-50-Elastic | - | 5.45 (-46%) | 77.2 (-0.6) | 93.8 (-0.4) | cfg | weight | 47 | 8.8 |
    | HarDNet-138s [4] | 35.5M | 13.4 | 77.8 | - | - | - | | |
    | DenseNet-264-32 [5] | 27.21M | 11.03 | 77.8 | 93.9 | - | - | | |
    | ResNet-152 [2] | 60.2M | 22.6 | 77.8 | 93.6 | - | - | | |
    | DenseNet-201-Elastic [6] | 19.48M | 8.77 | 77.9 | 94.0 | - | - | | |
    | CSPDenseNet-201-Elastic | 20.17M (+4%) | 7.13 (-19%) | 77.9 (=) | 94.0 (=) | - | - | | |
    | Res2NetLite-72 [7] | - | 5.19 | 74.7 | 92.1 | cfg | weight | 102 | 12.3 |

    Small Models

    | Model | #Parameter | BFLOPs | Top-1 | Top-5 | cfg | weight | GPU FPS | CPU FPS |
    | :---- | :--------: | :----: | :---: | :---: | :-: | :----: | :-: | :-: |
    | PeleeNet [8] | 2.79M | 1.017 | 70.7 | 90.0 | - | - | | |
    | PeleeNet-swish | 2.79M | 1.017 | 71.5 | 90.7 | - | - | | |
    | PeleeNet-swish-SE | 2.81M | 1.017 | 72.1 | 91.0 | - | - | | |
    | CSPPeleeNet | 2.83M (+1%) | 0.888 (-13%) | 70.9 (+0.2) | 90.2 (+0.2) | - | - | | |
    | CSPPeleeNet-swish | 2.83M (+1%) | 0.888 (-13%) | 71.7 (+0.2) | 90.8 (+0.1) | - | - | | |
    | CSPPeleeNet-swish-SE | 2.85M (+1%) | 0.888 (-13%) | 72.4 (+0.3) | 91.0 (=) | - | - | | |
    | SparsePeleeNet [9] | 2.39M | 0.904 | 69.6 | 89.3 | - | - | | |
    | EfficientNet-B0* [10] | 4.81M | 0.915 | 71.3 | 90.4 | cfg | weight | 143 | 6.3 |
    | EfficientNet-B0 (official) [10] | - | - | 70.0 | 88.9 | - | - | | |
    | MobileNet-v2 [11] | 3.47M | 0.858 | 67.0 | 87.7 | cfg | weight | 253 | 7.4 |
    | CSPMobileNet-v2 | 2.51M (-28%) | 0.764 (-11%) | 67.7 (+0.7) | 88.3 (+0.6) | cfg | weight | 218 | 7.9 |
    | Darknet Ref. [12] | 7.31M | 0.96 | 61.1 | 83.0 | cfg | weight | 511 | 51 |
    | CSPDenseNet Ref. | 3.48M (-52%) | 0.886 (-8%) | 65.7 (+4.6) | 86.6 (+3.6) | - | - | | |
    | CSPPeleeNet Ref. | 4.10M (-44%) | 1.103 (+15%) | 68.9 (+7.8) | 88.7 (+5.7) | - | - | | |
    | CSPDenseNetb Ref. | 1.38M (-81%) | 0.631 (-34%) | 64.2 (+3.1) | 85.5 (+2.5) | - | - | | |
    | CSPPeleeNetb Ref. | 2.01M (-73%) | 0.897 (-7%) | 67.8 (+6.7) | 88.1 (+5.1) | - | - | | |
    | ResNet-10 [2] | 5.24M | 2.273 | 63.5 | 85.0 | cfg | weight | 425 | 29.2 |
    | CSPResNet-10 | 2.73M (-48%) | 1.905 (-16%) | 65.3 (+1.8) | 86.5 (+1.5) | - | - | | |
    | MixNet-M-GPU | - | 1.065 | 71.5 | 90.5 | cfg | - | 86 | 4.7 |
    | MixNet-M | - | 0.759 | - | - | cfg | - | 87 | 3.4 |
    | GhostNet-1.0 | - | 0.234 | - | - | cfg | - | 62 | 12.2 |


    The CUDA version is V9.0.252 and it was built on 11.19.2017, so maybe cuDNN does not support group convolution well.

    | Model | GPU | 256×256 | 512×512 | 608×608 |
    | :-- | :-: | :-: | :-: | :-: |
    | CSPResNeXt50-GPU | Titan X Pascal | 126 | 70 | 57 |
    | CSPResNeXt50 | Titan X Pascal | 103 | 65 | 55 |
    | CSPResNeXt50-fast | Titan X Pascal | 117 | 70 | 57 |
    | CSPDarknet19 | Titan X Pascal | 242 | 144 | 118 |
    | CSPDarknet53 | Titan X Pascal | 132 | 71 | 56 |
    | CSPDarknet53-G | Titan X Pascal | 123 | 69 | 56 |
    | CSPDarknet53-GHR | Titan X Pascal | 126 | 76 | 61 |
    | Spinenet49 | Titan X Pascal | 73 | 52 | 42 |


    Detector FPS on GeForce RTX 2070 (Tensor Cores):

    • FPS - measured using the command: ./darknet detector demo cfg/coco.data ... -benchmark

    CUDNN_HALF=1 (Mixed-precision is forced for Tensor Cores (if groups==1))

    1. 512x512:

      • yolov3-spp - 52.0 FPS - (--ms )
      • csresnext50-panet-spp - 36.5 FPS - (--ms )
    2. 608x608:

      • yolov3-spp - 38.0 FPS - (--ms )
      • csresnext50-panet-spp - 33.9 FPS (--ms )

    CUDNN_HALF=0

    1. 512x512:

      • yolov3-spp - 41.4 FPS - (--ms)
      • csresnext50-panet-spp - 34.5 FPS - (--ms)
    2. 608x608:

      • yolov3-spp - 26.1 FPS - (--ms)
      • csresnext50-panet-spp - 30.0 FPS (--ms)
    enhancement 
    opened by AlexeyAB 124
  • Repo Claims To Be YOLOv5

    Hey there,

    This repo is claiming to be YOLOv5: https://github.com/ultralytics/yolov5

    ~~They released~~ a blog here: https://blog.roboflow.ai/yolov5-is-here/

    It's being discussed on HN here: https://news.ycombinator.com/item?id=23478151

    In all honesty this looks like some bullshit company stole the name, but it would be good to get some proper word on this @AlexeyAB

    opened by danielbarry 92
  • Yolov3 training killed halfway

    Hello,

    I'm training darknet to detect my custom objects. The process looks fine without error after loading, and during training. However, after a number of iterations (~1000), the process is killed, pretty randomly, sometimes during an iteration. What would be a possible cause and how it can be solved? Thank you.

    I'm training on a Geforce GTX 1080, with CUDA 9.0, and 27GB CPU RAM:

    Here is my cfg file:

    [net]
    # Testing
    batch=1
    subdivisions=1
    # Training
    #batch=64
    #subdivisions=64
    width=416
    height=416
    channels=3
    momentum=0.9
    decay=0.0005
    angle=0
    saturation = 1.5
    exposure = 1.5
    hue=.1
    
    learning_rate=0.001
    burn_in=1000
    max_batches = 500200
    policy=steps
    steps=400000,450000
    scales=.1,.1
    
    [convolutional]
    batch_normalize=1
    filters=32
    size=3
    stride=1
    pad=1
    activation=leaky
    
    # Downsample
    
    [convolutional]
    batch_normalize=1
    filters=64
    size=3
    stride=2
    pad=1
    activation=leaky
    
    [convolutional]
    batch_normalize=1
    filters=32
    size=1
    stride=1
    pad=1
    activation=leaky
    
    [convolutional]
    batch_normalize=1
    filters=64
    size=3
    stride=1
    pad=1
    activation=leaky
    
    [shortcut]
    from=-3
    activation=linear
    
    # Downsample
    
    [convolutional]
    batch_normalize=1
    filters=128
    size=3
    stride=2
    pad=1
    activation=leaky
    
    [convolutional]
    batch_normalize=1
    filters=64
    size=1
    stride=1
    pad=1
    activation=leaky
    
    [convolutional]
    batch_normalize=1
    filters=128
    size=3
    stride=1
    pad=1
    activation=leaky
    
    [shortcut]
    from=-3
    activation=linear
    
    [convolutional]
    batch_normalize=1
    filters=64
    size=1
    stride=1
    pad=1
    activation=leaky
    
    [convolutional]
    batch_normalize=1
    filters=128
    size=3
    stride=1
    pad=1
    activation=leaky
    
    [shortcut]
    from=-3
    activation=linear
    
    # Downsample
    
    [convolutional]
    batch_normalize=1
    filters=256
    size=3
    stride=2
    pad=1
    activation=leaky
    
    [convolutional]
    batch_normalize=1
    filters=128
    size=1
    stride=1
    pad=1
    activation=leaky
    
    [convolutional]
    batch_normalize=1
    filters=256
    size=3
    stride=1
    pad=1
    activation=leaky
    
    [shortcut]
    from=-3
    activation=linear
    
    [convolutional]
    batch_normalize=1
    filters=128
    size=1
    stride=1
    pad=1
    activation=leaky
    
    [convolutional]
    batch_normalize=1
    filters=256
    size=3
    stride=1
    pad=1
    activation=leaky
    
    [shortcut]
    from=-3
    activation=linear
    
    [convolutional]
    batch_normalize=1
    filters=128
    size=1
    stride=1
    pad=1
    activation=leaky
    
    [convolutional]
    batch_normalize=1
    filters=256
    size=3
    stride=1
    pad=1
    activation=leaky
    
    [shortcut]
    from=-3
    activation=linear
    
    [convolutional]
    batch_normalize=1
    filters=128
    size=1
    stride=1
    pad=1
    activation=leaky
    
    [convolutional]
    batch_normalize=1
    filters=256
    size=3
    stride=1
    pad=1
    activation=leaky
    
    [shortcut]
    from=-3
    activation=linear
    
    
    [convolutional]
    batch_normalize=1
    filters=128
    size=1
    stride=1
    pad=1
    activation=leaky
    
    [convolutional]
    batch_normalize=1
    filters=256
    size=3
    stride=1
    pad=1
    activation=leaky
    
    [shortcut]
    from=-3
    activation=linear
    
    [convolutional]
    batch_normalize=1
    filters=128
    size=1
    stride=1
    pad=1
    activation=leaky
    
    [convolutional]
    batch_normalize=1
    filters=256
    size=3
    stride=1
    pad=1
    activation=leaky
    
    [shortcut]
    from=-3
    activation=linear
    
    [convolutional]
    batch_normalize=1
    filters=128
    size=1
    stride=1
    pad=1
    activation=leaky
    
    [convolutional]
    batch_normalize=1
    filters=256
    size=3
    stride=1
    pad=1
    activation=leaky
    
    [shortcut]
    from=-3
    activation=linear
    
    [convolutional]
    batch_normalize=1
    filters=128
    size=1
    stride=1
    pad=1
    activation=leaky
    
    [convolutional]
    batch_normalize=1
    filters=256
    size=3
    stride=1
    pad=1
    activation=leaky
    
    [shortcut]
    from=-3
    activation=linear
    
    # Downsample
    
    [convolutional]
    batch_normalize=1
    filters=512
    size=3
    stride=2
    pad=1
    activation=leaky
    
    [convolutional]
    batch_normalize=1
    filters=256
    size=1
    stride=1
    pad=1
    activation=leaky
    
    [convolutional]
    batch_normalize=1
    filters=512
    size=3
    stride=1
    pad=1
    activation=leaky
    
    [shortcut]
    from=-3
    activation=linear
    
    
    [convolutional]
    batch_normalize=1
    filters=256
    size=1
    stride=1
    pad=1
    activation=leaky
    
    [convolutional]
    batch_normalize=1
    filters=512
    size=3
    stride=1
    pad=1
    activation=leaky
    
    [shortcut]
    from=-3
    activation=linear
    
    
    [convolutional]
    batch_normalize=1
    filters=256
    size=1
    stride=1
    pad=1
    activation=leaky
    
    [convolutional]
    batch_normalize=1
    filters=512
    size=3
    stride=1
    pad=1
    activation=leaky
    
    [shortcut]
    from=-3
    activation=linear
    
    
    [convolutional]
    batch_normalize=1
    filters=256
    size=1
    stride=1
    pad=1
    activation=leaky
    
    [convolutional]
    batch_normalize=1
    filters=512
    size=3
    stride=1
    pad=1
    activation=leaky
    
    [shortcut]
    from=-3
    activation=linear
    
    [convolutional]
    batch_normalize=1
    filters=256
    size=1
    stride=1
    pad=1
    activation=leaky
    
    [convolutional]
    batch_normalize=1
    filters=512
    size=3
    stride=1
    pad=1
    activation=leaky
    
    [shortcut]
    from=-3
    activation=linear
    
    
    [convolutional]
    batch_normalize=1
    filters=256
    size=1
    stride=1
    pad=1
    activation=leaky
    
    [convolutional]
    batch_normalize=1
    filters=512
    size=3
    stride=1
    pad=1
    activation=leaky
    
    [shortcut]
    from=-3
    activation=linear
    
    
    [convolutional]
    batch_normalize=1
    filters=256
    size=1
    stride=1
    pad=1
    activation=leaky
    
    [convolutional]
    batch_normalize=1
    filters=512
    size=3
    stride=1
    pad=1
    activation=leaky
    
    [shortcut]
    from=-3
    activation=linear
    
    [convolutional]
    batch_normalize=1
    filters=256
    size=1
    stride=1
    pad=1
    activation=leaky
    
    [convolutional]
    batch_normalize=1
    filters=512
    size=3
    stride=1
    pad=1
    activation=leaky
    
    [shortcut]
    from=-3
    activation=linear
    
    # Downsample
    
    [convolutional]
    batch_normalize=1
    filters=1024
    size=3
    stride=2
    pad=1
    activation=leaky
    
    [convolutional]
    batch_normalize=1
    filters=512
    size=1
    stride=1
    pad=1
    activation=leaky
    
    [convolutional]
    batch_normalize=1
    filters=1024
    size=3
    stride=1
    pad=1
    activation=leaky
    
    [shortcut]
    from=-3
    activation=linear
    
    [convolutional]
    batch_normalize=1
    filters=512
    size=1
    stride=1
    pad=1
    activation=leaky
    
    [convolutional]
    batch_normalize=1
    filters=1024
    size=3
    stride=1
    pad=1
    activation=leaky
    
    [shortcut]
    from=-3
    activation=linear
    
    [convolutional]
    batch_normalize=1
    filters=512
    size=1
    stride=1
    pad=1
    activation=leaky
    
    [convolutional]
    batch_normalize=1
    filters=1024
    size=3
    stride=1
    pad=1
    activation=leaky
    
    [shortcut]
    from=-3
    activation=linear
    
    [convolutional]
    batch_normalize=1
    filters=512
    size=1
    stride=1
    pad=1
    activation=leaky
    
    [convolutional]
    batch_normalize=1
    filters=1024
    size=3
    stride=1
    pad=1
    activation=leaky
    
    [shortcut]
    from=-3
    activation=linear
    
    ######################
    
    [convolutional]
    batch_normalize=1
    filters=512
    size=1
    stride=1
    pad=1
    activation=leaky
    
    [convolutional]
    batch_normalize=1
    size=3
    stride=1
    pad=1
    filters=1024
    activation=leaky
    
    [convolutional]
    batch_normalize=1
    filters=512
    size=1
    stride=1
    pad=1
    activation=leaky
    
    [convolutional]
    batch_normalize=1
    size=3
    stride=1
    pad=1
    filters=1024
    activation=leaky
    
    [convolutional]
    batch_normalize=1
    filters=512
    size=1
    stride=1
    pad=1
    activation=leaky
    
    [convolutional]
    batch_normalize=1
    size=3
    stride=1
    pad=1
    filters=1024
    activation=leaky
    
    [convolutional]
    size=1
    stride=1
    pad=1
    filters=57
    activation=linear
    
    
    [yolo]
    mask = 6,7,8
    anchors = 10,13,  16,30,  33,23,  30,61,  62,45,  59,119,  116,90,  156,198,  373,326
    classes=14
    num=9
    jitter=.3
    ignore_thresh = .7
    truth_thresh = 1
    random=1
    
    
    [route]
    layers = -4
    
    [convolutional]
    batch_normalize=1
    filters=256
    size=1
    stride=1
    pad=1
    activation=leaky
    
    [upsample]
    stride=2
    
    [route]
    layers = -1, 61
    
    
    
    [convolutional]
    batch_normalize=1
    filters=256
    size=1
    stride=1
    pad=1
    activation=leaky
    
    [convolutional]
    batch_normalize=1
    size=3
    stride=1
    pad=1
    filters=512
    activation=leaky
    
    [convolutional]
    batch_normalize=1
    filters=256
    size=1
    stride=1
    pad=1
    activation=leaky
    
    [convolutional]
    batch_normalize=1
    size=3
    stride=1
    pad=1
    filters=512
    activation=leaky
    
    [convolutional]
    batch_normalize=1
    filters=256
    size=1
    stride=1
    pad=1
    activation=leaky
    
    [convolutional]
    batch_normalize=1
    size=3
    stride=1
    pad=1
    filters=512
    activation=leaky
    
    [convolutional]
    size=1
    stride=1
    pad=1
    filters=57
    activation=linear
    
    
    [yolo]
    mask = 3,4,5
    anchors = 10,13,  16,30,  33,23,  30,61,  62,45,  59,119,  116,90,  156,198,  373,326
    classes=14
    num=9
    jitter=.3
    ignore_thresh = .7
    truth_thresh = 1
    random=1
    
    
    
    [route]
    layers = -4
    
    [convolutional]
    batch_normalize=1
    filters=128
    size=1
    stride=1
    pad=1
    activation=leaky
    
    [upsample]
    stride=4
    
    [route]
    layers = -1, 11
    
    
    
    [convolutional]
    batch_normalize=1
    filters=128
    size=1
    stride=1
    pad=1
    activation=leaky
    
    [convolutional]
    batch_normalize=1
    size=3
    stride=1
    pad=1
    filters=256
    activation=leaky
    
    [convolutional]
    batch_normalize=1
    filters=128
    size=1
    stride=1
    pad=1
    activation=leaky
    
    [convolutional]
    batch_normalize=1
    size=3
    stride=1
    pad=1
    filters=256
    activation=leaky
    
    [convolutional]
    batch_normalize=1
    filters=128
    size=1
    stride=1
    pad=1
    activation=leaky
    
    [convolutional]
    batch_normalize=1
    size=3
    stride=1
    pad=1
    filters=256
    activation=leaky
    
    [convolutional]
    size=1
    stride=1
    pad=1
    filters=57
    activation=linear
    
    
    [yolo]
    mask = 0,1,2
    anchors = 10,13,  16,30,  33,23,  30,61,  62,45,  59,119,  116,90,  156,198,  373,326
    classes=14
    num=9
    jitter=.3
    ignore_thresh = .7
    truth_thresh = 1
    random=1
    max=200
    
    Bug fixed 
    opened by zrion 87
  • Results of the latest commit (4892071) and latest cfg files of YOLO v2

    Results of the latest commit (4892071) and latest cfg files of YOLO v2

    Hello,

    The YOLO authors have changed the YOLO-VOC cfg files on their original website (a screenshot of the change was attached here).

    Training with these new cfg files (with either thresh = 0.2 or 0.001) does not give the same results as when I trained with the older repo and the older YOLO-VOC cfg file. I can say the results are far worse than before.

    Besides, training with YOLO-VOC-2.0 gives slightly better results, but it still cannot match the older results (with either thresh = 0.2 or 0.001).

    I have used the latest commit of this repo (4892071). What is the problem: the new cfg files or the repo code?

    opened by MyVanitar 79
  • Crop object detected

    Crop object detected

    Hello, how can I use the function crop_image() inside the function draw_detections_cv_v3() to crop all the objects detected in a video and save them to a folder?
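    One possible approach is sketched below, under the assumption that the helper sits next to draw_detections_v3() / draw_detections_cv_v3() in the darknet src tree, where crop_image(), save_image() and free_image() are available; the function name save_detection_crops() and the crops/ output folder are hypothetical, not part of the darknet API:

    #include <stdio.h>
    #include "darknet.h"   /* image, detection, box, free_image() */
    #include "image.h"     /* crop_image(), save_image(); assumes this lives in the darknet src tree */

    /* Hypothetical helper: crop every detection whose best class probability
     * clears `thresh` out of `im` and save it as crops/crop_<i>
     * (the crops/ folder must already exist; save_image appends the extension). */
    static void save_detection_crops(image im, detection *dets, int ndets, float thresh)
    {
        int i, c;
        for (i = 0; i < ndets; ++i) {
            float best = 0;
            for (c = 0; c < dets[i].classes; ++c)
                if (dets[i].prob[c] > best) best = dets[i].prob[c];
            if (best < thresh) continue;

            box b = dets[i].bbox;                        /* x,y,w,h are relative (0..1), center + size */
            int left = (int)((b.x - b.w / 2.f) * im.w);  /* same conversion draw_detections_v3() uses */
            int top  = (int)((b.y - b.h / 2.f) * im.h);
            int w    = (int)(b.w * im.w);
            int h    = (int)(b.h * im.h);

            image crop = crop_image(im, left, top, w, h);  /* handles boxes that extend past the frame */
            char name[256];
            sprintf(name, "crops/crop_%d", i);
            save_image(crop, name);
            free_image(crop);
        }
    }

    For video you would also want to include the frame index in the file name, so crops from different frames don't overwrite each other.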

    enhancement question want enhancement 
    opened by halimB8 64
  • Gaussian YOLOv3 (+3.1% mAP@0.5...0.95 on COCO) , (+3.0% mAP@0.7 on KITTI) , (+3.5% mAP@0.75 on BDD)

    Gaussian YOLOv3 (+3.1% mAP@0.5...0.95 on COCO), (+3.0% mAP@0.7 on KITTI), (+3.5% mAP@0.75 on BDD)

    Have you tried the Gaussian object detection method?

    • cfg files: https://github.com/jwchoi384/Gaussian_YOLOv3/tree/master/cfg
    • BDD100k weights file: https://drive.google.com/open?id=1Eutnens-3z6o4LYe0PZXJ1VYNwcZ6-2Y
    • paper: https://arxiv.org/abs/1904.04620v2
    • source code: https://github.com/jwchoi384/Gaussian_YOLOv3
    enhancement 
    opened by phucnhs 62
  • what does index & entry_index() in yolo_layer.c  do?

    what does index & entry_index() in yolo_layer.c do?

    Hi, may I know what needs to be changed to train with 4-point coordinate labels rather than xywh?

    I have been trying to edit the current version of YOLO to train on labels in the format x1,y1,x2,y2,x3,y3,x4,y4 rather than the current xywh format.

    1) What do index and entry_index() in yolo_layer.c do? I understand that the values "i" and "j" are used in this function, where "i" is related to truth.x while "j" is related to truth.y. In the case of x1-x4 and y1-y4, will I need i1-i4 and j1-j4?

    2) Replacing instances of (4+1) with (8+1) in yolo_layer.c. I have replaced instances of
    int class_id = state.truth[t*(4 + 1) + b*l.truths + 4];
    with:
    int class_id = state.truth[t*(8 + 1) + b*l.truths + 8];
    I replaced 4 with 8 because there are 8 parameters (excluding the class id) for each bounding box instead of the original 4 (xywh). I have also made the same change in:
    box truth = float_to_box_stride(state.truth + t*(8 + 1) + b*l.truths, 1); //UPDATED

    Would I also need to replace 4 with 8 in the following function (see the annotated sketch at the end of this issue)?
    static int entry_index(layer l, int batch, int location, int entry)
    {
        int n = location / (l.w*l.h);
        int loc = location % (l.w*l.h);
        return batch*l.outputs + n*l.w*l.h*(4+l.classes+1) + entry*l.w*l.h + loc;
    }
    I have also tried changing the following line:
    //l.outputs = h*w*n*(classes + 4 + 1);
    to
    l.outputs = h*w*n*(classes + 8 + 1);

    However, I receive the following error when attempting to run: "Error: l.outputs == params.inputs filters= in the [convolutional]-layer doesn't correspond to classes= or mask= in [yolo]-layer "

    3) Is this the correct way to predict the 4 corner coordinates of the bounding boxes? (I don't see how the prediction equations in figure 2 of the YOLOv3 paper relate to the calculations performed in get_yolo_box() or delta_yolo_box().)


    In get_yolo_box() of yolo_layer.c I'm no longer using this:
    b.w = exp(x[index + 2*stride]) * biases[2*n] / w;
    Instead, I predict the 8 values of the 4 coordinates directly (an excerpt of my code is shown below; the values are stored in the x[] array):
    b.x1 = (i + x[index + 0*stride]) / lw;
    b.y1 = (j + x[index + 1*stride]) / lh;
    b.x2 = (i + x[index + 2*stride]) / lw;
    b.y2 = (j + x[index + 3*stride]) / lh;

    Also in delta_yolo_box() of yolo_layer.c I'm no longer using this:
    float tw = log(truth.w*w / biases[2*n]);
    Instead, I compute targets for the 8 coordinate values directly (an excerpt of my code is shown below):
    float tx1 = (truth.x1*lw - i);
    float ty1 = (truth.y1*lh - j);
    float tx2 = (truth.x2*lw - i);
    float ty2 = (truth.y2*lh - j);

    delta[index + 0*stride] = scale * (tx1 - x[index + 0*stride]); 
    delta[index + 1*stride] = scale * (ty1 - x[index + 1*stride]);
    delta[index + 2*stride] = scale * (tx2 - x[index + 2*stride]); 
    delta[index + 3*stride] = scale * (ty2 - x[index + 3*stride]);
    

    Thank you.

    Thus far, I have mainly made changes to data.c (reading the new label format), yolo_layer.c (the predictions) and box.c (the IoU computation).
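    For reference on question 1): the function quoted above is the stock entry_index() from src/yolo_layer.c. The sketch below only adds annotations describing the memory layout its arithmetic implies (4 box planes, 1 objectness plane, then l.classes class planes per anchor, each plane holding l.w*l.h floats); it is not new behavior:

    #include "darknet.h"   /* layer; in the repo this function lives in src/yolo_layer.c */

    /* Annotated sketch of the stock entry_index(). */
    static int entry_index(layer l, int batch, int location, int entry)
    {
        int n   = location / (l.w*l.h);        /* which anchor of this layer */
        int loc = location % (l.w*l.h);        /* which cell of the w*h grid */
        return batch*l.outputs                 /* skip to this image in the batch */
             + n*l.w*l.h*(4 + l.classes + 1)   /* skip to this anchor's block */
             + entry*l.w*l.h                   /* skip to the requested plane: 0..3 box, 4 objectness, 5.. classes */
             + loc;                            /* the grid cell */
    }

    If the box grows from 4 to 8 values, that per-anchor stride has to become (8 + l.classes + 1) everywhere it appears, l.outputs in make_yolo_layer() has to grow to match, and the filters= of the [convolutional] layer directly before each [yolo] layer has to become (classes + 8 + 1) multiplied by the number of masks; the "l.outputs == params.inputs" error quoted in question 2) is what appears when those counts disagree.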

    opened by JohnWnx 57
  • Extremely inaccurate reading.

    Extremely inaccurate reading.

    I have been trying to train YOLO to detect images of fast-food trash, but YOLO constantly makes predictions in places that don't make any sense whatsoever. And the more I train, the crazier YOLO becomes (example screenshots attached). This is with an avg loss of 0.09 and over 4000 iterations.

    I should mention that I used 128 x 128 images for training, but this issue still pops up when I use high-resolution images.

    question 
    opened by Soccer9001 55
  • yolov3-tiny_xnor.cfg running on ARM

    yolov3-tiny_xnor.cfg running on ARM

    Hi @AlexeyAB,

    I am trying to run yolov3-tiny_xnor.cfg for detection on a Raspberry Pi. I have trained the network and tested it on an Intel-based system, where it works just fine. However, when I run it on the RPi, nothing is detected! I am using the very same command and the very same version of the framework on both sides. Can you help me figure out what is going on?

    I am using the command ./darknet detector test data/coco.data cfg/yolov3-tiny_xnor.cfg yolov3-tiny_xnor_last.weights data/person.jpg

    The content of coco.data is

    classes = 80
    names   = data/coco/coco.names
    backup  = backup/
    

    The content of yolov3-tiny_xnor.cfg

    [net]
    # Testing
    batch=1
    subdivisions=1
    # Training
    # batch=64
    # subdivisions=2
    width=416
    height=416
    channels=3
    momentum=0.9
    decay=0.0005
    angle=0
    saturation = 1.5
    exposure = 1.5
    hue=.1
    
    learning_rate=0.001
    burn_in=1000
    max_batches = 500200
    policy=steps
    steps=400000,450000
    scales=.1,.1
    
    [convolutional]
    batch_normalize=1
    filters=16
    size=3
    stride=1
    pad=1
    activation=leaky
    
    [maxpool]
    size=2
    stride=2
    
    [convolutional]
    xnor=1
    bin_output=1
    batch_normalize=1
    filters=32
    size=3
    stride=1
    pad=1
    activation=leaky
    
    [maxpool]
    size=2
    stride=2
    
    [convolutional]
    xnor=1
    bin_output=1
    batch_normalize=1
    filters=64
    size=3
    stride=1
    pad=1
    activation=leaky
    
    [maxpool]
    size=2
    stride=2
    
    [convolutional]
    xnor=1
    bin_output=1
    batch_normalize=1
    filters=128
    size=3
    stride=1
    pad=1
    activation=leaky
    
    [maxpool]
    size=2
    stride=2
    
    [convolutional]
    xnor=1
    batch_normalize=1
    filters=256
    size=3
    stride=1
    pad=1
    activation=leaky
    
    [maxpool]
    size=2
    stride=2
    
    [convolutional]
    xnor=1
    bin_output=1
    batch_normalize=1
    filters=512
    size=3
    stride=1
    pad=1
    activation=leaky
    
    [maxpool]
    size=2
    stride=1
    
    [convolutional]
    xnor=1
    bin_output=1
    batch_normalize=1
    filters=1024
    size=3
    stride=1
    pad=1
    activation=leaky
    
    ###########
    
    [convolutional]
    xnor=1
    batch_normalize=1
    filters=256
    size=1
    stride=1
    pad=1
    activation=leaky
    
    [convolutional]
    batch_normalize=1
    filters=512
    size=3
    stride=1
    pad=1
    activation=leaky
    
    [convolutional]
    size=1
    stride=1
    pad=1
    filters=255
    activation=linear
    
    
    
    [yolo]
    mask = 3,4,5
    anchors = 10,14,  23,27,  37,58,  81,82,  135,169,  344,319
    classes=80
    num=6
    jitter=.3
    ignore_thresh = .7
    truth_thresh = 1
    random=1
    
    [route]
    layers = -4
    
    [convolutional]
    xnor=1
    batch_normalize=1
    filters=128
    size=1
    stride=1
    pad=1
    activation=leaky
    
    [upsample]
    stride=2
    
    [route]
    layers = -1, 8
    
    [convolutional]
    xnor=1
    batch_normalize=1
    filters=256
    size=3
    stride=1
    pad=1
    activation=leaky
    
    
    [convolutional]
    size=1
    stride=1
    pad=1
    filters=255
    activation=linear
    
    [yolo]
    mask = 0,1,2
    anchors = 10,14,  23,27,  37,58,  81,82,  135,169,  344,319
    classes=80
    num=6
    jitter=.3
    ignore_thresh = .7
    truth_thresh = 1
    random=1
    

    The .weights file can be found here: http://www.mediafire.com/file/vahpux9xefw1tci/yolov3-tiny_xnor_122000.weights

    Finally, the person.jpg image is the one already present in the data folder.

    Bug fixed 
    opened by joaomiguelvieira 53
  • After how many iterations does it save the weights? Where can we change it?

    After how many iterations does it save the weights? Where can we change it?

    If you have an issue with training - no-detections / Nan avg-loss / low accuracy:

    • read the FAQ: https://github.com/AlexeyAB/darknet/wiki/FAQ---frequently-asked-questions
    • what command do you use?
    • what dataset do you use?
    • what Loss and mAP did you get?
    • show chart.png with Loss and mAP
    • check your dataset - run training with the flag -show_imgs, i.e. ./darknet detector train ... -show_imgs, and look at the aug_...jpg images: do you see correct ground-truth bounding boxes?
    • rename your cfg-file to a txt-file and drag-n-drop (attach) it to your message here
    • show the content of the generated files bad.list and bad_label.list if they exist
    • read "How to train (to detect your custom objects)" and "How to improve object detection" in the Readme: https://github.com/AlexeyAB/darknet/blob/master/README.md
    • show a screenshot with info such as:

    ./darknet detector test cfg/coco.data cfg/yolov4.cfg yolov4.weights data/dog.jpg
     CUDA-version: 10000 (10000), cuDNN: 7.4.2, CUDNN_HALF=1, GPU count: 1
     CUDNN_HALF=1
     OpenCV version: 4.2.0
     0 : compute_capability = 750, cudnn_half = 1, GPU: GeForce RTX 2070
    net.optimized_memory = 0
    mini_batch = 1, batch = 8, time_steps = 1, train = 0
       layer   filters  size/strd(dil)      input                output
    
    Training issue 
    opened by ayush431 1
  • How to train using yolov4-csp-x-mish.cfg via darknet

    How to train using yolov4-csp-x-mish.cfg via darknet

    Hi Alexey @AlexeyAB:

    I am trying to train on a custom dataset using "YOLOv4-csp-x-mish.cfg".

    The modified parts of the cfg are as follows:

    • [net]: max_batches = 33000, steps = 26400 (80%), 29700 (90%)
    • [yolo]: classes = 11 (my custom dataset has 11 classes)
    • changed filters to 48 in the [convolutional] section above each [yolo] section
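    For reference, the rule described in the repo's training instructions is filters = (classes + 5) * 3 for the [convolutional] layer directly before each of the three [yolo] layers, so (11 + 5) * 3 = 48 here. A minimal sketch of one such pair is shown below; only classes= and filters= change, and every other field keeps the value from the original yolov4-csp-x-mish.cfg:

    # one of the three detection heads; the same edit is repeated before each [yolo] section
    [convolutional]
    size=1
    stride=1
    pad=1
    # filters = (classes + 5) * 3 = (11 + 5) * 3 = 48
    filters=48

    [yolo]
    # all other [yolo] fields (mask, anchors, num, ...) stay as in the original cfg
    classes=11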

    For the pre-trained weights I used "yolov4-csp-x-swish.weights" (I couldn't find weights matching the cfg name).

    Training result: looking at the attached chart, the iteration count was only 900 for a total of 560,000 images, but the avg loss had already dropped. The image detection test does not find anything (example image attached).

    darknet.exe detector train data/navis_obj.data cfg/yolov4-csp-x-mish_custom_navis.cfg yolov4x-mish.weights -map

    Questions

    1. Is nothing detected because the pre-trained weights I used are wrong?
    2. Is my cfg modification wrong?

    thanks.

    Training issue 
    opened by NaughtyJune 0
  • training time on V100 is slower than 3080Ti

    training time on V100 is slower than 3080Ti

    Hi Alexey @AlexeyAB :

    I have a problem with training: training on a V100 is slower than on a 3080 Ti, using the same cfg and Makefile. Training uses mixed precision; I set CUDNN=1 in the Makefile and loss_scale=128.0 in the cfg. Training one batch on the V100 takes 13~15 seconds, while on the 3080 Ti it takes less than 6 seconds (almost 3 times faster than the V100). On both training servers GPU utilization is almost 100%.

    The cfg used on the 3080 Ti (12 GB) is shown in the attached screenshot.

    The cfg on the V100 (32 GB) is the same as on the 3080 Ti, except that subdivisions is set to 16 thanks to the larger GPU memory.

    The Makefile (the same on both training servers) is shown in the attached screenshot.

    I set DEBUG=0, which helps training speed, but the time on the V100 is still higher than on the 3080 Ti.

    Do you have any suggestions?
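    Not a diagnosis, but one thing usually worth double-checking in this situation is the FP16 build configuration. Below is a sketch of the Makefile switches that typically matter on Volta; the exact ARCH lines vary between Makefile versions, so treat the compute_70 entry as an example, not your actual file:

    # sketch of the darknet Makefile switches that usually govern FP16 speed
    GPU=1
    CUDNN=1
    # CUDNN_HALF=1 enables the Tensor-Core FP16 path (Volta/Turing/Ampere)
    CUDNN_HALF=1
    DEBUG=0
    # build for the GPU actually installed, e.g. Tesla V100 (Volta)
    ARCH= -gencode arch=compute_70,code=[sm_70,compute_70]

    If CUDNN_HALF=1 or a matching compute_70 ARCH entry were missing from the V100 build, the card could be running the FP32 path, where a 3080 Ti can plausibly be faster; this is only a possibility to rule out, not a confirmed cause.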

    opened by raymond1123 0
  • Multi-GPU Training Issue

    Multi-GPU Training Issue

    I'm trying custom training in a multi-GPU environment. I have a question regarding the number of images used for training at the same time.

    My training environment is below.

    • DGX V100(Tesla V100(32G) x4)
    • Ubuntu 20.04
    • YOLOv4

    The common setting is batch = 64, subdivisions = 8. As I increased the number of GPUs (#0~3), I expected the number of images per iteration to increase as 64, 128, 192, 256.

    However, the number of images actually processed at the same time was different.

    1. When I set "-gpus 0", one iteration was prosecced. -The number of used images increased in increments of 64 : 64 images -> 128 images -> 192 images -> 256 images

    2. When I set "-gpus 0,1", two iterations are prosecced at the same time. -The number of used images increased in increments of 256 : 256 images -> 512 images -> 768 images -> 1024 images

    3. When I set "-gpus 0,1,2", three iterations are prosecced at the same time. -The number of used images increased in increments of 576 : 576 images -> 1152 images -> 1728 images -> 2304 images

    4. When I set "-gpus 0,1,2,3", four iterations are prosecced at the same time. -The number used image increased in increments of 1024 : 1024 images -> 2048 images -> 3072 images -> 4096 images
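    One purely arithmetic reading of those counts (an assumption about the bookkeeping, not a confirmed rule of the framework): if each GPU processes a full batch of 64 images per step and the iteration counter also advances by one per GPU, then the images attributed to each printed iteration grow as 64 * N * N, that is 64*4 = 256, 64*9 = 576 and 64*16 = 1024 for N = 2, 3, 4, which matches the numbers above.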

    Is there some gradient-accumulation rule in a multi-GPU environment?

    Thanks in advance.

    Training issue 
    opened by richard3333p 0
  • ELAN - Designing Network Design Strategies Through Gradient Path Analysis

    ELAN - Designing Network Design Strategies Through Gradient Path Analysis

    Designing Network Design Strategies Through Gradient Path Analysis: https://arxiv.org/abs/2211.04800

    The ELAN network is +1.9% AP more accurate and faster than the YOLOR object detector.



    Meanwhile, YOLOR was the best in speed/accuracy on the COCO dataset and the Waymo self-driving dataset even 1 year after release: https://waymo.com/open/challenges/2021/real-time-2d-prediction/

    YOLOR: https://arxiv.org/abs/2105.04206
    YOLOR on Waymo: https://arxiv.org/abs/2106.08713


    opened by AlexeyAB 0
Releases (yolov4)
Implementation of "Scaled-YOLOv4: Scaling Cross Stage Partial Network" using PyTorch framwork.

YOLOv4-large This is the implementation of "Scaled-YOLOv4: Scaling Cross Stage Partial Network" using PyTorch framwork. YOLOv4-CSP YOLOv4-tiny YOLOv4-

Kin-Yiu, Wong 2k Jan 2, 2023
A tutorial on training a DarkNet YOLOv4 model for the CrowdHuman dataset

YOLOv4 CrowdHuman Tutorial This is a tutorial demonstrating how to train a YOLOv4 people detector using Darknet and the CrowdHuman dataset. Table of c

JK Jung 118 Nov 10, 2022
YOLTv4 builds upon YOLT and SIMRDWN, and updates these frameworks to use the most performant version of YOLO, YOLOv4

YOLTv4 builds upon YOLT and SIMRDWN, and updates these frameworks to use the most performant version of YOLO, YOLOv4. YOLTv4 is designed to detect objects in aerial or satellite imagery in arbitrarily large images that far exceed the ~600×600 pixel size typically ingested by deep learning object detection frameworks.

Adam Van Etten 161 Jan 6, 2023
This project deals with the detection of skin lesions within the ISICs dataset using YOLOv3 Object Detection with Darknet.

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License. Skin Lesion detection using YOLO This project deal

Lalith Veerabhadrappa Badiger 1 Nov 22, 2021
SE-MSCNN: A Lightweight Multi-scaled Fusion Network for Sleep Apnea Detection Using Single-Lead ECG Signals

SE-MSCNN: A Lightweight Multi-scaled Fusion Network for Sleep Apnea Detection Using Single-Lead ECG Signals Abstract Sleep apnea (SA) is a common slee

null 9 Dec 21, 2022
WHENet - ONNX, OpenVINO, TFLite, TensorRT, EdgeTPU, CoreML, TFJS, YOLOv4/YOLOv4-tiny-3L

HeadPoseEstimation-WHENet-yolov4-onnx-openvino ONNX, OpenVINO, TFLite, TensorRT, EdgeTPU, CoreML, TFJS, YOLOv4/YOLOv4-tiny-3L 1. Usage $ git clone htt

Katsuya Hyodo 49 Sep 21, 2022
LF-YOLO (Lighter and Faster YOLO) is used to detect defect of X-ray weld image.

This project is based on ultralytics/yolov3. LF-YOLO (Lighter and Faster YOLO) is used to detect defect of X-ray weld image. Download $ git clone http

null 26 Dec 13, 2022
Yolo ros - YOLO-ROS for HUAWEI ATLAS200

YOLO-ROS YOLO-ROS for NVIDIA YOLO-ROS for HUAWEI ATLAS200, please checkout for b

ChrisLiu 5 Oct 18, 2022
An Unsupervised Detection Framework for Chinese Jargons in the Darknet

An Unsupervised Detection Framework for Chinese Jargons in the Darknet This repo is the Python 3 implementation of 《An Unsupervised Detection Framewor

null 7 Nov 8, 2022
This is a repository for a No-Code object detection inference API using the OpenVINO. It's supported on both Windows and Linux Operating systems.

OpenVINO Inference API This is a repository for an object detection inference API using the OpenVINO. It's supported on both Windows and Linux Operati

BMW TechOffice MUNICH 68 Nov 24, 2022
Face and other object detection using OpenCV and ML Yolo

Object-and-Face-Detection-Using-Yolo- Opencv and YOLO object and face detection is implemented. You only look once (YOLO) is a state-of-the-art, real-

Happy  N. Monday 3 Feb 15, 2022
Object detection using yolo-tiny model and opencv used as backend

Object detection Algorithm used : Yolo algorithm Backend : opencv Library required: opencv = 4.5.4-dev' Quick Overview about structure 1) main.py Load

null 2 Jul 6, 2022
Object detection (YOLO) with pytorch, OpenCV and python

Real Time Object/Face Detection Using YOLO-v3 This project implements a real time object and face detection using YOLO algorithm. You only look once,

null 1 Aug 4, 2022
Real Time Object Detection and Classification using Yolo Algorithm.

Real time Object detection & Classification using YOLO algorithm. Real Time Object Detection and Classification using Yolo Algorithm. What is Object D

Ketan Chawla 1 Apr 17, 2022
A object detecting neural network powered by the yolo architecture and leveraging the PyTorch framework and associated libraries.

Yolo-Powered-Detector A object detecting neural network powered by the yolo architecture and leveraging the PyTorch framework and associated libraries

Luke Wilson 1 Dec 3, 2021
Implementation for the paper 'YOLO-ReT: Towards High Accuracy Real-time Object Detection on Edge GPUs'

YOLO-ReT This is the original implementation of the paper: YOLO-ReT: Towards High Accuracy Real-time Object Detection on Edge GPUs. Prakhar Ganesh, Ya

null 69 Oct 19, 2022
Image-Adaptive YOLO for Object Detection in Adverse Weather Conditions

Image-Adaptive YOLO for Object Detection in Adverse Weather Conditions Accepted by AAAI 2022 [arxiv] Wenyu Liu, Gaofeng Ren, Runsheng Yu, Shi Guo, Jia

liuwenyu 245 Dec 16, 2022
Autonomous Perception: 3D Object Detection with Complex-YOLO

Autonomous Perception: 3D Object Detection with Complex-YOLO LiDAR object detect

Thomas Dunlap 2 Feb 18, 2022
YOLOv4-v3 Training Automation API for Linux

This repository allows you to get started with training a state-of-the-art Deep Learning model with little to no configuration needed! You provide your labeled dataset or label your dataset using our BMW-LabelTool-Lite and you can start the training right away and monitor it in many different ways like TensorBoard or a custom REST API and GUI. NoCode training with YOLOv4 and YOLOV3 has never been so easy.

BMW TechOffice MUNICH 626 Dec 31, 2022