Translate darknet to tensorflow. Load trained weights, retrain/fine-tune using tensorflow, export constant graph def to mobile devices

Overview

Intro

Real-time object detection and classification. Paper: version 1, version 2.

Read more about YOLO (in darknet) and download weight files here. In case the weight file cannot be found, I uploaded some of mine here, which include yolo-full and yolo-tiny of v1.0, tiny-yolo-v1.1 of v1.1 and yolo, tiny-yolo-voc of v2.

See the demo below or view it on this imgur

Dependencies

Python3, tensorflow 1.0, numpy, opencv 3.

Citation

@article{trieu2018darkflow,
  title={Darkflow},
  author={Trieu, Trinh Hoang},
  journal={GitHub Repository. Available online: https://github.com/thtrieu/darkflow (accessed on 14 February 2019)},
  year={2018}
}

Getting started

You can choose one of the following three ways to get started with darkflow.

  1. Just build the Cython extensions in place. NOTE: If installing this way you will have to use ./flow in the cloned darkflow directory instead of flow as darkflow is not installed globally.

    python3 setup.py build_ext --inplace
    
  2. Let pip install darkflow globally in dev mode (still globally accessible, but changes to the code immediately take effect)

    pip install -e .
    
  3. Install with pip globally

    pip install .
    

Update

An Android demo on TensorFlow is available here

I am looking for help:

  • issues labeled help wanted in the issue tracker

Parsing the annotations

Skip this if you are not training or fine-tuning anything (you simply want to forward flow a trained net)

For example, if you want to work with only 3 classes (tvmonitor, person, pottedplant), edit labels.txt as follows:

tvmonitor
person
pottedplant

And that's it. darkflow will take care of the rest. You can also set darkflow to load from a custom labels file with the --labels flag (i.e. --labels myOtherLabelsFile.txt). This can be helpful when working with multiple models with different sets of output labels. When this flag is not set, darkflow will load from labels.txt by default (unless you are using one of the recognized .cfg files designed for the COCO or VOC dataset - then the labels file will be ignored and the COCO or VOC labels will be loaded).
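
If you use the Python API described further below, a similar effect can presumably be achieved through the options dictionary. A minimal sketch, assuming a "labels" key mirrors the --labels flag (the file and model names here are placeholders):

from darkflow.net.build import TFNet

# Hypothetical example: point darkflow at a custom labels file.
# Assumes the "labels" option mirrors the --labels CLI flag; names are placeholders.
options = {
    "model": "cfg/yolo-new.cfg",
    "load": "bin/tiny-yolo.weights",
    "labels": "myOtherLabelsFile.txt",
    "threshold": 0.1,
}
tfnet = TFNet(options)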

Design the net

Skip this if you are working with one of the original configurations since they are already there. Otherwise, see the following example:

...

[convolutional]
batch_normalize = 1
size = 3
stride = 1
pad = 1
activation = leaky

[maxpool]

[connected]
output = 4096
activation = linear

...

Flowing the graph using flow

# Have a look at its options
flow --h

First, let's take a closer look at one particularly useful option, --load

# 1. Load tiny-yolo.weights
flow --model cfg/tiny-yolo.cfg --load bin/tiny-yolo.weights

# 2. To completely initialize a model, leave out the --load option
flow --model cfg/yolo-new.cfg

# 3. It is useful to reuse the first identical layers of tiny for `yolo-new`
flow --model cfg/yolo-new.cfg --load bin/tiny-yolo.weights
# this will print out which layers are reused, which are initialized

All input images from default folder sample_img/ are flowed through the net and predictions are put in sample_img/out/. We can always specify more parameters for such forward passes, such as detection threshold, batch size, images folder, etc.

# Forward all images in sample_img/ using tiny yolo and 100% GPU usage
flow --imgdir sample_img/ --model cfg/tiny-yolo.cfg --load bin/tiny-yolo.weights --gpu 1.0

JSON output can be generated with descriptions of the label, confidence, and pixel location of each bounding box. Each prediction is stored in the sample_img/out folder by default. An example JSON array is shown below.

# Forward all images in sample_img/ using tiny yolo and JSON output.
flow --imgdir sample_img/ --model cfg/tiny-yolo.cfg --load bin/tiny-yolo.weights --json

JSON output:

[{"label":"person", "confidence": 0.56, "topleft": {"x": 184, "y": 101}, "bottomright": {"x": 274, "y": 382}},
{"label": "dog", "confidence": 0.32, "topleft": {"x": 71, "y": 263}, "bottomright": {"x": 193, "y": 353}},
{"label": "horse", "confidence": 0.76, "topleft": {"x": 412, "y": 109}, "bottomright": {"x": 592,"y": 337}}]
  • label: self explanatory
  • confidence: somewhere between 0 and 1 (how confident yolo is about that detection)
  • topleft: pixel coordinate of top left corner of box.
  • bottomright: pixel coordinate of bottom right corner of box.
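
For downstream processing, the saved JSON files can be read back with Python's standard json module. A minimal sketch, assuming the --json run above wrote one file per image into sample_img/out/ and that the file name mirrors the input image name (e.g. sample_dog.json):

import json

# Hypothetical path: assumes one <image name>.json file per input image in sample_img/out/
with open("sample_img/out/sample_dog.json") as f:
    detections = json.load(f)

for det in detections:
    width = det["bottomright"]["x"] - det["topleft"]["x"]
    height = det["bottomright"]["y"] - det["topleft"]["y"]
    print("{}: confidence {:.2f}, box {}x{} px".format(
        det["label"], det["confidence"], width, height))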

Training new model

Training is simple: you only have to add the --train option. The training set and annotations will be parsed if this is the first time a new configuration is trained. To point to the training set and annotations, use the --dataset and --annotation options. A few examples:

# Initialize yolo-new from yolo-tiny, then train the net on 100% GPU:
flow --model cfg/yolo-new.cfg --load bin/tiny-yolo.weights --train --gpu 1.0

# Completely initialize yolo-new and train it with ADAM optimizer
flow --model cfg/yolo-new.cfg --train --trainer adam

During training, the script will occasionally save intermediate results into TensorFlow checkpoints, stored in ckpt/. To resume from any checkpoint before training or testing, use the --load [checkpoint_num] option; if checkpoint_num < 0, darkflow will load the most recent save by parsing ckpt/checkpoint.

# Resume the most recent checkpoint for training
flow --train --model cfg/yolo-new.cfg --load -1

# Test with checkpoint at step 1500
flow --model cfg/yolo-new.cfg --load 1500

# Fine tuning yolo-tiny from the original one
flow --train --model cfg/tiny-yolo.cfg --load bin/tiny-yolo.weights

Example of training on Pascal VOC 2007:

# Download the Pascal VOC dataset:
curl -O https://pjreddie.com/media/files/VOCtest_06-Nov-2007.tar
tar xf VOCtest_06-Nov-2007.tar

# An example of the Pascal VOC annotation format:
vim VOCdevkit/VOC2007/Annotations/000001.xml

# Train the net on the Pascal dataset:
flow --model cfg/yolo-new.cfg --train --dataset "~/VOCdevkit/VOC2007/JPEGImages" --annotation "~/VOCdevkit/VOC2007/Annotations"

Training on your own dataset

The steps below assume we want to use tiny YOLO and our dataset has 3 classes

  1. Create a copy of the configuration file tiny-yolo-voc.cfg and rename it according to your preference, e.g. tiny-yolo-voc-3c.cfg (it is crucial that you leave the original tiny-yolo-voc.cfg file unchanged; see below for an explanation).

  2. In tiny-yolo-voc-3c.cfg, change classes in the [region] layer (the last layer) to the number of classes you are going to train for. In our case, classes are set to 3.

    ...
    
    [region]
    anchors = 1.08,1.19,  3.42,4.41,  6.63,11.38,  9.42,5.11,  16.62,10.52
    bias_match=1
    classes=3
    coords=4
    num=5
    softmax=1
    
    ...
  3. In tiny-yolo-voc-3c.cfg, change filters in the [convolutional] layer (the second-to-last layer) to num * (classes + 5). In our case, num is 5 and classes is 3, so 5 * (3 + 5) = 40; therefore filters is set to 40 (see the small sanity-check sketch after this list).

    ...
    
    [convolutional]
    size=1
    stride=1
    pad=1
    filters=40
    activation=linear
    
    [region]
    anchors = 1.08,1.19,  3.42,4.41,  6.63,11.38,  9.42,5.11,  16.62,10.52
    
    ...
  4. Change labels.txt to include the label(s) you want to train on (number of labels should be the same as the number of classes you set in tiny-yolo-voc-3c.cfg file). In our case, labels.txt will contain 3 labels.

    label1
    label2
    label3
    
  5. Reference the tiny-yolo-voc-3c.cfg model when you train.

    flow --model cfg/tiny-yolo-voc-3c.cfg --load bin/tiny-yolo-voc.weights --train --annotation train/Annotations --dataset train/Images
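
As a quick sanity check for step 3, the filters value can be computed directly; a tiny sketch of the formula above (nothing darkflow-specific):

def region_filters(num_anchors, num_classes):
    # filters for the [convolutional] layer right before a YOLOv2 [region] layer
    return num_anchors * (num_classes + 5)

print(region_filters(5, 3))   # 40  -> the 3-class example above
print(region_filters(5, 20))  # 125 -> the original 20-class VOC config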

  • Why should I leave the original tiny-yolo-voc.cfg file unchanged?

    When darkflow sees you are loading tiny-yolo-voc.weights it will look for tiny-yolo-voc.cfg in your cfg/ folder and compare that configuration file to the new one you have set with --model cfg/tiny-yolo-voc-3c.cfg. In this case, every layer will have the exact same number of weights except for the last two, so it will load the weights into all layers up to the last two, because they now contain a different number of weights.

Camera/video file demo

For a demo that entirely runs on the CPU:

flow --model cfg/yolo-new.cfg --load bin/yolo-new.weights --demo videofile.avi

For a demo that runs 100% on the GPU:

flow --model cfg/yolo-new.cfg --load bin/yolo-new.weights --demo videofile.avi --gpu 1.0

To use your webcam/camera, simply replace videofile.avi with keyword camera.

To save a video with predicted bounding box, add --saveVideo option.

Using darkflow from another python application

Please note that return_predict(img) must take a numpy.ndarray. Your image must be loaded beforehand and passed to return_predict(img); passing the file path won't work.

Result from return_predict(img) will be a list of dictionaries representing each detected object's values in the same format as the JSON output listed above.

from darkflow.net.build import TFNet
import cv2

options = {"model": "cfg/yolo.cfg", "load": "bin/yolo.weights", "threshold": 0.1}

tfnet = TFNet(options)

imgcv = cv2.imread("./sample_img/sample_dog.jpg")
result = tfnet.return_predict(imgcv)
print(result)
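
Since every result dictionary carries topleft and bottomright pixel coordinates, drawing the detections is straightforward. A minimal sketch continuing the snippet above (colors, font, and the output file name are arbitrary choices):

# Draw each detection returned by return_predict() onto the image.
for det in result:
    tl = (det["topleft"]["x"], det["topleft"]["y"])
    br = (det["bottomright"]["x"], det["bottomright"]["y"])
    caption = "{} {:.2f}".format(det["label"], det["confidence"])
    cv2.rectangle(imgcv, tl, br, (0, 255, 0), 2)
    cv2.putText(imgcv, caption, (tl[0], tl[1] - 5),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
cv2.imwrite("sample_dog_pred.jpg", imgcv)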

Save the built graph to a protobuf file (.pb)

## Saving the latest checkpoint to protobuf file
flow --model cfg/yolo-new.cfg --load -1 --savepb

## Saving graph and weights to protobuf file
flow --model cfg/yolo.cfg --load bin/yolo.weights --savepb

When saving the .pb file, a .meta file will also be generated alongside it. This .meta file is a JSON dump of everything in the meta dictionary that contains information necessary for post-processing, such as anchors and labels. This way, everything you need to make predictions from the graph and do post-processing is contained in those two files - no need to have the .cfg or any labels file tagging along.
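
Since the .meta file is plain JSON, it can be inspected with the standard json module. A small sketch, assuming the built_graph/ location used in the command further below and that the dictionary exposes the anchors and labels mentioned above (the exact key names are an assumption):

import json

# built_graph/yolo.meta: the JSON dump assumed to be written next to yolo.pb by --savepb
with open("built_graph/yolo.meta") as f:
    meta = json.load(f)

# Key names assumed from the description above (labels and anchors for post-processing)
print(meta.get("labels"))
print(meta.get("anchors"))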

The created .pb file can be used to migrate the graph to mobile devices (Java / C++ / Objective-C++). The names of the input and output tensors are 'input' and 'output', respectively. For further usage of this protobuf file, please refer to the official TensorFlow documentation on the C++ API here. To run it in, say, an iOS application, simply add the file to Bundle Resources and update the path to this file inside the source code.

Also, darkflow supports loading from a .pb and .meta file for generating predictions (instead of loading from a .cfg and checkpoint or .weights).

## Forward images in sample_img for predictions based on protobuf file
flow --pbLoad built_graph/yolo.pb --metaLoad built_graph/yolo.meta --imgdir sample_img/

If you'd like to load a .pb and .meta file when using return_predict() you can set the "pbLoad" and "metaLoad" options in place of the "model" and "load" options you would normally set.
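
A minimal sketch of that, reusing the API shown above with the two options swapped in (the threshold value is arbitrary):

from darkflow.net.build import TFNet
import cv2

# Load from the frozen graph and its .meta file instead of a .cfg and weights/checkpoint
options = {
    "pbLoad": "built_graph/yolo.pb",
    "metaLoad": "built_graph/yolo.meta",
    "threshold": 0.5,
}
tfnet = TFNet(options)

imgcv = cv2.imread("./sample_img/sample_dog.jpg")
print(tfnet.return_predict(imgcv))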

That's all.

Comments
  • Guideline to train against other datasets with different classes

    A guideline to train against other datasets such as the udacity self driving dataset would be much appreciated.

    Do I create a labels.txt in the root folder, and specify a model name outside of coco_models and voc_models listed in darkflow/misc.py?

    opened by y22ma 46
  • AssertionError: Cannot capture source

    I'm running on Docker via nvidia-docker; the GPU should be correctly configured:

    root@5ca781228ff4:/darkflow/darkflow# nvidia-smi
    Mon Apr 10 07:54:49 2017       
    +-----------------------------------------------------------------------------+
    | NVIDIA-SMI 367.57                 Driver Version: 367.57                    |
    |-------------------------------+----------------------+----------------------+
    | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |===============================+======================+======================|
    |   0  GRID K520           Off  | 0000:00:03.0     Off |                  N/A |
    | N/A   34C    P8    17W / 125W |      0MiB /  4036MiB |      0%      Default |
    +-------------------------------+----------------------+----------------------+
                                                                                   
    +-----------------------------------------------------------------------------+
    | Processes:                                                       GPU Memory |
    |  GPU       PID  Type  Process name                               Usage      |
    |=============================================================================|
    |  No running processes found                                                 |
    +-----------------------------------------------------------------------------+
    

    and I get

    root@5ca781228ff4:/darkflow/darkflow# ./flow --model cfg/yolo.cfg --load bin/yolo.weights --demo samples/video_1.avi --gpu .5
    Parsing ./cfg/yolo.cfg
    Parsing cfg/yolo.cfg
    Loading bin/yolo.weights ...
    Successfully identified 269862452 bytes
    Finished in 0.01864767074584961s
    Model has a coco model name, loading coco labels.
    
    Building net ...
    Source | Train? | Layer description                | Output size
    -------+--------+----------------------------------+---------------
           |        | input                            | (?, 416, 416, 3)
     Load  |  Yep!  | conv 3x3p1_1  +bnorm  leaky      | (?, 416, 416, 32)
     Load  |  Yep!  | maxp 2x2p0_2                     | (?, 208, 208, 32)
     Load  |  Yep!  | conv 3x3p1_1  +bnorm  leaky      | (?, 208, 208, 64)
     Load  |  Yep!  | maxp 2x2p0_2                     | (?, 104, 104, 64)
     Load  |  Yep!  | conv 3x3p1_1  +bnorm  leaky      | (?, 104, 104, 128)
     Load  |  Yep!  | conv 1x1p0_1  +bnorm  leaky      | (?, 104, 104, 64)
     Load  |  Yep!  | conv 3x3p1_1  +bnorm  leaky      | (?, 104, 104, 128)
     Load  |  Yep!  | maxp 2x2p0_2                     | (?, 52, 52, 128)
     Load  |  Yep!  | conv 3x3p1_1  +bnorm  leaky      | (?, 52, 52, 256)
     Load  |  Yep!  | conv 1x1p0_1  +bnorm  leaky      | (?, 52, 52, 128)
     Load  |  Yep!  | conv 3x3p1_1  +bnorm  leaky      | (?, 52, 52, 256)
     Load  |  Yep!  | maxp 2x2p0_2                     | (?, 26, 26, 256)
     Load  |  Yep!  | conv 3x3p1_1  +bnorm  leaky      | (?, 26, 26, 512)
     Load  |  Yep!  | conv 1x1p0_1  +bnorm  leaky      | (?, 26, 26, 256)
     Load  |  Yep!  | conv 3x3p1_1  +bnorm  leaky      | (?, 26, 26, 512)
     Load  |  Yep!  | conv 1x1p0_1  +bnorm  leaky      | (?, 26, 26, 256)
     Load  |  Yep!  | conv 3x3p1_1  +bnorm  leaky      | (?, 26, 26, 512)
     Load  |  Yep!  | maxp 2x2p0_2                     | (?, 13, 13, 512)
     Load  |  Yep!  | conv 3x3p1_1  +bnorm  leaky      | (?, 13, 13, 1024)
     Load  |  Yep!  | conv 1x1p0_1  +bnorm  leaky      | (?, 13, 13, 512)
     Load  |  Yep!  | conv 3x3p1_1  +bnorm  leaky      | (?, 13, 13, 1024)
     Load  |  Yep!  | conv 1x1p0_1  +bnorm  leaky      | (?, 13, 13, 512)
     Load  |  Yep!  | conv 3x3p1_1  +bnorm  leaky      | (?, 13, 13, 1024)
     Load  |  Yep!  | conv 3x3p1_1  +bnorm  leaky      | (?, 13, 13, 1024)
     Load  |  Yep!  | conv 3x3p1_1  +bnorm  leaky      | (?, 13, 13, 1024)
     Load  |  Yep!  | concat [16]                      | (?, 26, 26, 512)
     Load  |  Yep!  | local flatten 2x2                | (?, 13, 13, 2048)
     Load  |  Yep!  | concat [26, 24]                  | (?, 13, 13, 3072)
     Load  |  Yep!  | conv 3x3p1_1  +bnorm  leaky      | (?, 13, 13, 1024)
     Load  |  Yep!  | conv 1x1p0_1    linear           | (?, 13, 13, 425)
    -------+--------+----------------------------------+---------------
    GPU mode with 0.5 usage
    2017-04-10 07:53:24.473692: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
    2017-04-10 07:53:24.473754: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
    2017-04-10 07:53:24.473782: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
    2017-04-10 07:53:24.527216: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:901] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
    2017-04-10 07:53:24.527519: I tensorflow/core/common_runtime/gpu/gpu_device.cc:887] Found device 0 with properties: 
    name: GRID K520
    major: 3 minor: 0 memoryClockRate (GHz) 0.797
    pciBusID 0000:00:03.0
    Total memory: 3.94GiB
    Free memory: 3.91GiB
    2017-04-10 07:53:24.527570: I tensorflow/core/common_runtime/gpu/gpu_device.cc:908] DMA: 0 
    2017-04-10 07:53:24.527607: I tensorflow/core/common_runtime/gpu/gpu_device.cc:918] 0:   Y 
    2017-04-10 07:53:24.527656: I tensorflow/core/common_runtime/gpu/gpu_device.cc:977] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GRID K520, pci bus id: 0000:00:03.0)
    Finished in 2.0522050857543945s
    
    Press [ESC] to quit demo
    Traceback (most recent call last):
      File "./flow", line 52, in <module>
        tfnet.camera(FLAGS.demo, FLAGS.saveVideo)
      File "/darkflow/darkflow/net/help.py", line 78, in camera
        'Cannot capture source'
    AssertionError: Cannot capture source
    

    The run command is as usual done attaching the driver and the device as device0:

    docker run -it --device=/dev/nvidiactl --device=/dev/nvidia-uvm --device=/dev/nvidia0 --volume-driver nvidia-docker -v nvidia_driver_367.57:/usr/local/nvidia:ro $IMAGE $CMD
    
    opened by loretoparisi 36
  • Assertion error: over-read yolo.weights

    I tried to run ./flow --test test/ --model cfg/yolo.cfg --load yolo.weights --gpu 1.0

    I get loading yolo.weights .... and then assertion error: Over-read yolo.weights.

    However, when I run the same command with tiny-yolo.cfg and tiny-yolo.weights it works. I downloaded the weights file from https://pjreddie.com/darknet/yolo/

    opened by ramarajan09 35
  • How do I evaluate accuracy of the test set?

    One way is to calculate "Mean average precision: mAP". But I'm not aware of this feature implemented in darkflow. Do you have any suggestions on which darknet repo or person has written a script to do this?

    opened by off99555 34
  • AssertionError: expect 64701556 bytes, found 180357512

    Apologies if this is not an issue but rather me! I get the following error when I run this command: ./flow --model cfg/tiny-yolo.cfg --load bin/yolo-tiny.weights

    /Users/localadmin/Downloads/darkflow-master/darkflow/dark/darknet.py:54: UserWarning: ./cfg/yolo-tiny.cfg not found, use cfg/tiny-yolo.cfg instead
      cfg_path, FLAGS.model))
    Parsing cfg/tiny-yolo.cfg
    Loading bin/yolo-tiny.weights ...
    Traceback (most recent call last):
      File "./flow", line 45, in <module>
        tfnet = TFNet(FLAGS)
      File "/Users/localadmin/Downloads/darkflow-master/darkflow/net/build.py", line 55, in __init__
        darknet = Darknet(FLAGS)
      File "/Users/localadmin/Downloads/darkflow-master/darkflow/dark/darknet.py", line 27, in __init__
        self.load_weights()
      File "/Users/localadmin/Downloads/darkflow-master/darkflow/dark/darknet.py", line 82, in load_weights
        wgts_loader = loader.create_loader(*args)
      File "/Users/localadmin/Downloads/darkflow-master/darkflow/utils/loader.py", line 105, in create_loader
        return load_type(path, cfg)
      File "/Users/localadmin/Downloads/darkflow-master/darkflow/utils/loader.py", line 19, in __init__
        self.load(*args)
      File "/Users/localadmin/Downloads/darkflow-master/darkflow/utils/loader.py", line 77, in load
        walker.offset, walker.size)
    AssertionError: expect 64701556 bytes, found 180357512

    Also, as I'm new to machine learning, would you point me to any good article on how to create a new cfg file and how to generate a weight file from scratch? I can't find this online!

    opened by mothman1 27
  • ResourceExhaustedError

    I just updated to the latest commit with the new clihandler stuff, and I got an error when training. Before this commit my training worked fine.

    I tried:
    • lowering and increasing GPU usage => doesn't work
    • decreasing batch size => doesn't work

    python3.5 flow --train --model cfg/yolo-sak.cfg --load bin/yolo.weights --gpu 0.9 --trainer adam --dataset Fire/ --annotation AnnotationFire/ --batch 32 --epoch 60 --save 22040

    ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[32,304,304,64]
    [[Node: gradients/4-leaky_grad/GreaterEqual = GreaterEqual[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"](mul_1, BiasAdd_1)]]
    [[Node: mul_31/_117 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_6011_mul_31", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]

    Please help

    opened by borasy 26
  • New Dataset Training Error - "AttributeError: 'NoneType' object has no attribute 'shape'"

    New Dataset Training Error - "AttributeError: 'NoneType' object has no attribute 'shape'"

    Hi,

    I am training a new dataset. However, the training always runs for a few steps and then suddenly encounters the following error: "AttributeError: 'NoneType' object has no attribute 'shape'". I think the annotation format and the filenames in the Annotations folder are correct, since training is able to run for a few steps, and I am running out of ideas on how to troubleshoot further.

    Appreciate any ideas or help on this.

    Thank you.

    root@dd84391fd870:/ml/darkflow# flow --model cfg/tiny-yolo-new.cfg --train --dataset "../data/new/JPEGImages" --annotation "../data/new/Annotations"
    
    Parsing cfg/tiny-yolo-new.cfg
    Loading None ...
    Finished in 0.00011324882507324219s
    
    Building net ...
    Source | Train? | Layer description                | Output size
    -------+--------+----------------------------------+---------------
           |        | input                            | (?, 416, 416, 3)
     Init  |  Yep!  | conv 3x3p1_1  +bnorm  leaky      | (?, 416, 416, 16)
     Load  |  Yep!  | maxp 2x2p0_2                     | (?, 208, 208, 16)
     Init  |  Yep!  | conv 3x3p1_1  +bnorm  leaky      | (?, 208, 208, 32)
     Load  |  Yep!  | maxp 2x2p0_2                     | (?, 104, 104, 32)
     Init  |  Yep!  | conv 3x3p1_1  +bnorm  leaky      | (?, 104, 104, 64)
     Load  |  Yep!  | maxp 2x2p0_2                     | (?, 52, 52, 64)
     Init  |  Yep!  | conv 3x3p1_1  +bnorm  leaky      | (?, 52, 52, 128)
     Load  |  Yep!  | maxp 2x2p0_2                     | (?, 26, 26, 128)
     Init  |  Yep!  | conv 3x3p1_1  +bnorm  leaky      | (?, 26, 26, 256)
     Load  |  Yep!  | maxp 2x2p0_2                     | (?, 13, 13, 256)
     Init  |  Yep!  | conv 3x3p1_1  +bnorm  leaky      | (?, 13, 13, 512)
     Load  |  Yep!  | maxp 2x2p0_1                     | (?, 13, 13, 512)
     Init  |  Yep!  | conv 3x3p1_1  +bnorm  leaky      | (?, 13, 13, 1024)
     Init  |  Yep!  | conv 3x3p1_1  +bnorm  leaky      | (?, 13, 13, 1024)
     Init  |  Yep!  | conv 1x1p0_1    linear           | (?, 13, 13, 40)
    -------+--------+----------------------------------+---------------
    Running entirely on CPU
    cfg/tiny-yolo-new.cfg loss hyper-parameters:
    	H       = 13
    	W       = 13
    	box     = 5
    	classes = 3
    	scales  = [1.0, 5.0, 1.0, 1.0]
    Building cfg/tiny-yolo-new.cfg loss
    Building cfg/tiny-yolo-new.cfg train op
    W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
    W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
    W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
    Finished in 7.207982778549194s
    
    Enter training ...
    
    cfg/tiny-yolo-new.cfg parsing ../data/new/Annotations
    Parsing for ['tank', 'truck', 'apc'] 
    [====================>]100%  000296.xml
    Statistics:
    apc: 48
    tank: 70
    truck: 24
    Dataset size: 130
    Dataset of 130 instance(s)
    Training statistics: 
    	Learning rate : 1e-05
    	Batch size    : 16
    	Epoch number  : 1000
    	Backup every  : 2000
    step 1 - loss 106.16172790527344 - moving ave loss 106.16172790527344
    step 2 - loss 106.1773681640625 - moving ave loss 106.16329193115234
    step 3 - loss 106.09341430664062 - moving ave loss 106.15630416870117
    step 4 - loss 106.24054718017578 - moving ave loss 106.16472846984863
    step 5 - loss 106.12216186523438 - moving ave loss 106.1604718093872
    step 6 - loss 106.24075317382812 - moving ave loss 106.1684999458313
    Traceback (most recent call last):
      File "/usr/local/bin/flow", line 6, in <module>
        cliHandler(sys.argv)
      File "/usr/local/lib/python3.5/dist-packages/darkflow/cli.py", line 29, in cliHandler
        print('Enter training ...'); tfnet.train()
      File "/usr/local/lib/python3.5/dist-packages/darkflow/net/flow.py", line 37, in train
        for i, (x_batch, datum) in enumerate(batches):
      File "/usr/local/lib/python3.5/dist-packages/darkflow/net/yolo/data.py", line 113, in shuffle
        inp, new_feed = self._batch(train_instance)
      File "/usr/local/lib/python3.5/dist-packages/darkflow/net/yolov2/data.py", line 27, in _batch
        img = self.preprocess(path, allobj)
      File "/usr/local/lib/python3.5/dist-packages/darkflow/net/yolo/predict.py", line 61, in preprocess
        result = imcv2_affine_trans(im)
      File "/usr/local/lib/python3.5/dist-packages/darkflow/utils/im_transform.py", line 19, in imcv2_affine_trans
        h, w, c = im.shape
    AttributeError: 'NoneType' object has no attribute 'shape'
    
    
    opened by wendq86 23
  • No module named cy_yolo_findboxes

    When I run a test, like ./flow --model jason/yolo2_voc2007_2017.cfg --load jason/yolo-voc.weights --test sample_img/, it shows:

    Traceback (most recent call last):
      File "./flow", line 4, in <module>
        from darkflow.cli import cliHandler
      File "/home/jason/code/darkflow/darkflow/cli.py", line 3, in <module>
        from darkflow.net.build import TFNet
      File "/home/jason/code/darkflow/darkflow/net/build.py", line 7, in <module>
        from .framework import create_framework
      File "/home/jason/code/darkflow/darkflow/net/framework.py", line 1, in <module>
        from . import yolo
      File "/home/jason/code/darkflow/darkflow/net/yolo/__init__.py", line 2, in <module>
        from . import predict
      File "/home/jason/code/darkflow/darkflow/net/yolo/predict.py", line 6, in <module>
        from darkflow.cython_utils.cy_yolo_findboxes import yolo_box_constructor
    ImportError: No module named cy_yolo_findboxes

    There is a cy_yolo_findboxes.pyx file in the cython_utils directory. How do I solve this? Thanks.

    opened by myBestLove 23
  • AssertionError: bin/yolo-tiny.weights not found

    I have installed the darkflow, but when trying to train it on the yolo tiny weights i get this:

    Traceback (most recent call last):
      File "flow", line 6, in <module>
        cliHandler(sys.argv)
    
    File "/Users/CWT/Downloads/darkflow-master/darkflow/cli.py", line 22, in cliHandler
        tfnet = TFNet(FLAGS)
    
    File "/Users/CWT/Downloads/darkflow-master/darkflow/net/build.py", line 58, in __init__
        darknet = Darknet(FLAGS)
    
    File "/Users/CWT/Downloads/darkflow-master/darkflow/dark/darknet.py", line 13, in __init__
        self.get_weight_src(FLAGS)
    
    File "/Users/CWT/Downloads/darkflow-master/darkflow/dark/darknet.py", line 47, in get_weight_src
        '{} not found'.format(FLAGS.load)
    

    AssertionError: bin/yolo-tiny.weights not found

    Any ideas ?

    opened by dankm8 22
  • Problem with load checkpoint

    I have a little problem with checkpoints. When I train a model, the program saves checkpoints under "./ckpt/cfg/". Loading works with "--load [numberstep]", but when I load the last checkpoint with "--load -1", the program looks for the checkpoint file in "./ckpt/", where there is none; the checkpoint file is in ./ckpt/cfg/ (see attached screenshots).

    opened by beebrain 22
  • ValueError: cannot convert float NaN to integer

    Dear author, thank you very much for porting YOLO to TensorFlow. I tried the demo and got the following error. Could you please have a look? First, I downloaded yolo-tiny.weights from the YOLO website.

    Then,

    python clean.py /home/karl/Documents/VOCdevkit/VOC2012/Annotations
    [===================>]100%
    Statistics:
    pottedplant: 13442
    person: 17401
    tvmonitor: 15512
    Dataset size: 26089
    

    At last, I run the test code:

    python tensor.py --test data --model tiny
    I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcublas.so locally
    I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcudnn.so locally
    I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcufft.so locally
    I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcuda.so.1 locally
    I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcurand.so locally
    parsing yolo-tiny.cfg
    Traceback (most recent call last):
      File "tensor.py", line 37, in <module>
        yoloNet = YOLO(FLAGS.model + int(step > 0) * '-{}'.format(step))
      File "/home/karl/Documents/online_code/yolotf/Yolo.py", line 57, in __init__
        self.build(model)
      File "/home/karl/Documents/online_code/yolotf/Yolo.py", line 71, in build
        for i, info in enumerate(layers):
      File "/home/karl/Documents/online_code/yolotf/configs/process.py", line 60, in cfg_yielder
        size = int(size)
    ValueError: cannot convert float NaN to integer
    

    How to fix that? Thank you very much.

    opened by jiankang1991 22
  • tensorflow compatibility with TF 2

    • Made changes to make it compatible with TF 2.x
    • Removed old tf.contrib.layers and replaced them with TF Slim
    • Converted TF 1.x API usage to tf.compat.v1
    • More on this in the TensorFlow documentation
    opened by S-FK 0
  • Tensorflow update to TF 2.x

    Since TensorFlow has been updated to TensorFlow 2, darkflow runs into a lot of errors. There is a strong need to migrate the code from TensorFlow 1.x to TensorFlow 2.

    opened by S-FK 0
  • Bump tensorflow from 1.4.1 to 2.9.3 in /test

    Bumps tensorflow from 1.4.1 to 2.9.3.

    Release notes

    Sourced from tensorflow's releases.

    TensorFlow 2.9.3

    Release 2.9.3

    This release introduces several vulnerability fixes:

    TensorFlow 2.9.2

    Release 2.9.2

    This release introduces several vulnerability fixes:

    ... (truncated)

    Changelog

    Sourced from tensorflow's changelog.

    Release 2.9.3

    This release introduces several vulnerability fixes:

    Release 2.8.4

    This release introduces several vulnerability fixes:

    ... (truncated)

    Commits
    • a5ed5f3 Merge pull request #58584 from tensorflow/vinila21-patch-2
    • 258f9a1 Update py_func.cc
    • cd27cfb Merge pull request #58580 from tensorflow-jenkins/version-numbers-2.9.3-24474
    • 3e75385 Update version numbers to 2.9.3
    • bc72c39 Merge pull request #58482 from tensorflow-jenkins/relnotes-2.9.3-25695
    • 3506c90 Update RELEASE.md
    • 8dcb48e Update RELEASE.md
    • 4f34ec8 Merge pull request #58576 from pak-laura/c2.99f03a9d3bafe902c1e6beb105b2f2417...
    • 6fc67e4 Replace CHECK with returning an InternalError on failing to create python tuple
    • 5dbe90a Merge pull request #58570 from tensorflow/r2.9-7b174a0f2e4
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 0
  • Convert annotation video mat files to COCO

    Hello, I have a problem. I am part of a Ukrainian team that is developing a detection system that could help save many lives. I'm on the team as a data scientist. I ran into a problem: I need to convert video annotations in .mat format to COCO format in order to train YOLOv7. Please help. (Screenshot: 2022-10-29, 19:50:06)

    opened by UkranianAndreii 0
  • Training own model

    Dear thrieu,

    I could not figure out how to train my own model with my own classes. Where should I store the images and their annotations (bounding boxes, classes), etc.?

    Could you please provide more information here?

    opened by DenisTis 0