Keras-retinanet - Keras implementation of RetinaNet object detection.

Overview

Keras implementation of RetinaNet object detection as described in Focal Loss for Dense Object Detection by Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He and Piotr Dollár.

⚠️ Deprecated

This repository is deprecated in favor of the torchvision module. This project should work with keras 2.4 and tensorflow 2.3.0; newer versions might break support. For more information, check here.

Installation

  1. Clone this repository.
  2. In the repository, execute pip install . --user. Note that, due to inconsistencies in how tensorflow should be installed, this package does not define a dependency on tensorflow, as pip would otherwise try to install it (which, at least on Arch Linux, results in an incorrect installation). Please make sure tensorflow is installed as per your system's requirements.
  3. Alternatively, you can run the code directly from the cloned repository; however, you first need to run python setup.py build_ext --inplace to compile the Cython code.
  4. Optionally, install pycocotools if you want to train / test on the MS COCO dataset by running pip install --user git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI.

Testing

An example of testing the network can be seen in this Notebook. In general, inference of the network works as follows:

boxes, scores, labels = model.predict_on_batch(inputs)

Where boxes is shaped (None, None, 4) (for (x1, y1, x2, y2)), scores is shaped (None, None) (classification score) and labels is shaped (None, None) (label corresponding to the score). In all three outputs, the first dimension is the batch dimension and the second dimension indexes the list of detections.

Loading models can be done in the following manner:

from keras_retinanet.models import load_model
model = load_model('/path/to/model.h5', backbone_name='resnet50')

Execution time on an NVIDIA Pascal Titan X is roughly 75 ms for an image of shape 1000x800x3.
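
Tying the snippets above together, a minimal inference sketch might look as follows (the image path and the 0.5 score threshold are placeholder choices; the helper functions are assumed to live in keras_retinanet.utils.image):

import numpy as np

from keras_retinanet.models import load_model
from keras_retinanet.utils.image import read_image_bgr, preprocess_image, resize_image

# Load a converted inference model (see the next section for conversion).
model = load_model('/path/to/model.h5', backbone_name='resnet50')

# Load and prepare the image the same way the training pipeline does.
image = read_image_bgr('/path/to/image.jpg')
image = preprocess_image(image)
image, scale = resize_image(image)

# Predict on a batch of one image.
boxes, scores, labels = model.predict_on_batch(np.expand_dims(image, axis=0))

# Boxes were predicted at the resized scale; map them back to the original image.
boxes /= scale

# Detections come out sorted by score, so we can stop at the first low-scoring one.
for box, score, label in zip(boxes[0], scores[0], labels[0]):
    if score < 0.5:
        break
    print(label, score, box)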

Converting a training model to inference model

The training procedure of keras-retinanet works with training models. These are stripped-down versions of the inference model, containing only the layers necessary for training (the regression and classification outputs). If you wish to run inference with a model (perform object detection on an image), you need to convert the trained model to an inference model. This is done as follows:

# Running directly from the repository:
keras_retinanet/bin/convert_model.py /path/to/training/model.h5 /path/to/save/inference/model.h5

# Using the installed script:
retinanet-convert-model /path/to/training/model.h5 /path/to/save/inference/model.h5

Most scripts (like retinanet-evaluate) also support converting on the fly, using the --convert-model argument.
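
The conversion can also be done from Python; a minimal sketch, assuming convert_model is exposed from keras_retinanet.models as in recent versions (paths are placeholders):

from keras_retinanet import models

# Load the training model and strip it down to an inference model.
model = models.load_model('/path/to/training/model.h5', backbone_name='resnet50')
model = models.convert_model(model)
model.save('/path/to/save/inference/model.h5')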

Training

keras-retinanet can be trained using this script. Note that the train script uses relative imports since it is inside the keras_retinanet package. If you want to adjust the script for your own use outside of this repository, you will need to switch it to use absolute imports.

If you installed keras-retinanet correctly, the train script will be installed as retinanet-train. However, if you make local modifications to the keras-retinanet repository, you should run the script directly from the repository. That will ensure that your local changes will be used by the train script.

The default backbone is resnet50. You can change this using the --backbone=xxx argument when running the script. xxx can be one of the backbones in the resnet models (resnet50, resnet101, resnet152), the mobilenet models (mobilenet128_1.0, mobilenet128_0.75, mobilenet160_1.0, etc.), the densenet models or the vgg models. The different options are defined by each model in its corresponding Python script (resnet.py, mobilenet.py, etc.).

Trained models can't be used directly for inference. To convert a trained model to an inference model, check here.

Usage

For training on Pascal VOC, run:

# Running directly from the repository:
keras_retinanet/bin/train.py pascal /path/to/VOCdevkit/VOC2007

# Using the installed script:
retinanet-train pascal /path/to/VOCdevkit/VOC2007

For training on MS COCO, run:

# Running directly from the repository:
keras_retinanet/bin/train.py coco /path/to/MS/COCO

# Using the installed script:
retinanet-train coco /path/to/MS/COCO

For training on the Open Images Dataset (OID) or participating in the OID challenges, run:

# Running directly from the repository:
keras_retinanet/bin/train.py oid /path/to/OID

# Using the installed script:
retinanet-train oid /path/to/OID

# You can also specify a list of labels if you want to train on a subset,
# by adding the argument '--labels-filter':
keras_retinanet/bin/train.py oid /path/to/OID --labels-filter=Helmet,Tree

# You can also specify a parent label if you want to train on a branch
# from the semantic hierarchical tree (i.e. a parent and all its children)
# (https://storage.googleapis.com/openimages/challenge_2018/bbox_labels_500_hierarchy_visualizer/circle.html)
# by adding the argument '--parent-label':
keras_retinanet/bin/train.py oid /path/to/OID --parent-label=Boat

For training on KITTI, run:

# Running directly from the repository:
keras_retinanet/bin/train.py kitti /path/to/KITTI

# Using the installed script:
retinanet-train kitti /path/to/KITTI

If you want to prepare the dataset, you can use the following script:
https://github.com/NVIDIA/DIGITS/blob/master/examples/object-detection/prepare_kitti_data.py

For training on a custom dataset, a CSV file can be used to pass the data. See below for more details on the format of these CSV files. To train using your CSV files, run:

# Running directly from the repository:
keras_retinanet/bin/train.py csv /path/to/csv/file/containing/annotations /path/to/csv/file/containing/classes

# Using the installed script:
retinanet-train csv /path/to/csv/file/containing/annotations /path/to/csv/file/containing/classes

In general, the steps to train on your own datasets are:

  1. Create a model by calling, for instance, keras_retinanet.models.backbone('resnet50').retinanet(num_classes=80), and compile it. Empirically, the following compile arguments have been found to work well:
model.compile(
    loss={
        'regression'    : keras_retinanet.losses.smooth_l1(),
        'classification': keras_retinanet.losses.focal()
    },
    optimizer=keras.optimizers.Adam(lr=1e-5, clipnorm=0.001)
)
  2. Create generators for training and testing data (an example is shown in keras_retinanet.preprocessing.pascal_voc.PascalVocGenerator).
  3. Use model.fit_generator to start training (a rough sketch combining these steps follows this list).
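
As a rough end-to-end sketch of these three steps (the dataset path and training hyperparameters are placeholders; the Pascal VOC generator is assumed to take the dataset directory and split name as its first two arguments and to behave as a keras Sequence):

import keras
import keras_retinanet.losses
from keras_retinanet import models
from keras_retinanet.preprocessing.pascal_voc import PascalVocGenerator

# Step 1: create and compile the model (Pascal VOC has 20 classes).
model = models.backbone('resnet50').retinanet(num_classes=20)
model.compile(
    loss={
        'regression'    : keras_retinanet.losses.smooth_l1(),
        'classification': keras_retinanet.losses.focal()
    },
    optimizer=keras.optimizers.Adam(lr=1e-5, clipnorm=0.001)
)

# Step 2: create a generator for the training data.
train_generator = PascalVocGenerator('/path/to/VOCdevkit/VOC2007', 'trainval')

# Step 3: start training.
model.fit_generator(train_generator, steps_per_epoch=len(train_generator), epochs=50)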

Pretrained models

All models can be downloaded from the releases page.

MS COCO

Results using the cocoapi are shown below (note: according to the paper, this configuration should achieve a mAP of 0.357).

 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.350
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.537
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.374
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.191
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.383
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.472
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.306
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.491
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.533
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.345
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.577
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.681

Open Images Dataset

There are 3 RetinaNet models based on ResNet50, ResNet101 and ResNet152 trained on all 500 classes of the Open Images Dataset (thanks to @ZFTurbo).

Backbone    Image Size (px)   Small validation mAP   LB (Public)
ResNet50    768 - 1024        0.4594                 0.4223
ResNet101   768 - 1024        0.4986                 0.4520
ResNet152   600 - 800         0.4991                 0.4651

For more information, check @ZFTurbo's repository.

CSV datasets

The CSVGenerator provides an easy way to define your own datasets. It uses two CSV files: one file containing annotations and one file containing a class name to ID mapping.

Annotations format

The CSV file with annotations should contain one annotation per line. Images with multiple bounding boxes should use one row per bounding box. Note that indexing for pixel values starts at 0. The expected format of each line is:

path/to/image.jpg,x1,y1,x2,y2,class_name

By default the CSV generator will look for images relative to the directory of the annotations file.

Some images may not contain any labeled objects. To add these images to the dataset as negative examples, add an annotation where x1, y1, x2, y2 and class_name are all empty:

path/to/image.jpg,,,,,

A full example:

/data/imgs/img_001.jpg,837,346,981,456,cow
/data/imgs/img_002.jpg,215,312,279,391,cat
/data/imgs/img_002.jpg,22,5,89,84,bird
/data/imgs/img_003.jpg,,,,,

This defines a dataset with 3 images. img_001.jpg contains a cow. img_002.jpg contains a cat and a bird. img_003.jpg contains no interesting objects/animals.

Class mapping format

The class name to ID mapping file should contain one mapping per line. Each line should use the following format:

class_name,id

Indexing for classes starts at 0. Do not include a background class as it is implicit.

For example:

cow,0
cat,1
bird,2
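
Using the two files with the CSVGenerator is then a matter of passing their paths; a minimal sketch, assuming the generator lives in keras_retinanet.preprocessing.csv_generator (file paths are placeholders):

from keras_retinanet.preprocessing.csv_generator import CSVGenerator

# Annotations and class-mapping files in the formats described above.
train_generator = CSVGenerator('/data/annotations.csv', '/data/classes.csv')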

Anchor optimization

In some cases the default anchor configuration is not suitable for detecting objects in your dataset, for example if your objects are smaller than 32x32 px (the size of the smallest anchors). In that case it might be worthwhile to modify the anchor configuration; this can be done automatically by following the steps in the anchor-optimization repository. To use the generated configuration, check here for an example config file and then pass it to train.py using the --config parameter.
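
As an illustration of the expected structure, a config file passed via --config might look like the following (the values here are made up for illustration, not a recommendation):

[anchor_parameters]
sizes   = 32 64 128 256 512
strides = 8 16 32 64 128
ratios  = 0.5 1 2 3
scales  = 1 1.2 1.6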

Debugging

Creating your own dataset does not always work out of the box. There is a debug.py tool to help find the most common mistakes.

Particularly helpful is the --annotations flag, which displays your annotations on the images from your dataset. Annotations are colored green when there are anchors available and red when there are no anchors available. If an annotation doesn't have anchors available, it won't contribute to training. It is normal for a small number of annotations to show up in red, but if most or all annotations are red there is cause for concern. The most common issues are annotations that are too small or too oddly shaped (stretched out).
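
For example, to visualize the annotations of a CSV dataset (a hypothetical invocation; retinanet-debug is the installed name of debug.py, and the paths are placeholders):

# Using the installed script:
retinanet-debug csv /path/to/annotations.csv /path/to/classes.csv --annotations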

Results

MS COCO

Status

Example output images using keras-retinanet are shown below.

[Example results of RetinaNet on MS COCO]

Projects using keras-retinanet

If you have a project based on keras-retinanet and would like to have it published here, shoot me a message on Slack.

Notes

  • This repository requires Tensorflow 2.3.0 or higher.
  • This repository is tested using OpenCV 3.4.
  • This repository is tested using Python 2.7 and 3.6.

Contributions to this project are welcome.

Discussions

Feel free to join the #keras-retinanet Keras Slack channel for discussions and questions.

FAQ

  • I get the warning UserWarning: No training configuration found in save file: the model was not compiled. Compile it manually., should I be worried? This warning can safely be ignored during inference.
  • I get the error ValueError: not enough values to unpack (expected 3, got 2) during inference, what should I do? This means you are using a training model for inference. See https://github.com/fizyr/keras-retinanet#converting-a-training-model-to-inference-model for more information.
  • How do I do transfer learning? The easiest solution is to use the --weights argument when training. Keras will load models even if the number of classes doesn't match (it simply skips loading weights when there is a mismatch). Run for example retinanet-train --weights snapshots/some_coco_model.h5 pascal /path/to/pascal to transfer weights from a COCO model to a PascalVOC training session. If your dataset is small, you can also use the --freeze-backbone argument to freeze the backbone layers.
  • How do I change the number / shape of the anchors? The train tool allows passing a configuration file, where the anchor parameters can be adjusted. Check here for an example config file.
  • I get a loss of 0, what is going on? This mostly happens when none of the anchors "fit" on your objects, because they are most likely too small or elongated. You can verify this using the debug tool.
  • I have an older model, can I use it after an update of keras-retinanet? This depends on what has changed. If it is a change that doesn't affect the weights, you can "update" your model by creating a new retinanet model, loading your old weights using model.load_weights(weights_path, by_name=True) and saving this model (a sketch of this recipe follows this list). If the change has been too significant, you should retrain your model (you can try to load the weights from your old model when starting training; this might be a better starting position than ImageNet).
  • I get the error ModuleNotFoundError: No module named 'keras_retinanet.utils.compute_overlap', how do I fix this? Most likely you are running the code from the cloned repository. This is fine, but you need to compile some extensions for this to work (python setup.py build_ext --inplace).
  • How do I train on my own dataset? The steps to train on your dataset are roughly as follows:
    1. Prepare your dataset in the CSV format (a training and validation split is advised).
    2. Check that your dataset is correct using retinanet-debug.
    3. Train retinanet, preferably using the pretrained COCO weights (this gives a far better starting point, making training much quicker and more accurate). You can optionally perform evaluation on your validation set during training to keep track of how well it performs (advised).
    4. Convert your training model to an inference model.
    5. Evaluate your inference model on your test or validation set.
    6. Profit!
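
For the model-updating recipe in the FAQ above, a minimal sketch (the backbone, class count and paths are placeholders):

from keras_retinanet import models

# Recreate the current retinanet architecture, then load the old weights by name.
model = models.backbone('resnet50').retinanet(num_classes=80)
model.load_weights('/path/to/old/weights.h5', by_name=True)
model.save('/path/to/updated/model.h5')
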
Comments
  • pytonpath ???

    hi again,

    I installed the repo as stated in the README. In my ~/.bashrc I added "export PYTHONPATH=/home/ivision/keras-retinanet:$PYTHONPATH".

    I am getting the following error message. Is this an installation issue, or does it have something to do with my Python path? Because I changed the class names in ~/keras-retinanet/keras_retinanet/preprocessing ...

    Thanks!

    error message:

    File "scripts/train.py", line 205, in <module>
        callbacks=callbacks,
      File "/usr/local/lib/python2.7/dist-packages/keras/legacy/interfaces.py", line 87, in wrapper
        return func(*args, **kwargs)
      File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 2115, in fit_generator
        generator_output = next(output_generator)
      File "/usr/local/lib/python2.7/dist-packages/keras/utils/data_utils.py", line 735, in get
        six.reraise(value.__class__, value, value.__traceback__)
      File "/usr/local/lib/python2.7/dist-packages/keras/utils/data_utils.py", line 635, in data_generator_task
        generator_output = next(self._generator)
      File "build/bdist.linux-x86_64/egg/keras_retinanet/preprocessing/generator.py", line 226, in next
      File "build/bdist.linux-x86_64/egg/keras_retinanet/preprocessing/generator.py", line 198, in compute_input_output
      File "build/bdist.linux-x86_64/egg/keras_retinanet/preprocessing/generator.py", line 78, in load_annotations_group
      File "build/bdist.linux-x86_64/egg/keras_retinanet/preprocessing/pascal_voc.py", line 163, in load_annotations
      File "/home/ivision/.local/lib/python2.7/site-packages/six.py", line 737, in raise_from
        raise value
    ValueError: invalid annotations file: im8_scene01031-1071.xml: could not parse object #0: class name 'Projection' not found in classes: ['sheep', 'horse', 'bicycle', 'bottle', 'cow', 'sofa', 'bus', 'dog', 'cat', 'person', 'train', 'diningtable', 'aeroplane', 'car', 'pottedplant', 'tvmonitor', 'chair', 'bird', 'boat', 'motorbike']
    
    opened by nealzGu 49
  • NMS is not working

    I am having an issue with NMS. My network trains just fine, but when I use it to predict, I get a lot of spurious boxes on the object. It seems that NMS is not working. I am using the version from 3 days ago. Does anybody have the same issue, or a solution to this?

    opened by twinanda 36
  • Resnet101 working on one specific dataset only

    It works with only one specific dataset, in which all the images are of equal size. If I use images of different sizes, it gives this error: tensorflow.python.framework.errors_impl.InvalidArgumentError: Inputs to operation loss/classification_loss/Select_1 of type Select must have the same size and shape. Input 0: [13869,1] != input 1: [13869,3] [[{{node loss/classification_loss/Select_1}}]]

    Can you help me with this?

    opened by MoizTaimuri 34
  • Training slows down

    I'm training mobilenet on coco with default parameters (batch_size=1, etc., as specified here: https://github.com/fizyr/keras-retinanet/issues/300).

    I've seen that from time to time (every 2 or 3 epochs) an epoch takes much more time, and I don't understand why.

    Normal epoch speed: 2939s 294ms/step
    Slow epoch speed: 28566s 3s/step

    I've tried to stop the training and restart it as soon as I see this, and then the speed goes back to normal. I've also tried to benchmark the generator when the training speed gets low, but there was no disk IO reading speed issue.

    It is as if after 2 or 3 epochs the training is slowed down by something, and I don't know what...

    opened by lvaleriu 34
  • Not able to convert trained model for Tensorflow Serving API

    First of all, thanks for taking the time to make this amazing repo.

    While using this repo I am stuck in the following situation: I am trying to convert a trained Keras model into a format compatible with TensorFlow Serving. I have the following two pieces of code which I am trying to use for the conversion.

    1. Using model variable:
    from __future__ import print_function
    import os
    import shutil
    
    from glob import glob
    
    from keras.models import load_model
    from keras.preprocessing.image import img_to_array
    from keras.applications import imagenet_utils
    
    from tensorflow.python.keras.estimator import model_to_estimator
    import tensorflow as tf
    import keras_resnet
    
    from tensorflow.python.keras._impl.keras.models import Model
    from keras_retinanet import models
    import keras_resnet.models
    from keras_retinanet.utils.image import read_image_bgr, preprocess_image, resize_image
    from keras_retinanet.utils.visualization import draw_box, draw_caption
    from keras_retinanet.utils.colors import label_color
    
    def export_for_serving(model_path, model):
      '''
      Converts model to the TensorFlow estimator and saves it to the disk
      :param model: keras model to prepare for serving
      '''
      export_dir = 'tf_serving_model/'
      if os.path.exists(export_dir):
        shutil.rmtree(export_dir)
    
      tf_estimator = model_to_estimator(keras_model=model)
    
      tf_estimator.export_savedmodel(
        export_dir,
        serving_input_receiver_fn,
        strip_default_attrs=True)
    
    model_path = os.path.join('.', 'snapshots', 'resnet50_csv_15_model.h5')
    model = Model(models.load_model(model_path, backbone_name='resnet50'))
    print("loaded model")
    export_for_serving(model_path=model_path, model=model)
    

    Running this throws the following error:

    Traceback (most recent call last):  File "keras_to_tensorflow_serving.py", line 42, in <module>
       model = Model(models.load_model(model_path, backbone_name='resnet50'))
    TypeError: __init__() missing 1 required positional argument: 'outputs'
    
    2. Using the .h5 file stored on disk:
    from __future__ import print_function
    import os
    import shutil
    
    from glob import glob
    
    from keras.models import load_model
    
    from keras.preprocessing.image import img_to_array
    from keras.applications import imagenet_utils
    
    from tensorflow.python.keras.estimator import model_to_estimator
    import tensorflow as tf
    import keras_resnet
    
    from keras_retinanet import models
    import keras_resnet.models
    from keras_retinanet.utils.image import read_image_bgr, preprocess_image, resize_image
    from keras_retinanet.utils.visualization import draw_box, draw_caption
    from keras_retinanet.utils.colors import label_color
    
    def export_for_serving(model_path, model):
      '''
      Converts model to the TensorFlow estimator and saves it to the disk
      :param model: keras model to prepare for serving
      '''
      export_dir = 'tf_serving_model/'
      if os.path.exists(export_dir):
        shutil.rmtree(export_dir)
    
      tf_estimator = model_to_estimator(keras_model_path=model_path)
    
      tf_estimator.export_savedmodel(
        export_dir,
        serving_input_receiver_fn,
        strip_default_attrs=True)
    
    model_path = os.path.join('.', 'snapshots', 'resnet50_csv_15_model.h5')
    model = models.load_model(model_path, backbone_name='resnet50')
    print("Loaded Model")
    export_for_serving(model_path=model_path, model=model)
    

    I am receiving the following error:

    Traceback (most recent call last):
     File "keras_to_tensorflow_serving.py", line 42, in <module>
       export_for_serving(model_path=model_path, model=model)
     File "keras_to_tensorflow_serving.py", line 32, in export_for_serving
       tf_estimator = model_to_estimator(keras_model_path=model_path, custom_objects=custom_objects)
     File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/keras/_impl/keras/estimator.py", line 456, in model_to_estimator
       keras_model = models.load_model(keras_model_path)
     File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/keras/_impl/keras/engine/saving.py", line 240, in load_model
       model = model_from_config(model_config, custom_objects=custom_objects)
     File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/keras/_impl/keras/engine/saving.py", line 317, in model_from_config
       return deserialize(config, custom_objects=custom_objects)
     File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/keras/_impl/keras/layers/serialization.py", line 63, in deserialize
       printable_module_name='layer')
     File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/keras/_impl/keras/utils/generic_utils.py", line 171, in deserialize_keras_object
       list(custom_objects.items())))
     File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/keras/_impl/keras/engine/network.py", line 1060, in from_config
       process_layer(layer_data)
     File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/keras/_impl/keras/engine/network.py", line 1046, in process_layer
       layer = deserialize_layer(layer_data, custom_objects=custom_objects)
     File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/keras/_impl/keras/layers/serialization.py", line 63, in deserialize
       printable_module_name='layer')
     File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/keras/_impl/keras/utils/generic_utils.py", line 173, in deserialize_keras_object
       return cls.from_config(config['config'])
     File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/keras/_impl/keras/engine/base_layer.py", line 473, in from_config
       return cls(**config)
     File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/keras/_impl/keras/layers/normalization.py", line 107, in __init__
       **kwargs
     File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/layers/normalization.py", line 146, in __init__
       name=name, trainable=trainable, **kwargs)
     File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/keras/_impl/keras/engine/base_layer.py", line 128, in __init__
       raise TypeError('Keyword argument not understood:', kwarg)
    TypeError: ('Keyword argument not understood:', u'freeze')
    

    I am using the latest version of the repo using Keras 2.2.0. Also, the model I am trying to convert is trained using the same latest version of the repo.

    Can anyone please point out the mistake I am making? Any help would be greatly appreciated. Thanks in advance.

    opened by vivekpd15 29
  • efficientnet backbone

    I am looking into optimizing a retinanet model by using efficientnet. Would it be possible to use the pretrained models available from Google's TPU repository directly?

    I know that #233 mentions that a model needs to have 5 stages, although from the paper it seems like efficientnet has 9 stages for its B0 model (correct me if I'm wrong).

    enhancement feature request 
    opened by lukasschmit 27
  • StopIteration: attempt to get argmax of an empty sequence

    Hi again,

    I'm getting the following error:

    Traceback (most recent call last):
      File "examples/train_pascal.py", line 106, in <module>
        keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=10, verbose=1, mode='auto', epsilon=0.0001, cooldown=0, min_lr=0),
      File "/usr/local/lib/python2.7/dist-packages/keras/legacy/interfaces.py", line 87, in wrapper
        return func(*args, **kwargs)
      File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 2046, in fit_generator
        generator_output = next(output_generator)
      File "/usr/local/lib/python2.7/dist-packages/keras/utils/data_utils.py", line 518, in get
        raise StopIteration(e)
    StopIteration: attempt to get argmax of an empty sequence

    From here it seems as if it's a problem of no significant overlap? I checked the image and there should be eleven bboxes in there, covering almost all the space, so I'm not sure if that's really the error here.

    Any ideas?

    I will remove the image for now and continue training, to check if it's just the image...

    opened by nealzGu 27
  • Updates to debug.py

    Two small updates to debug.py that make my life easier and might be nice for others:

    1. Automatically adjust forward/back keys to "m" and "n" if on Mac

    2. Create a "--no-gui" mode, whereby instead of opening an image browser debug will loop through each image and write out a .png file with the annotations to the current directory.

    opened by jnmaloof 25
  • How to deal with multiple predictions (with different classes) on a same target

    Hello,

    First of all thank you so much for the very valuable implementation! It works consistently well on diverse problems for me.

    I would like to ask for your advice on an issue that I encountered often when using the code: a target can get two predicted boxes, with two different classes. How should I best deal with this?

    Of course the simplest solution is to keep only the one with the highest score. I'm also thinking of another solution that is to take the average of the classification output vectors, then use that vector to predict the class.

    Could I have your opinion please? Thank you again!

    opened by netw0rkf10w 25
  • Performance of batch_size > 1

    Theoretically, batch_size > 1 should work; in practice, however, performance appears to degrade. I've looked at the data generator and loss functions, but everything appears to be fine.

    I'm not sure where the degradation in performance comes from; perhaps a fresh set of eyes can help uncover the issue? My intuition says the problem is in the loss function, or perhaps a deeper issue in Keras / Tensorflow.

    By extension, this also breaks multi-GPU support, since that requires batch_size > 1.

    @awilliamson I think you ran some tests on this right? Do you still have them stored somewhere? Can you share them?

    bug help wanted 
    opened by hgaiser 25
  • Pretrained models for other backbone models

    Hi,

    Thank you for the great work! Is there any chance you may release the pretrained models for other backbone models, e.g. resnet101, resnet152 or mobilenet128_1.0, mobilenet128_0.75, mobilenet160_1.0? Currently we only have pretrained models for resnet50.

    That would be super helpful for transfer learning. Otherwise, I might need to train on COCO from scratch.

    Thanks a lot!

    help wanted 
    opened by ChengshuLi 25
  • Added support for HDF5 dataset and an HDF5 creation tool

    Added a retinanet-build-hdf5 entry point, which allows the creation of datasets in the HDF5 format, and a new option 'hdf5' for retinanet-train. This allows the dataset to be kept in main memory the whole time and drastically reduces training times.

    opened by madisi98 2
  • Add cutmix generator

    Add a cutmix generator for better results. The cutmix generator can be useful for training models and achieving better results, helping take existing models to a higher level of quality.

    Resources that may be useful:

    • https://arxiv.org/abs/1905.04899 - the original article
    • https://github.com/clovaai/CutMix-PyTorch - a pytorch implementation
    • https://github.com/DevBruce/CutMixImageDataGenerator_For_Keras - a Keras implementation (not compatible with retinanet)
    enhancement 
    opened by gosha20777 1
  • Travis tests no longer pass.

    The Travis tests no longer pass, as can be seen here: https://travis-ci.org/fizyr/keras-retinanet/builds/592460276.

    At a glance, it seems to be a problem with an incompatible version of tensorflow, but I didn't dig very deep.

    bug enhancement 
    opened by de-vri-es 29
Releases
  • 0.5.1(Jun 20, 2019)

  • 0.5.0(Oct 17, 2018)

    Changes since last release

    • Evaluation uses progressbar
    • Correct initialization of weights for classification submodel
    • Fix issue with evaluating when there are gaps in classes
    • Add configuration (currently only for anchor settings)
    • Refactor how annotations are generated in the generators
    • Use CPU to convert model
    • Update to keras 2.2.4
    • Add NCHW support

    Credits to @adreo00 @borakrc @yecharlie @ddowling @enricoliscio @hgaiser @baek-jinoo @de-vri-es @penguinmenac3 Morten Back Nielsen @relh @vcarpani

  • 0.4.1(Jul 18, 2018)

    Changes since last release

    • Optimizations for generators
    • Improved documentation.
    • OID Challenge 2018 support.
    • Keras version bumped to 2.2.0.
    • Add option for class specific filtering (NMS).
    • Add flake8 for code testing.
    • Merged COCO and non-COCO evaluation scripts.
    • Correct image preprocessing for MobileNet and DenseNet.

    Credits to: @apacha @hgaiser @de-vri-es @lvaleriu @cclauss @HolyGuacamole @leonardvandriel @PhilippMarquardt @vcarpani

    resnet50_coco_best_v2.1.0.h5(145.58 MB)
  • 0.3.1(May 12, 2018)

    Changes since last release

    • Implement DenseNet, VGG backbones.
    • Add option to freeze backbone layers.
    • Add logging of evaluation to tensorboard.
    • Add pretty colors for 80 classes.
    • Fix batch_size > 1 issues.
    • Refactor model outputs (should hopefully stay like this now).
    • Simplified training by splitting into "training model" and "inference model".
    • Add structure for backbone specific functions (such as load_model).
    • Encode regression as x1/y1/x2/y2 offsets (increases mAP to 0.350, previously 0.345).
    • Use nearest upsampling method.

    Credits to: @vidosits @cgratie @DiegoAgher @eduramiba @GuillaumeErhard @Muhannes @hgaiser @iver56 @jjiunlin @srslynow @de-vri-es @Ori226 @pedroconceicao @pderian @rodrigo2019 @lvaleriu @yhenon

    resnet50_coco_best_v2.1.0.h5(145.58 MB)
  • 0.2(Mar 3, 2018)

    Changes since last release

    • Corrected FPN architecture as per paper.
    • Set default image size to minimum of 800px.
    • Change NMS to perform per-class NMS.
    • Small correction for bbox transform.
    • Add OID data generator.
    • Change default NMS threshold to 0.5.
    • Add MobileNet backbone.
    • Add tensorboard callback.
    • Add tool for debugging datasets.
    • Improve speed of data augmentation methods.
    • Add ability to resume training.
    • Add evaluation tool for custom datasets (only computes mAP at the moment).
    • Add skip_mismatch to weights loading, allows transfer learning from pretrained COCO model.

    Credits to: @awilliamson @hgaiser @de-vri-es @mxvs @wassname @mkocabas @lvaleriu

    resnet50_coco_best_v2.0.1.h5(145.58 MB)
    resnet50_coco_best_v2.0.2.h5(145.58 MB)
    resnet50_coco_best_v2.0.3.h5(145.58 MB)