Bonnet: An Open-Source Training and Deployment Framework for Semantic Segmentation in Robotics.

Overview

By Andres Milioto @ University of Bonn.

(For the new PyTorch version, go here.)

Example results (images): Cityscapes urban scene understanding, person segmentation, and crop vs. weed semantic segmentation.

Description

This code provides a framework to easily add architectures and datasets in order to train and deploy CNNs on a robot. It contains a full training pipeline in Python using TensorFlow and OpenCV, as well as C++ apps to deploy a frozen protobuf both in ROS and standalone. The C++ library is written so that other backends (such as TensorRT and MvNCS) can be added, but only TensorFlow and TensorRT are implemented for now. We will keep it this way for the time being because we are mostly interested in deployment on the Jetson and Drive platforms, but if you have a specific need, we accept pull requests!

The networks included are based on many other architectures (see below), but are not an exact copy of any of them. As seen in the videos, they run very fast on both GPU and CPU, and they are designed with performance in mind, at the cost of a slight accuracy loss. Feel free to use them as a template to implement your own architecture.

All scripts have been tested on the following configurations:

  • x86 Ubuntu 16.04 with an NVIDIA GeForce 940MX GPU (nvidia-384, CUDA9, CUDNN7, TF 1.7, TensorRT3)
  • x86 Ubuntu 16.04 with an NVIDIA GTX1080Ti GPU (nvidia-375, CUDA9, CUDNN7, TF 1.7, TensorRT3)
  • x86 Ubuntu 16.04 and 14.04 with no GPU (TF 1.7, running on CPU in NHWC mode, no TensorRT support)
  • Jetson TX2 (full Jetpack 3.2)

We also provide a Dockerfile to make it easy to run Bonnet without worrying about the dependencies; it is based on the official nvidia/cuda image containing CUDA 9 and cuDNN 7. In order to build and run this image with support for X11 (to display the results), run the following in the repo root directory (use nvidia-docker instead of vanilla docker):

  $ docker pull tano297/bonnet:cuda9-cudnn7-tf17-trt304
  $ nvidia-docker build -t bonnet .
  $ nvidia-docker run -ti --rm -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix -v $HOME/.Xauthority:/home/developer/.Xauthority -v /home/$USER/data:/shared --net=host --pid=host --ipc=host bonnet /bin/bash

-v /home/$USER/data:/shared can be replaced to point to wherever you store your data and trained models, in order to make the data available inside the container for inference/training.

Deployment

  • /deploy_cpp contains the C++ code for on-robot deployment of the full pipeline, which takes an image as input and produces the pixel-wise predictions and the color masks (which depend on the problem) as output. It includes both a standalone application, meant as an example of usage and building, and a ROS node which subscribes to an image topic and publishes two topics: the labeled mask and the colored labeled mask. A minimal inference sketch is shown after this list.

  • Readme here
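
To give a feel for what the deployer consumes, here is a minimal Python sketch of running a frozen protobuf with the TensorFlow 1.x API. The file name, the output tensor name and the lack of preprocessing are assumptions made purely for illustration (input_node appears in the deployer, but verify the exact names in your frozen graph).

  import cv2
  import tensorflow as tf

  # Load the frozen protobuf exported by the training pipeline.
  graph_def = tf.GraphDef()
  with tf.gfile.GFile("frozen.pb", "rb") as f:
      graph_def.ParseFromString(f.read())

  graph = tf.Graph()
  with graph.as_default():
      tf.import_graph_def(graph_def, name="")

  # Hypothetical tensor names; check the frozen graph for the actual ones.
  inp = graph.get_tensor_by_name("input_node:0")
  out = graph.get_tensor_by_name("output_node:0")

  img = cv2.imread("example.png")  # BGR, uint8
  with tf.Session(graph=graph) as sess:
      mask = sess.run(out, feed_dict={inp: [img]})  # pixel-wise predictions

The C++ apps in /deploy_cpp do the equivalent through the TensorFlow C++ API or TensorRT.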

Training

  • /train_py contains the Python code to easily build CNN graphs in TensorFlow, train them, and generate the trained models used for deployment. This way the interface with TensorFlow can use the more complete Python API, and we can easily work with files to augment datasets and so on. It also contains some apps for using the models, including the ability to save and use a frozen protobuf, and to run the network with TensorRT, which reduces inference time on NVIDIA GPUs. A sketch of the freezing step is shown after this list.

  • Readme here
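
As a rough sketch of the freezing step mentioned above, using the TensorFlow 1.x API (the session, output node names and paths are placeholders, not the repo's actual interface):

  import tensorflow as tf

  def freeze(sess, output_node_names, out_dir, out_name="frozen.pb"):
      # Convert variables to constants so the graph is self-contained,
      # then serialize it as a binary protobuf for the C++/TensorRT deployers.
      frozen = tf.graph_util.convert_variables_to_constants(
          sess, sess.graph.as_graph_def(), output_node_names)
      tf.train.write_graph(frozen, out_dir, out_name, as_text=False)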

Pre-trained models

These are some models trained on sample datasets that you can use with the trainer and deployer. If you want to take the time to write the parsers for another dataset (a yaml file with classes and colors, plus a Python script to convert the data into the standard dataset format), feel free to create a pull request; an illustrative example of such a file is sketched below.
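
Purely as an illustration of what such a dataset definition could look like (the real keys and layout expected by Bonnet's parsers may differ, so treat every name below as hypothetical):

  import yaml

  # Hypothetical dataset definition: class ids, human-readable labels, display colors.
  example = """
  name: my_dataset
  classes: 3
  labels:
    0: background
    1: crop
    2: weed
  colors:
    0: [0, 0, 0]
    1: [0, 255, 0]
    2: [255, 0, 0]
  """
  cfg = yaml.safe_load(example)
  print(cfg["labels"][1], cfg["colors"][1])  # -> crop [0, 255, 0]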

If you don't have GPUs and the task is interesting for robots to exploit, I will gladly train it whenever I have some free GPU time on our servers.

  • Cityscapes:

    • 512x256 Link
    • 768x384 Link (inception-like model)
    • 768x384 Link (mobilenets-like model)
    • 1024x512 Link
  • Synthia:

  • Persons (+coco people):

  • Crop-Weed (CWC):

License

This software

Bonnet is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

Bonnet is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

Pretrained models

The pretrained models trained on a specific dataset retain the copyright of that dataset.

Citation

If you use our framework for any academic work, please cite the corresponding paper:

@InProceedings{milioto2019icra,
author = {A. Milioto and C. Stachniss},
title = {{Bonnet: An Open-Source Training and Deployment Framework for Semantic Segmentation in Robotics using CNNs}},
booktitle = {Proc. of the IEEE Intl. Conf. on Robotics \& Automation (ICRA)},
year = 2019,
codeurl = {https://github.com/Photogrammetry-Robotics-Bonn/bonnet},
videourl = {https://www.youtube.com/watch?v=tfeFHCq6YJs},
}

Our networks are strongly based on the following architectures, so if you use them for any academic work, please take a look at their papers and cite them if you think it proper:

Other useful GitHub repositories:

  • OpenAI Checkpointed Gradients. Useful implementation of checkpointed gradients to be able to fit big models in GPU memory without sacrificing runtime.
  • Queueing tool: Very nice queueing tool to share GPU, CPU and Memory resources in a multi-GPU environment.
  • Tensorflow_cc: Very useful repo to compile Tensorflow either as a shared or static library using CMake, in order to be able to compile our C++ apps against it.

Contributors

Milioto, Andres

Special thanks to Philipp Lottes for all the work shared during the last year, and to Olga Vysotka and Susanne Wenzel for beta testing the framework :)

Acknowledgements

This work has partly been supported by the German Research Foundation under Germany's Excellence Strategy, EXC-2070 - 390732324 (PhenoRob). We also thank NVIDIA Corporation for providing a Quadro P6000 GPU partially used to develop this framework.

TODOs

  • Merge the crop-weed CNN with background knowledge into this repo.
  • Make a multi-camera ROS node that exploits batching to run inference faster than sequential processing.
  • Movidius Neural Compute Stick C++ backends (plus others as they become available).
  • Inference node that shows the classes selectively (e.g., with a Qt GUI).
Comments
  • nodes.yaml is missing?

    This project is very attractive! But when I tried it, nodes.yaml seems to be missing. Is this file missing from the release?

    I'm using the pre-trained models: cityscapes 512x256 and 1024x512.

    log:
    Successfully created log directory: log
    Unable to create network. Invalid yaml file city_512/nodes.yaml

    opened by susu3621 8
  • MaxWorkSpace

    Hey Andres,

    In the NetTRT.cpp code, the line:

    _builder->setMaxWorkspaceSize(1 << size);

    I assume this means the amount of GPU mem allocated for the network? You use size 32. What does this mean in this context?

    If i wanted to deploy two models (two .uff files), on one GPU, how do you recommend i proceed?

    opened by JadBatmobile 6
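
For context (a reader's note, not an answer from the authors): setMaxWorkspaceSize sets an upper bound on the temporary scratch GPU memory TensorRT may allocate while building and running the engine, on top of the memory needed for the weights. Each engine you build gets its own workspace, so deploying two models on one GPU requires their combined workspaces plus weights to fit in device memory. With size = 32 the shift works out to 4 GiB:

    # 1 << size with size = 32 equals 2**32 bytes, i.e. 4 GiB of TensorRT workspace.
    size = 32
    workspace_bytes = 1 << size
    print(workspace_bytes, workspace_bytes / 2**30)  # 4294967296 4.0
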
  • input_norm_and_resized_node vs input_node

    Hello,

    I am curious what the difference between input_node and input_norm_and_resized_node is.

    I see in deploy_cpp that netTF.cpp utilizes input_node, and netTRT.cpp utilizes input_norm_and_resized_node.

    opened by JadBatmobile 6
  • ros catkin build can't find TensorRT

    Hi, I was trying to deploy bonnet on an NVIDIA DRIVE PX2. I think TensorRT is already installed on the PX2, but when I tried catkin build, it complained that it didn't find TensorFlow or TensorRT. Is there any specific setup I need to do to fix this problem?

    tensorrt

    opened by cshort101 6
  • Steps to use ROS node with docker image?

    Hi,

    I am using the docker image. I don't quite understand how to get the ROS node running.

    $ roslaunch bonnet_run bonnet_run.launch
    ...
    [ INFO] [1525824328.942834758]: Successfully launched node.
    Unable to create network. 
    Invalid yaml file /home/developer/Desktop/frozen_trial//train.yaml
    [ERROR] [1525824328.942988464]: SOMETHING WENT WRONG INITIALIZING CNN. EXITING
    [bonnet_node-2] process has died [pid 28742, exit code 1, cmd /home/developer/catkin_ws/devel/lib/bonnet_run/bonnet_node __name:=bonnet_node __log:=/home/developer/.ros/log/ae1c784a-531c-11e8-9312-38d547df6dab/bonnet_node-2.log].
    log file: /home/developer/.ros/log/ae1c784a-531c-11e8-9312-38d547df6dab/bonnet_node-2*.log
    

    It makes sense there is no model at that path, but I'm not sure how to proceed. In particular, I passed a model in through one of the args when starting docker as mentioned in the readme, but I don't know how to access this directory within the docker. If I just want to test a pre-trained model, does it make sense to download within the docker and freeze it?

    I greatly appreciate all the documentation you have so far!

    opened by mfe7 6
  • How to get datasets

    Hi, I'm trying to run this but can't figure out how to add the required datasets. Already tried it with the Cityscapes one, but it has a different structure than the one the system seems to require. Am I supposed to download them from somewhere and put them in a particular folder in a specific way?

    opened by lccatala 5
  • error: no matching function for call to ‘nvuffparser::IUffParser

    Hello,

    I am attempting to use deploy_cpp with TensorRT backend.

    I have installed TensorRT 4 successfully.

    When I run catkin_make, it throws a warning that I do not have tensorflow_cc, but TensorRT is found. I am under the impression that should be enough?

    The following is what I am seeing: /home/ubuntu/bonnet_ws/src/bonnet/deploy_cpp/src/lib/src/netTRT.cpp:113:56: error: no matching function for call to ‘nvuffparser::IUffParser::registerInput(const char*, nvinfer1::DimsCHW&)’ _parser->registerInput(_input_node.c_str(), inputDims);

    enhancement 
    opened by JadBatmobile 5
  • [build] Error: Unable to find source space `/bonnet/src`

    I run the following commands

    $ cd bonnet/deploy_cpp
    $ catkin init
    $ catkin build bonnet_standalone

    Trying to build, I get the following:

    Source Space: [missing] /bonnet/src
    Log Space: [missing] /bonnet/logs
    Build Space: [missing] /bonnet/build
    Devel Space: [missing] /bonnet/devel
    Install Space: [unused] /bonnet/install
    DESTDIR: [unused] None

    opened by AhmedElsafy 5
  • Using /gpu:1

    Hi, I'm currently building the bonnet application on a PX2, but I'm not using ROS. At the moment I can install bonnet, but when I run it there is a problem with the GPU device. On the PX2 we have 2 GPUs: an iGPU and a dGPU. The iGPU is the only one that supports fp16 and it is located at /gpu:1. Is there any way to make Bonnet use /gpu:1? At the moment it only accepts /gpu:0.

    Best regards Thanh

    opened by thanhngotud 4
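
A generic TensorFlow/CUDA workaround (not Bonnet-specific, and worth verifying on the PX2's iGPU/dGPU setup) is to hide the unwanted device via CUDA_VISIBLE_DEVICES, so that the desired GPU is the only one TensorFlow sees and gets mapped to /gpu:0:

    import os

    # Expose only physical GPU 1 to this process; TensorFlow will see it as /gpu:0.
    # Must be set before TensorFlow (and CUDA) are initialized.
    os.environ["CUDA_VISIBLE_DEVICES"] = "1"

    import tensorflow as tf  # imported after setting the variable
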
  • Jetson TX1

    I was able to run the inference using the Python files on a host PC and on the Jetson. However, I could not run the TensorRT-optimised cnn_use_pb_tRT on the Jetson because the TensorRT Python libraries aren't available for the arm64 environment of the Jetson. Just to clarify, to actually compile deploy_cpp I need to:

    1. Build Bazel from source for arm64
    2. Build the tensorflow_cc external install for arm64
    3. Then use catkin to build the executable, right? Thank you.
    opened by 181charan 4
  • roslaunch bonnet_run bonnet_run.launch error?

    This is the docker image I run. When I run roslaunch bonnet_run bonnet_run.launch I get:

    Unable to create network. Invalid yaml file /shared/city_512//nodes.yaml
    [ERROR] [1526639307.186656472]: SOMETHING WENT WRONG INITIALIZING CNN. EXITING
    [bonnet_node-2] process has died [pid 27831, exit code 1, cmd /home/developer/catkin_ws/devel/lib/bonnet_run/bonnet_node __name:=bonnet_node __log:=/home/developer/.ros/log/32c8b5d4-5a86-11e8-b58d-507b9deaef1f/bonnet_node-2.log].
    log file: /home/developer/.ros/log/32c8b5d4-5a86-11e8-b58d-507b9deaef1f/bonnet_node-2*.log

    opened by ytgcljj 4
  • Bump tensorflow from 1.7.0 to 2.9.3 in /train_py

    Bumps tensorflow from 1.7.0 to 2.9.3.

    dependencies 
    opened by dependabot[bot] 0
  • Bump tensorflow-gpu from 1.7.0 to 2.9.3 in /train_py

    Bumps tensorflow-gpu from 1.7.0 to 2.9.3.

    dependencies 
    opened by dependabot[bot] 0
  • Bump numpy from 1.14.0 to 1.22.0 in /train_py

    Bumps numpy from 1.14.0 to 1.22.0.

    dependencies 
    opened by dependabot[bot] 0
  • Error creating log directory. Check permissions! Exiting...

    ./cnn_train.py -d data.yaml -n net.yaml -t train.yaml -l /home/wuxin/YZH/object_detect/bonnet/train_py/log

    INTERFACE:
    data yaml: data.yaml
    net yaml: net.yaml
    train yaml: train.yaml
    log dir /home/wuxin/YZH/object_detect/bonnet/train_py/log
    model path None
    model type iou

    Commit hash (training version): b'7bf03b0'

    Opening desired data file data.yaml
    Opening desired net file net.yaml
    Opening desired train file train.yaml
    Error creating log directory. Check permissions! Exiting...

    What should I do?

    opened by Wuxinxiaoshifu 0
  • Bump opencv-python from 3.4.0.12 to 4.2.0.32 in /train_py

    Bumps opencv-python from 3.4.0.12 to 4.2.0.32.

    dependencies 
    opened by dependabot[bot] 0
  • Bump pyyaml from 3.12 to 5.4 in /train_py

    Bumps pyyaml from 3.12 to 5.4.

    dependencies 
    opened by dependabot[bot] 0
Owner
Photogrammetry & Robotics Bonn (Photogrammetry & Robotics Lab at the University of Bonn)