Object Pose Estimation Demo

A complete end-to-end demonstration in which we collect training data in Unity and use that data to train a deep neural network to predict the pose of a cube. This model is then deployed in a simulated robotic pick-and-place task.

Overview
This tutorial will go through the steps necessary to perform pose estimation with a UR3 robotic arm in Unity. You’ll gain experience integrating ROS with Unity, importing URDF models, collecting labeled training data, and training and deploying a deep learning model. By the end of this tutorial, you will be able to perform pick-and-place with a robot arm in Unity, using computer vision to perceive the object the robot picks up.

Want to skip the tutorial and run the full demo? Check out our Quick Demo.

Want to skip the tutorial and focus on collecting training data for the deep learning model? Check out our Quick Data-Collection Demo.

Note: This project has been developed with Python 3 and ROS Noetic.

Table of Contents


Part 1: Create Unity Scene with Imported URDF

This part includes downloading and installing the Unity Editor, setting up a basic Unity scene, and importing a robot. We will import the UR3 robot arm using the URDF Importer package.


Part 2: Setup the Scene for Data Collection

This part focuses on setting up the scene for data collection using the Unity Computer Vision Perception Package. You will learn how to use Perception Package Randomizers to randomize aspects of the scene in order to create variety in the training data.
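
The Randomizers themselves are C# components provided by the Perception Package. Purely as a conceptual illustration of domain randomization (not the Perception API, and with illustrative parameter names and ranges), the idea is to draw a fresh set of scene parameters for every captured frame:

    import random

    def randomize_scene_parameters():
        """Draw a new set of scene parameters for one data-collection iteration.
        Parameter names and ranges are illustrative only."""
        return {
            # Random cube pose on the work surface
            "cube_position": (random.uniform(-0.3, 0.3), 0.0, random.uniform(-0.3, 0.3)),
            "cube_rotation_y_deg": random.uniform(0.0, 360.0),
            # Random lighting so the captured images vary in appearance
            "light_intensity": random.uniform(0.5, 1.5),
            "light_color_rgb": tuple(random.uniform(0.7, 1.0) for _ in range(3)),
        }

    # Each captured frame uses a freshly randomized configuration.
    for iteration in range(3):
        print(randomize_scene_parameters())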

If you would like to learn more about Randomizers, and apply domain randomization to this scene more thoroughly, check out our further exercises for the reader here.


Part 3: Data Collection and Model Training

This part includes running data collection with the Perception Package, and using that data to train a deep learning model. The training step can take some time. If you'd like, you can skip that step by using our pre-trained model.
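
For readers who want a concrete picture of the training step, here is a minimal sketch, not the repository's exact training code: a VGG16 backbone with two regression heads, one for the cube's translation and one for its orientation, trained on the labeled captures. The layer sizes and loss below are illustrative assumptions.

    import torch
    import torch.nn as nn
    from torchvision import models

    class PoseEstimationNet(nn.Module):
        """VGG16 feature extractor with translation and orientation regression heads."""
        def __init__(self):
            super().__init__()
            self.backbone = models.vgg16(pretrained=True).features
            self.translation_head = nn.Sequential(
                nn.Flatten(), nn.Linear(512 * 7 * 7, 256), nn.ReLU(), nn.Linear(256, 3))
            self.orientation_head = nn.Sequential(
                nn.Flatten(), nn.Linear(512 * 7 * 7, 256), nn.ReLU(), nn.Linear(256, 4))

        def forward(self, x):                     # x: (N, 3, 224, 224) RGB batch
            features = self.backbone(x)           # -> (N, 512, 7, 7)
            return self.translation_head(features), self.orientation_head(features)

    def training_step(model, images, gt_translation, gt_orientation, optimizer):
        """One optimization step on a batch of labeled captures."""
        optimizer.zero_grad()
        pred_t, pred_q = model(images)
        loss = nn.functional.mse_loss(pred_t, gt_translation) + \
               nn.functional.mse_loss(pred_q, gt_orientation)
        loss.backward()
        optimizer.step()
        return loss.item()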

To measure the success of grasping in simulation using our pre-trained model for pose estimation, we ran 100 trials and got the following results (Percent Success = Success / (Success + Failures)):

  Condition            Success   Failures   Percent Success
  Without occlusion    82        5          94
  With occlusion       7         6          54
  All                  89        11         89

Note: Data for the above experiment was collected in Unity 2020.2.1f1.


Part 4: Pick-and-Place

This part includes the preparation and setup necessary to run a pick-and-place task using MoveIt. Here, the cube pose is predicted by the trained deep learning model. Steps covered include:

  • Creating and invoking a motion planning service in ROS
  • Sending captured RGB images from our scene to the ROS Pose Estimation node for inference
  • Using a Python script to run inference on our trained deep learning model (a minimal sketch of this step follows the list)
  • Moving Unity Articulation Bodies based on a calculated trajectory
  • Controlling a gripping tool to successfully grasp and drop an object.
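
As a rough illustration of the inference step only, not the repository's exact pose_estimation_script.py, the ROS-side script loads the trained network, preprocesses the RGB image received from Unity, and returns the predicted position and orientation of the cube. The preprocessing and helper names below are assumptions:

    import torch
    from PIL import Image
    from torchvision import transforms

    # Assumed preprocessing: resize to the 224x224 input the network expects.
    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])

    def estimate_pose(model, image_path, device="cpu"):
        """Run the trained pose-estimation network on a single RGB capture."""
        image = preprocess(Image.open(image_path).convert("RGB"))
        batch = image.unsqueeze(0).to(device)          # (1, 3, 224, 224)
        model = model.to(device).eval()
        with torch.no_grad():
            translation, orientation = model(batch)
        # Bring results back to the CPU before handing them to the ROS response
        # (see the GPU-related issues further down this page).
        return translation.cpu().numpy().squeeze(), orientation.cpu().numpy().squeeze()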

Support

For general questions, feedback, or feature requests, connect directly with the Robotics team at [email protected].

For bugs or other issues, please file a GitHub issue and the Robotics team will investigate the issue as soon as possible.

More from Unity Robotics

Visit the Robotics Hub for more tutorials, tools, and information on robotics simulation in Unity!

License

Apache License 2.0

Comments
  • Do I have the possibility to run the training by google colab?

    Because I don't have a machine powerful enough to train the VGG16 CNN, I would like to run Part 3 of the tutorial on Google's GPUs. Is it possible to run it in that environment? If so, could you explain how?

    Thanks :)

    opened by RockStheff 14
  • Not compatible with latest perception sdk build 0.8.0-preview.3

    While importing the scene, PoseEstimationScenario.cs has errors:

    1. Assets/TutorialAssets/Scripts/PoseEstimationScenario.cs(27,26): error CS0507: 'PoseEstimationScenario.isIterationComplete': cannot change access modifiers when overriding 'protected' inherited member 'ScenarioBase.isIterationComplete'
    2. Assets/TutorialAssets/Scripts/PoseEstimationScenario.cs(28,26): error CS0507: 'PoseEstimationScenario.isScenarioComplete': cannot change access modifiers when overriding 'protected' inherited member 'ScenarioBase.isScenarioComplete'
    3. Assets/TutorialAssets/Scripts/PoseEstimationScenario.cs(10,14): error CS0534: 'PoseEstimationScenario' does not implement inherited abstract member 'ScenarioBase.isScenarioReadyToStart.get'

    Modified the file to fix these errors and generated data. Noticed that metrics.json files are not being created; only captures.json files are. When starting training, it gives an error partway through while looking for metrics data. Had to revert to the 0.7.0-preview.2 build to regenerate data for training.

    Can this be updated to be compatible with the Perception 0.8.0-preview.3 (or latest) build? It is suspected that the previous builds had a bug that produced erroneous bounding box data, which led to poor training results for pose estimation.

    opened by arunabaijal 12
  • Problems in step 2 of the tutorial - Add and Set Up Randomizers

    Hi there, I was trying to follow the steps of the tutorial and came across an impasse. More specifically, in Part 2, when I search for the C# scripts to add them to the "Simulation Scenario" GameObject, they appear in the search bar as "not found".

    (screenshot attached)

    I tried testing with different versions of the Unity Editor and did not succeed. Starting from the "Domain Randomization" section, some C# scripts are not being recognized as a component on the given GameObject. Could you steer me somehow? Thank you in advance.

    opened by RockStheff 6
  • fixed gpu error

    The #52 modification still causes the following error: "Error processing request: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first."

    So we need to convert output type to cpu from gpu.
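
    For illustration only (variable names here are illustrative, not necessarily the exact repository lines), the change amounts to copying the model output to host memory before the NumPy conversion:

        # A CUDA tensor cannot be converted to NumPy directly; .cpu() copies
        # it to host memory first (names are illustrative).
        output_translation, output_orientation = model(batch)
        est_position = output_translation.cpu().detach().numpy()
        est_rotation = output_orientation.cpu().detach().numpy()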

    Note: If you use the Docker environment, please add the --gpus all option.

    docker run -it --rm --gpus all -p 10000:10000 -p 5005:5005 unity-robotics:pose-estimation /bin/bash
    

    You can also use the nvidia-smi command inside Docker to check whether the GPU is enabled.

    opened by adakoda 5
  • How to add custom messages to the ROS-Unity communication

    It would be great if you could briefly show us how to add custom ROS messages to the system. For example, I'm trying to stream camera images from Unity to ROS.

    opened by tensarflow 5
  • Pose Estimation not working correctly

    Describe the bug

    The pose estimation is not executed correctly. I get an error about the model weights and the input not being on the same device. When I change this line to the following

        device = torch.device("cpu")
    

    it works fine.
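
    (For illustration, an alternative that keeps the GPU is to move the input tensor to the model's device before the forward pass, rather than forcing CPU:)

        # Keep CUDA when available; put the model and the input batch from the
        # stack trace below on the same device before running inference.
        device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
        model = model.to(device)
        output = model(torch.stack(image).reshape(-1, 3, 224, 224).to(device))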

    To Reproduce

    Used the demo Unity project, so I did not do everything in the four READMEs.

    Console logs / stack traces

    [ERROR] [1640807467.034139]: Error processing request: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
    ['Traceback (most recent call last):\n', '  File "/opt/ros/noetic/lib/python3/dist-packages/rospy/impl/tcpros_service.py", line 633, in _handle_request\n    response = convert_return_to_response(self.handler(request), self.response_class)\n', '  File "/home/ensar/Robotics-Object-Pose-Estimation/ROS/src/ur3_moveit/scripts/pose_estimation_script.py", line 96, in pose_estimation_main\n    est_position, est_rotation = _run_model(image_path)\n', '  File "/home/ensar/Robotics-Object-Pose-Estimation/ROS/src/ur3_moveit/scripts/pose_estimation_script.py", line 52, in _run_model\n    output = run_model_main(image_path, MODEL_PATH)\n', '  File "/home/ensar/Robotics-Object-Pose-Estimation/ROS/src/ur3_moveit/src/ur3_moveit/setup_and_run_model.py", line 138, in run_model_main\n    output_translation, output_orientation = model(torch.stack(image).reshape(-1, 3, 224, 224))\n', '  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl\n    result = self.forward(*input, **kwargs)\n', '  File "/home/ensar/Robotics-Object-Pose-Estimation/ROS/src/ur3_moveit/src/ur3_moveit/setup_and_run_model.py", line 54, in forward\n    x = self.model_backbone(x)\n', '  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl\n    result = self.forward(*input, **kwargs)\n', '  File "/usr/local/lib/python3.8/dist-packages/torchvision/models/vgg.py", line 43, in forward\n    x = self.features(x)\n', '  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl\n    result = self.forward(*input, **kwargs)\n', '  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/container.py", line 117, in forward\n    input = module(input)\n', '  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl\n    result = self.forward(*input, **kwargs)\n', '  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/conv.py", line 423, in forward\n    return self._conv_forward(input, self.weight)\n', '  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/conv.py", line 419, in _conv_forward\n    return F.conv2d(input, weight, self.bias, self.stride,\n', 'RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same\n']
    
    

    Expected behavior

    A working pose estimation.

    Environment (please complete the following information, where applicable):

    • Unity Version: Unity 2020.2.7f1 (the demo project uses 2020.2.6f1, an older version)
    • Unity machine OS + version: Ubuntu 20.04
    • ROS machine OS + version: Ubuntu 20.04, ROS Noetic
    • ROS–Unity communication: I installed the ROS environment as described in Part 0
    • Package branches or versions: Version 0.8.0-preview.3 - March 24, 2021
    opened by tensarflow 5
  • Use TGS solver

    Proposed change(s)

    Ignore the collisions on the inner knuckles so that the TGS solver will work.

    Fix a bug related to Ubuntu package installation when building the docker image. [Issue]

    Types of change(s)

    • [x] Bug fix
    • [ ] New feature
    • [ ] Code refactor
    • [ ] Documentation update
    • [x] Other: enable to use TGS solver

    Testing and Verification

    Tested the Pose Estimation Quick Demo with the TGS solver

    Test Configuration:

    • Unity Version: Unity 2020.2.6f1

    https://user-images.githubusercontent.com/56408141/120538136-f8b62a80-c39a-11eb-854d-b00a9acfc77e.mov

    Checklist

    • [x] Ensured this PR is up-to-date with the target branch
    • [x] Followed the style guidelines as described in the Contribution Guidelines
    • [x] Added tests that prove my fix is effective or that my feature works
    • [x] Updated the Changelog and described changes in the Unreleased section
    • [x] Updated the documentation as appropriate

    Other comments

    opened by peifeng-unity 5
  • Could NOT find ros_tcp_endpoint

    In Pick-and-Place with Object Pose Estimation: Quick Demo, Set Up the ROS Side, Step 2, I ran "docker build -t unity-robotics:pose-estimation -f docker/Dockerfile ." and it shows an error. What should I do? Thanks!

    E:\UnityProjects\2020\Robotics-Object-Pose-Estimation>docker build -t unity-robotics:pose-estimation -f docker/Dockerfile .
    [+] Building 14.2s (17/18)
     => [internal] load build definition from Dockerfile  0.1s
     => => transferring dockerfile: 1.41kB  0.0s
     => [internal] load .dockerignore  0.0s
     => => transferring context: 2B  0.0s
     => [internal] load metadata for docker.io/library/ros:noetic-ros-base  5.1s
     => [internal] load build context  2.0s
     => => transferring context: 110.51MB  1.9s
     => [ 1/14] FROM docker.io/library/ros:noetic-ros-base@sha256:68085c6624824d5ad276450d21377d34dccdc75785707f244a9  0.0s
     => CACHED [ 2/14] RUN sudo apt-get update && sudo apt-get install -y vim iputils-ping net-tools python3-pip ros-  0.0s
     => CACHED [ 3/14] RUN sudo -H pip3 --no-cache-dir install rospkg numpy jsonpickle scipy easydict torch==1.7.1+cu  0.0s
     => CACHED [ 4/14] WORKDIR /catkin_ws  0.0s
     => CACHED [ 5/14] COPY ./ROS/src/moveit_msgs /catkin_ws/src/moveit_msgs  0.0s
     => CACHED [ 6/14] COPY ./ROS/src/robotiq /catkin_ws/src/robotiq  0.0s
     => CACHED [ 7/14] COPY ./ROS/src/ros_tcp_endpoint /catkin_ws/src/ros_tcp_endpoint  0.0s
     => CACHED [ 8/14] COPY ./ROS/src/universal_robot /catkin_ws/src/universal_robot  0.0s
     => [ 9/14] COPY ./ROS/src/ur3_moveit /catkin_ws/src/ur3_moveit  1.1s
     => [10/14] COPY ./docker/set-up-workspace /setup.sh  0.1s
     => [11/14] COPY docker/tutorial /  0.1s
     => [12/14] RUN /bin/bash -c "find /catkin_ws -type f -print0 | xargs -0 dos2unix"  1.0s
     => ERROR [13/14] RUN dos2unix /tutorial && dos2unix /setup.sh && chmod +x /setup.sh && /setup.sh && rm /setup.sh  4.8s

    [13/14] RUN dos2unix /tutorial && dos2unix /setup.sh && chmod +x /setup.sh && /setup.sh && rm /setup.sh:
    #17 0.402 dos2unix: converting file /tutorial to Unix format...
    #17 0.406 dos2unix: converting file /setup.sh to Unix format...
    #17 1.304 -- The C compiler identification is GNU 9.3.0
    #17 1.548 -- The CXX compiler identification is GNU 9.3.0
    #17 1.567 -- Check for working C compiler: /usr/bin/cc
    #17 1.694 -- Check for working C compiler: /usr/bin/cc -- works
    #17 1.696 -- Detecting C compiler ABI info
    #17 1.779 -- Detecting C compiler ABI info - done
    #17 1.799 -- Detecting C compile features
    #17 1.800 -- Detecting C compile features - done
    #17 1.806 -- Check for working CXX compiler: /usr/bin/c++
    #17 1.895 -- Check for working CXX compiler: /usr/bin/c++ -- works
    #17 1.897 -- Detecting CXX compiler ABI info
    #17 1.987 -- Detecting CXX compiler ABI info - done
    #17 2.007 -- Detecting CXX compile features
    #17 2.008 -- Detecting CXX compile features - done
    #17 2.376 -- Using CATKIN_DEVEL_PREFIX: /catkin_ws/devel
    #17 2.377 -- Using CMAKE_PREFIX_PATH: /opt/ros/noetic
    #17 2.377 -- This workspace overlays: /opt/ros/noetic
    #17 2.408 -- Found PythonInterp: /usr/bin/python3 (found suitable version "3.8.5", minimum required is "3")
    #17 2.409 -- Using PYTHON_EXECUTABLE: /usr/bin/python3
    #17 2.409 -- Using Debian Python package layout
    #17 2.447 -- Found PY_em: /usr/lib/python3/dist-packages/em.py
    #17 2.447 -- Using empy: /usr/lib/python3/dist-packages/em.py
    #17 2.585 -- Using CATKIN_ENABLE_TESTING: ON
    #17 2.585 -- Call enable_testing()
    #17 2.588 -- Using CATKIN_TEST_RESULTS_DIR: /catkin_ws/build/test_results
    #17 3.003 -- Forcing gtest/gmock from source, though one was otherwise available.
    #17 3.003 -- Found gtest sources under '/usr/src/googletest': gtests will be built
    #17 3.003 -- Found gmock sources under '/usr/src/googletest': gmock will be built
    #17 3.033 -- Found PythonInterp: /usr/bin/python3 (found version "3.8.5")
    #17 3.036 -- Found Threads: TRUE
    #17 3.052 -- Using Python nosetests: /usr/bin/nosetests3
    #17 3.119 -- catkin 0.8.9
    #17 3.119 -- BUILD_SHARED_LIBS is on
    #17 3.289 -- BUILD_SHARED_LIBS is on
    #17 3.289 -- Using CATKIN_WHITELIST_PACKAGES: moveit_msgs;ros_tcp_endpoint;ur3_moveit;robotiq_2f_140_gripper_visualization;ur_description;ur_gazebo
    #17 4.211 -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    #17 4.211 -- ~~ traversing 1 packages in topological order:
    #17 4.211 -- ~~ - ur3_moveit
    #17 4.211 -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    #17 4.212 -- +++ processing catkin package: 'ur3_moveit'
    #17 4.212 -- ==> add_subdirectory(ur3_moveit)
    #17 4.771 -- Could NOT find ros_tcp_endpoint (missing: ros_tcp_endpoint_DIR)
    #17 4.771 -- Could not find the required component 'ros_tcp_endpoint'. The following CMake error indicates that you either need to install the package with the same name or change your environment so that it can be found.
    #17 4.771 CMake Error at /opt/ros/noetic/share/catkin/cmake/catkinConfig.cmake:83 (find_package):
    #17 4.771 Could not find a package configuration file provided by "ros_tcp_endpoint"
    #17 4.771 with any of the following names:
    #17 4.771
    #17 4.771 ros_tcp_endpointConfig.cmake
    #17 4.771 ros_tcp_endpoint-config.cmake
    #17 4.771
    #17 4.771 Add the installation prefix of "ros_tcp_endpoint" to CMAKE_PREFIX_PATH or
    #17 4.771 set "ros_tcp_endpoint_DIR" to a directory containing one of the above
    #17 4.771 files. If "ros_tcp_endpoint" provides a separate development package or
    #17 4.771 SDK, be sure it has been installed.
    #17 4.771 Call Stack (most recent call first):
    #17 4.771 ur3_moveit/CMakeLists.txt:13 (find_package)
    #17 4.771
    #17 4.772
    #17 4.775 -- Configuring incomplete, errors occurred!
    #17 4.775 See also "/catkin_ws/build/CMakeFiles/CMakeOutput.log".
    #17 4.775 See also "/catkin_ws/build/CMakeFiles/CMakeError.log".
    #17 4.782 Base path: /catkin_ws
    #17 4.782 Source space: /catkin_ws/src
    #17 4.782 Build space: /catkin_ws/build
    #17 4.782 Devel space: /catkin_ws/devel
    #17 4.782 Install space: /catkin_ws/install
    #17 4.782 Creating symlink "/catkin_ws/src/CMakeLists.txt" pointing to "/opt/ros/noetic/share/catkin/cmake/toplevel.cmake"
    #17 4.782 ####
    #17 4.782 #### Running command: "cmake /catkin_ws/src -DCATKIN_WHITELIST_PACKAGES=moveit_msgs;ros_tcp_endpoint;ur3_moveit;robotiq_2f_140_gripper_visualization;ur_description;ur_gazebo -DCATKIN_DEVEL_PREFIX=/catkin_ws/devel -DCMAKE_INSTALL_PREFIX=/catkin_ws/install -G Unix Makefiles" in "/catkin_ws/build"
    #17 4.782 ####
    #17 4.782 Invoking "cmake" failed

    executor failed running [/bin/sh -c dos2unix /tutorial && dos2unix /setup.sh && chmod +x /setup.sh && /setup.sh && rm /setup.sh]: exit code: 1

    opened by JoSharon 5
  • System.Net.SocketException: Address already in use

    Hello Team,

    I'm getting the System.Net.SocketException: Address already in use error from the Unity console.

    The troubleshooting workaround of leaving the Override Unity IP Address blank and changing the ROS IP Address to the IP of your Docker container didn't fix the error.

    Docker IP configuration:

    root@d283c453367f:/catkin_ws# ifconfig 
    eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 172.17.0.3  netmask 255.255.0.0  broadcast 172.17.255.255
            ether 02:42:ac:11:00:03  txqueuelen 0  (Ethernet)
            RX packets 179  bytes 24664 (24.6 KB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 61  bytes 4008 (4.0 KB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
            inet 127.0.0.1  netmask 255.0.0.0
            loop  txqueuelen 1000  (Local Loopback)
            RX packets 53259  bytes 14479754 (14.4 MB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 53259  bytes 14479754 (14.4 MB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    

    (Screenshot: Unity IP configuration)

    Regards, Jegathesan S

    opened by nullbyte91 5
  • Cube not rotating

    Hello, thank you for the very helpful tutorial; I'm currently going through it. In Part 2, I've followed the tutorial up to step 10 without errors. In step 10, the cube is overlaid with a green bounding box; however, it is not rotating. Any idea what could be the problem? I'm using Unity 2020.2.0f1.

    The following is a screenshot of my editor (attached).

    And if I continue to step 11, the same thing happens: the cube moves to a position and then stops moving, as shown in the attached photo.
    opened by ZahraaBass 5
  • Error: arm/arm: Unable to sample any valid states for goal tree

    Hello there, I am trying to build the robotics-object-pose-estimation project on my local machine, but after running the ROS server and clicking the Pose Estimation button in Unity, it returns the error "Error: arm/arm: Unable to sample any valid states for goal tree". Any help? Thanks.

    Console logs / stack traces

    [ERROR] [1663850579.670529300]: arm/arm: Unable to sample any valid states for goal tree

    Screenshots

    (screenshot attached)

    Environment (please complete the following information, where applicable):

    • Unity Version: [e.g. Unity 2021.3.9f1]
    • Unity machine OS + version: [e.g. Windows 11]
    • ROS machine OS + version: [e.g. Ubuntu 18.04, ROS Noetic]
    • ROS–Unity communication: [e.g. Docker]
    • Package branches or versions: [e.g. [email protected]]
    opened by waedbara 4
  • ROS failed when I changed the camera rotation

    Describe the bug

    (screenshot attached)

    To Reproduce

    Steps to reproduce the behavior: just change the camera rotation as in the attached screenshot; the default value is 20.

    Additional context

    I don't know why it works fine with the default camera, but when I change its rotation, it fails.

    opened by BaoLocPham 0
  • Problems when building docker image

    I am getting this error when building the Docker image, on both Windows and Ubuntu. I am attaching a screenshot of the error. I have followed all the steps.

    Any suggestion on how to solve this issue?

    opened by dipinoch 0
  • A lot of pick-up errors

    Hi,

    In my build the robot almost never succeeds in picking up the cube. Even though I get the shell message "You can start planning", I've noticed three ERRORS in the Docker workspace:

    1. [controller_spawner-3]
    2. [ERROR] [1650563249.826889700]: Could not find the planner configuration 'None' on the param server
    3. [ERROR] [1650563266.917313200]: Action client not connected: /follow_joint_trajectory

    Are any of these possibly related?

    Thank you very much for your time.

    opened by andrecavalcante 1
  • The Cube label for data collection is misplaced in a weird way

    Describe the bug

    The Cube label is misplaced in a weird way.

    To Reproduce

    Steps to reproduce the behavior:

    Just running the demo project with the Perception camera turned on (I was trying to collect images for model training).

    Screenshots

    (screenshot attached)

    Environment:

    • Unity Version: e.g. Unity 2020.2.6f1 (As suggested)
    • Unity machine OS + version: MacOS 12.1
    • ROS machine OS + version: As suggested
    • ROS–Unity communication: Docker
    • Package branches or versions: As suggested
    stale 
    opened by nkdchck 5