RGB-stacking πŸ›‘ 🟩 πŸ”· for robotic manipulation

BLOG | PAPER | VIDEO

Beyond Pick-and-Place: Tackling Robotic Stacking of Diverse Shapes,
Alex X. Lee*, Coline Devin*, Yuxiang Zhou*, Thomas Lampe*, Konstantinos Bousmalis*, Jost Tobias Springenberg*, Arunkumar Byravan, Abbas Abdolmaleki, Nimrod Gileadi, David Khosid, Claudio Fantacci, Jose Enrique Chen, Akhil Raju, Rae Jeong, Michael Neunert, Antoine Laurens, Stefano Saliceti, Federico Casarini, Martin Riedmiller, Raia Hadsell, Francesco Nori.
In Conference on Robot Learning (CoRL), 2021.

The RGB environment

This repository contains an implementation of the simulation environment described in the paper "Beyond Pick-and-Place: Tackling Robotic Stacking of Diverse Shapes". Note that this is a re-implementation of the environment (to remove dependencies on internal libraries), so not all of the features described in the paper are available at this point. Notably, domain randomization is not included in this release. We also aim to provide reference performance metrics of trained policies on this environment in the near future.

In this environment, the agent controls a robot arm with a parallel gripper above a basket, which contains three objects β€” one red, one green, and one blue, hence the name RGB. The agent's task is to stack the red object on top of the blue object, within 20 seconds, while the green object serves as an obstacle and distraction. The agent controls the robot using a 4D Cartesian controller. The controlled DOFs are x, y, z and rotation around the z axis. The simulation is a MuJoCo environment built using the Modular Manipulation (MoMa) framework.
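A minimal usage sketch (not part of the original README): it assumes the constructor and triplet names described later on this page, and that the environment follows the dm_env interface exposed by dm_control composer environments, so a valid action can be built from env.action_spec().

from rgb_stacking import environment

# Sketch under the assumptions stated above: load a test triplet and step it.
env = environment.rgb_stacking(object_triplet='rgb_test_triplet4')
spec = env.action_spec()                      # assumed dm_env BoundedArray spec
action = (spec.minimum + spec.maximum) / 2.0  # a mid-range action within bounds
timestep = env.reset()
for _ in range(20):
    timestep = env.step(action)
print('last reward:', timestep.reward)
env.close()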

Corresponding method

The RGB-stacking paper "Beyond Pick-and-Place: Tackling Robotic Stacking of Diverse Shapes" also contains a description and thorough evaluation of our initial solution to both the 'Skill Mastery' challenge (training on the 5 designated test triplets and evaluating on them) and the 'Skill Generalization' challenge (training on triplets of training objects and evaluating on the 5 test triplets). Our approach was to first train a state-based policy in simulation with a standard RL algorithm (we used MPO), followed by interactive distillation of the state-based policy into a vision-based policy (using a domain-randomized version of the environment), which we then deployed to the real robot via zero-shot sim-to-real transfer. We finally improved the policy further via offline RL (we used CRR) on data collected with the sim-to-real policy. For details on our method and the results, please consult the paper.

Installing and visualizing the environment

Please ensure that you have a working MuJoCo200 installation and a valid MuJoCo licence.

  1. Clone this repository:

    git clone https://github.com/deepmind/rgb_stacking.git
    cd rgb_stacking
  2. Prepare a Python 3 environment - venv is recommended.

    python3 -m venv rgb_stacking_venv
    source rgb_stacking_venv/bin/activate
  3. Install dependencies:

    pip install -r requirements.txt
  4. Run the environment viewer:

    python -m rgb_stacking.main

Steps 2-4 can also be done by running the run.sh script:

./run.sh

Specifying the object triplet

The default environment will load with Triplet 4 (see Sect. 3.2.1 in the paper). If you wish to use a different triplet, you can use the following code:

from rgb_stacking import environment

env = environment.rgb_stacking(object_triplet=NAME_OF_SET)

The possible values of NAME_OF_SET are:

  • rgb_test_triplet{i} where i is one of 1, 2, 3, 4, 5: Loads test triplet i.
  • rgb_test_random: Randomly loads one of the 5 test triplets.
  • rgb_train_random: A triplet composed of blocks from the training set.
  • rgb_heldout_random: A triplet composed of blocks from the held-out set.

For more information on the blocks and the possible options, please refer to the rgb_objects repository.
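As a quick illustration of these options (a sketch, assuming the strings listed above are accepted as-is by the constructor), the loop below builds an environment for each of the five designated test triplets and prints the observation keys it exposes:

from rgb_stacking import environment

# Sketch: instantiate each designated test triplet in turn.
for i in range(1, 6):
    env = environment.rgb_stacking(object_triplet=f'rgb_test_triplet{i}')
    timestep = env.reset()
    print(f'rgb_test_triplet{i}:', sorted(timestep.observation))
    env.close()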

Specifying the observation space

By default, the observations exposed by the environment are only the ones we used for training our state-based agents. To use another set of observations please use the following code snippet:

from rgb_stacking import environment

env = environment.rgb_stacking(
    observations=environment.ObservationSet.CHOSEN_SET)

The possible values of CHOSEN_SET are:

  • STATE_ONLY: Only the state observations, used for training expert policies from state in simulation (stage 1).
  • VISION_ONLY: Only image observations.
  • ALL: All observations.
  • INTERACTIVE_IMITATION_LEARNING: Pair of image observations and a subset of proprioception observations, used for interactive imitation learning (stage 2).
  • OFFLINE_POLICY_IMPROVEMENT: Pair of image observations and a subset of proprioception observations, used for the one-step offline policy improvement (stage 3).
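For example, the sketch below requests image observations only and prints the resulting observation spec. It assumes the keyword shown in the snippet above (some versions of the code use observation_set instead, as in the issue reports further down) and the dm_env observation_spec() accessor provided by the underlying composer environment.

from rgb_stacking import environment

# Sketch: build a vision-only environment and inspect what it exposes.
env = environment.rgb_stacking(
    observations=environment.ObservationSet.VISION_ONLY)  # or observation_set=... in some versions
for name, spec in env.observation_spec().items():
    print(name, spec.shape, spec.dtype)
env.close()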

Real RGB-Stacking Environment: CAD models and assembly instructions

The CAD model of the setup is available on Onshape.

We also provide the following documents for the assembly of the real cell:

  • Assembly instructions for the basket.
  • Assembly instructions for the robot.
  • Assembly instructions for the cell.
  • The bill of materials of all the necessary parts.
  • A diagram with the wiring of the cell.

The RGB-objects themselves can be 3D-printed using the STLs available in the rgb_objects repository.

Citing

If you use rgb_stacking in your work, please cite the accompanying paper:

@inproceedings{lee2021rgbstacking,
    title={Beyond Pick-and-Place: Tackling Robotic Stacking of Diverse Shapes},
    author={Alex X. Lee and
            Coline Devin and
            Yuxiang Zhou and
            Thomas Lampe and
            Konstantinos Bousmalis and
            Jost Tobias Springenberg and
            Arunkumar Byravan and
            Abbas Abdolmaleki and
            Nimrod Gileadi and
            David Khosid and
            Claudio Fantacci and
            Jose Enrique Chen and
            Akhil Raju and
            Rae Jeong and
            Michael Neunert and
            Antoine Laurens and
            Stefano Saliceti and
            Federico Casarini and
            Martin Riedmiller and
            Raia Hadsell and
            Francesco Nori},
    booktitle={Conference on Robot Learning (CoRL)},
    year={2021},
    url={https://openreview.net/forum?id=U0Q8CrtBJxJ}
}
Comments
  • Looking for information on expected speed / RAM usage


    Hi,

    Apologies if I have missed something obvious.

    When I run the rgb_stacking environment for 300 steps with STATE_ONLY observations (and a fixed action), it usually takes well over 20x longer than running 300 steps (also with a fixed action) in the Meta-World benchmark environment, which likewise uses a Sawyer arm in the MuJoCo simulator for pick-and-place (and other) tasks.

    rgb_stacking is obviously a more complicated environment/simulation, but this seemed like a lot, so I wanted to check: is this roughly the slowdown you would expect compared to other MuJoCo Sawyer-arm simulations (given the greater complexity), or does this look like an issue with my setup?

    On my machine the rgb_stacking environment also uses much more RAM, roughly 0.75 GB per instance. Is this also what you would expect?

    A couple of other questions:

    1. Does roughly 0.2-0.3 seconds per rgb_stacking simulation step sound about right, or is that very slow?
    2. Are there any settings that might be worth trying to speed up the simulation?

    Any help would be really appreciated.

    Example execution times for 300 steps (code below):

    rgb_stacking: real 1m13.886s user 1m42.632s

    meta-world: real 0m1.571s user 0m4.835s

    rgb_stacking test code:

    
    from absl import app
    import numpy as np
    from rgb_stacking import environment


    def test_run(argv):
        env = environment.rgb_stacking(
            object_triplet='rgb_test_random',
            observation_set=environment.ObservationSet.STATE_ONLY)
        step, reward, discount, obs = env.reset()
        for x in range(300):
            step, reward, discount, obs = env.step(
                np.array([-0.01, 0.01, 0.03, 0.5, 100]))
        env.close()


    if __name__ == '__main__':
        app.run(test_run)
    

    meta-world test code:

    import metaworld
    import numpy as np
    
    
    if __name__ == '__main__':
    	task_name = 'push-v2'
    	meta_world = metaworld.ML1(task_name, seed=0)
    	env = meta_world.train_classes[task_name]() 
    	task = meta_world.train_tasks[0] 
    	env.set_task(task)
    	obs = env.reset()
    	for x in range(300):
    		obs, env_reward, done, info = env.step(np.array([0.1, -0.1, 0.2, 0.01]))
    	env.close()
    
    opened by joshua-a-harris 5
  • Extract action space info


    Hi, my peers and I are really confused about how to extract the action space information from the environment, i.e. env = environment.rgb_stacking(observation_set=environment.ObservationSet.VISION_ONLY, object_triplet='rgb_test_triplet1'). Can you provide any guidance on that? env.action_space is obviously not the right way to do it.
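
    A hedged sketch of one way to query this, assuming the environment follows the standard dm_env interface used by dm_control composer environments (action_spec() rather than a Gym-style action_space attribute):

    from rgb_stacking import environment

    # Sketch: dm_env-style environments describe actions with action_spec(),
    # typically a BoundedArray with shape, dtype, minimum and maximum.
    env = environment.rgb_stacking(object_triplet='rgb_test_triplet1')
    spec = env.action_spec()
    print(spec.shape, spec.dtype)
    print(spec.minimum, spec.maximum)
    env.close()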

    opened by alfaevc 1
  • Issues with latest version of dm_control


    Problem: The latest version of dm_control does not seem to be compatible with this project. When running the provided bash script, the program crashes.

    Fix (temporary): We managed to fix this temporarily by downgrading to dm-control==0.0.364896371 in requirements.txt.

    Stdout:

    [/tmpfs/src/git/dm_robotics-kokoro/cpp/support/include/dm_robotics/support/logging.h:
    Fatal Python error: Aborted
    
    Current thread 0x00007f7c42bab740 (most recent call first):
      File "rob_project/rgb_env/lib/python3.9/site-packages/dm_robotics/moma/effectors/cartesian_6d_velocity_effector.py", line 363 in after_compile
      File "rob_project/rgb_env/lib/python3.9/site-packages/dm_robotics/moma/effectors/cartesian_4d_velocity_effector.py", line 150 in after_compile
      File "rob_project/rgb_env/lib/python3.9/site-packages/dm_robotics/moma/effectors/constrained_actions_effectors.py", line 110 in after_compile
      File "rob_project/rgb_env/lib/python3.9/site-packages/dm_robotics/moma/base_task.py", line 200 in after_compile
      File "rob_project/rgb_env/lib/python3.9/site-packages/dm_control/composer/environment.py", line 118 in after_compile
      File "rob_project/rgb_env/lib/python3.9/site-packages/dm_control/composer/environment.py", line 241 in _recompile_physics_and_update_observables
      File "rob_project/rgb_env/lib/python3.9/site-packages/dm_control/composer/environment.py", line 221 in __init__
      File "rob_project/rgb_env/lib/python3.9/site-packages/dm_control/composer/environment.py", line 324 in __init__
      File "rob_project/rgb_env/lib/python3.9/site-packages/dm_robotics/moma/subtask_env_builder.py", line 83 in build_base_env
      File "rob_project/rgb_stacking/environment.py", line 161 in rgb_stacking
      File "rob_project/rgb_stacking/main.py", line 30 in main
      File "rob_project/rgb_env/lib/python3.9/site-packages/absl/app.py", line 250 in _run_main
      File "rob_project/rgb_env/lib/python3.9/site-packages/absl/app.py", line 299 in run
      File "rob_project/rgb_stacking/main.py", line 37 in <module>
      File "/usr/lib/python3.9/runpy.py", line 87 in _run_code
      File "/usr/lib/python3.9/runpy.py", line 197 in _run_module_as_main
    [1]    47081 abort (core dumped)  python3 -m rgb_stacking.main
    

    Linux version: Linux manjaro 5.14.10-1-MANJARO #1 SMP PREEMPT Thu Oct 7 06:43:34 UTC 2021 x86_64 GNU/Linux
    Python version: 3.9.7

    opened by vandrw 1
  • Mujoco not found


    Hi, can you please advise on this import error?

    Traceback (most recent call last):
      File "/home/rishabh/anaconda3/envs/rgb_stacking/lib/python3.9/runpy.py", line 197, in _run_module_as_main return _run_code(code, main_globals, None,
      File "/home/rishabh/anaconda3/envs/rgb_stacking/lib/python3.9/runpy.py", line 87, in _run_code exec(code, run_globals)
      File "/home/rishabh/workspace/rgb_stacking/rgb_stacking/main.py", line 116, in app.run(main)
      File "/home/rishabh/anaconda3/envs/rgb_stacking/lib/python3.9/site-packages/absl/app.py", line 312, in run _run_main(main, args)
      File "/home/rishabh/anaconda3/envs/rgb_stacking/lib/python3.9/site-packages/absl/app.py", line 258, in _run_main sys.exit(main(argv))
      File "/home/rishabh/workspace/rgb_stacking/rgb_stacking/main.py", line 85, in main with environment.rgb_stacking(object_triplet=_OBJECT_TRIPLET.value) as env:
      File "/home/rishabh/workspace/rgb_stacking/rgb_stacking/environment.py", line 169, in rgb_stacking task_env = env_builder.build_base_env()
      File "/home/rishabh/anaconda3/envs/rgb_stacking/lib/python3.9/site-packages/dm_robotics/moma/subtask_env_builder.py", line 83, in build_base_env self._base_env = composer.Environment(self._task,
      File "/home/rishabh/anaconda3/envs/rgb_stacking/lib/python3.9/site-packages/dm_control/composer/environment.py", line 298, in init super().init(
      File "/home/rishabh/anaconda3/envs/rgb_stacking/lib/python3.9/site-packages/dm_control/composer/environment.py", line 203, in init self._recompile_physics_and_update_observables()
      File "/home/rishabh/anaconda3/envs/rgb_stacking/lib/python3.9/site-packages/dm_control/composer/environment.py", line 223, in _recompile_physics_and_update_observables self._hooks.after_compile(self._physics_proxy, self._random_state)
      File "/home/rishabh/anaconda3/envs/rgb_stacking/lib/python3.9/site-packages/dm_control/composer/environment.py", line 112, in after_compile self._task.after_compile(physics, random_state)
      File "/home/rishabh/anaconda3/envs/rgb_stacking/lib/python3.9/site-packages/dm_robotics/moma/base_task.py", line 200, in after_compile ef.after_compile(self.root_entity.mjcf_model.root)
      File "/home/rishabh/anaconda3/envs/rgb_stacking/lib/python3.9/site-packages/dm_robotics/moma/effectors/constrained_actions_effectors.py", line 110, in after_compile self._delegate.after_compile(mjcf_model)
      File "/home/rishabh/anaconda3/envs/rgb_stacking/lib/python3.9/site-packages/dm_robotics/moma/effectors/cartesian_4d_velocity_effector.py", line 150, in after_compile self._effector_6d.after_compile(mjcf_model)
      File "/home/rishabh/anaconda3/envs/rgb_stacking/lib/python3.9/site-packages/dm_robotics/moma/effectors/cartesian_6d_velocity_effector.py", line 363, in after_compile qp_params = _CartesianVelocityMapperParams()
    ModuleNotFoundError: No module named 'mujoco'

    opened by jangirrishabh 0
  • no module named mujoco


    Hi, thanks for sharing this repo. When I was using it, I got the following error:

    Traceback (most recent call last):
      File "/home/lwt/anaconda3/envs/rgb/lib/python3.7/runpy.py", line 193, in _run_module_as_main "main", mod_spec)
      File "/home/lwt/anaconda3/envs/rgb/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals)
      File "/home/lwt/github_test/rgb_stacking/rgb_stacking/main.py", line 116, in app.run(main)
      File "/home/lwt/anaconda3/envs/rgb/lib/python3.7/site-packages/absl/app.py", line 312, in run _run_main(main, args)
      File "/home/lwt/anaconda3/envs/rgb/lib/python3.7/site-packages/absl/app.py", line 258, in _run_main sys.exit(main(argv))
      File "/home/lwt/github_test/rgb_stacking/rgb_stacking/main.py", line 85, in main with environment.rgb_stacking(object_triplet=_OBJECT_TRIPLET.value) as env:
      File "/home/lwt/github_test/rgb_stacking/rgb_stacking/environment.py", line 169, in rgb_stacking task_env = env_builder.build_base_env()
      File "/home/lwt/anaconda3/envs/rgb/lib/python3.7/site-packages/dm_robotics/moma/subtask_env_builder.py", line 84, in build_base_env strip_singleton_obs_buffer_dim=True)
      File "/home/lwt/anaconda3/envs/rgb/lib/python3.7/site-packages/dm_control/composer/environment.py", line 304, in init strip_singleton_obs_buffer_dim=strip_singleton_obs_buffer_dim)
      File "/home/lwt/anaconda3/envs/rgb/lib/python3.7/site-packages/dm_control/composer/environment.py", line 203, in init self._recompile_physics_and_update_observables()
      File "/home/lwt/anaconda3/envs/rgb/lib/python3.7/site-packages/dm_control/composer/environment.py", line 223, in _recompile_physics_and_update_observables self._hooks.after_compile(self._physics_proxy, self._random_state)
      File "/home/lwt/anaconda3/envs/rgb/lib/python3.7/site-packages/dm_control/composer/environment.py", line 112, in after_compile self._task.after_compile(physics, random_state)
      File "/home/lwt/anaconda3/envs/rgb/lib/python3.7/site-packages/dm_robotics/moma/base_task.py", line 200, in after_compile ef.after_compile(self.root_entity.mjcf_model.root)
      File "/home/lwt/anaconda3/envs/rgb/lib/python3.7/site-packages/dm_robotics/moma/effectors/constrained_actions_effectors.py", line 110, in after_compile self._delegate.after_compile(mjcf_model)
      File "/home/lwt/anaconda3/envs/rgb/lib/python3.7/site-packages/dm_robotics/moma/effectors/cartesian_4d_velocity_effector.py", line 150, in after_compile self._effector_6d.after_compile(mjcf_model)
      File "/home/lwt/anaconda3/envs/rgb/lib/python3.7/site-packages/dm_robotics/moma/effectors/cartesian_6d_velocity_effector.py", line 363, in after_compile qp_params = _CartesianVelocityMapperParams()
    ModuleNotFoundError: No module named 'mujoco'

    I'm pretty sure mujoco is installed correctly:

      export MUJOCO_GL="glfw"
      export MJLIB_PATH=$HOME/.mujoco/mujoco200/bin/libmujoco200.so
      export MJKEY_PATH=$HOME/.mujoco/mujoco200/mjkey.txt
      export LD_LIBRARY_PATH=$HOME/.mujoco/mujoco200/bin:$LD_LIBRARY_PATH
      export MUJOCO_PY_MJPRO_PATH=$HOME/.mujoco/mujoco200/
      export MUJOCO_PY_MJKEY_PATH=$HOME/.mujoco/mujoco200/mjkey.txt

    Can you offer me some suggestions?

    opened by papercut-linkin 0
  • Setting environment seed


    Hi,

    I was just wondering if I am missing an obvious way to set an environment seed so that the arm and rgb objects are initialised in a set position and with the same rgb object combinations / deformations for a given seed?

    If not, I've modified a local version of the code in the following ways, which seems to have the desired effect. Does this look about right, or might it fail to replicate something?

    1. To set the seed for the arm position, I pass an np.random.RandomState(seed) instance as an additional argument (i.e. random_state=) when initialising the _base_env Environment object in subtask_env_builder.py in the lines here. This seems to make the arm always start in the same place for a given seed.
    2. To fix the position and the type/deformation of the rgb objects, I needed to make two changes:
    • I added a sort (RGB_OBJECTS_TRAIN_SET.sort()) below these lines, where I think the valid object combinations are initialised. Without this, the order of the valid object deformations seems to be somewhat random (I'm probably missing another seed somewhere, but this works as a patch).
    • Then I set np.random.seed(seed) (using the same random state as in 1.) at some point before these lines, where the specific objects seem to be chosen.
    3. One thing that is still slightly confusing me: how do the above seeds, set when the environment is initialised, feed through to env.reset()?

    Any thoughts on whether this should work / makes sense / if there is an easier way to do it would be really appreciated!

    opened by joshua-a-harris 0
  • Domain Randomization Help


    Hi, I am trying to implement domain randomization.

    1. Between scene_initialization and domain initialization, where would be the best place to randomize the physics parameters (e.g. mass) mentioned in the paper?

    2. How does one correctly randomize the friction? Can you please examine my code below?

    3. How does one implement action delay? Are the rewards and observations delayed too?

    @dataclasses.dataclass
    class DomainRandomizer:

    def __init__(self, basket, props, robot):

        self.props = props
        self.basket = basket
        self.arm = robot.arm
        self.gripper = robot.gripper
    
        self.gripper_friction = Uniform([0.3, 0.1, 0.05], [0.6, 0.1, 0.005])
    
        friction, mass, low, hi = np.array([1, 0.005, 0.0001], float), 0.201, 0.9, 1.1
        self.object_rand = dict(friction=Uniform(friction * low, friction * hi),
                                mass=Uniform(mass * low, mass * hi))
    
        friction = np.array([0.1, 0.1, 0.0001], float)
        self.arm_rand = dict(friction=Uniform(friction * low, friction * hi),
                             damping=Uniform(0.1 * low, 0.1 * hi),
                             armature=Uniform(low, hi),
                             friction_loss=Uniform(0.3 * low, 0.3 * hi))
    
        friction = np.array([1, 0.005, 0.0001], float)
        self.hand_rand = dict(friction=Uniform(friction * low, friction * hi),
                              driver_damping=Uniform(0.2 * low, 0.2 * hi),
                              armature=Uniform(0.1 * low, 0.1 * hi),
                              spring_link_damping=Uniform(0.00125 * low, 0.00125 * hi))
    
        friction = np.array([1.0, 0.001, 0.001], float)
        self.basket_friction = Uniform(friction * low, friction * hi)
    
        gear_ratio = np.array([1, 0, 0, 0, 0, 0], float)
        self.actuator_gear = Uniform(gear_ratio * low, gear_ratio * hi)
    
    
    def __call__(self, random_state: np.random.RandomState) -> bool:
        for p in self.props:
            collision_geom = p.mjcf_model.find_all('geom')[1]
            collision_geom.friction = self.object_rand['friction'].sample()
    
        basket_geoms = self.basket.mjcf_model.find_all('geom')
        for b in basket_geoms:
            b.friction = self.basket_friction.sample()
    
        hand_driver = self.gripper.mjcf_model.find('default', 'driver')
        hand_spring_link = self.gripper.mjcf_model.find('default', 'spring_link')
        hand = self.gripper.mjcf_model.find('default', 'reinforced_fingertip')
    
        hand.geom.friction = self.hand_rand['friction'].sample()
        hand_driver.joint.armature = self.hand_rand['armature'].sample()
        hand_driver.joint.damping = self.hand_rand['driver_damping'].sample()
        hand_spring_link.joint.damping = self.hand_rand['spring_link_damping'].sample()
    
        for joint in self.arm.joints:
            joint.armature = self.arm_rand['armature'].sample()
            joint.damping = self.arm_rand['damping'].sample()
            joint.frictionloss = self.arm_rand['friction_loss'].sample()
    
        for actuator in self.arm.actuators:
            actuator.gear = self.actuator_gear.sample()
    
        geoms = self.arm.mjcf_model.find_all('geom')
        for g in geoms:
            g.friction = self.arm_rand['friction'].sample()
    
        return True
    
    opened by ava6969 2