DeepMind Robotics

Libraries, tools and tasks created and used at DeepMind Robotics.

Package overview

Package               Summary
Transformations       Rigid body transformations
Geometry              Scene and robot geometry primitives
Vision                Visual blob detection and tracking
AgentFlow             Reinforcement learning agent composition library
Manipulation          "RGB" object meshes for manipulation tasks
MoMa                  Manipulation environment definition library, for simulated and real robots
Controllers           QP-optimization-based Cartesian controller
Controller Bindings   Python bindings for the controllers
Least Squares QP      QP task definition and solver

Installation

These libraries are distributed on PyPI; the packages are:

  • dm_robotics-transformations
  • dm_robotics-geometry
  • dm_robotics-vision
  • dm_robotics-agentflow
  • dm_robotics-manipulation
  • dm_robotics-moma
  • dm_robotics-controllers
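
Each package can be installed with pip (for example, pip install dm_robotics-transformations). As a hedged sanity check, the pure-Python packages should then be importable under the dm_robotics namespace; the module paths below are inferred from the package names above:

    # Minimal import check for the pure-Python packages (adjust to what you installed).
    import dm_robotics.transformations
    import dm_robotics.geometry
    import dm_robotics.agentflow

    print("dm_robotics packages imported")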

Dependencies

MoMa, Manipulation, and Controllers depend on MuJoCo; the other packages do not. See the individual packages for more information on their dependencies.

Building

To build and test the libraries, run build.sh. This script assumes:

  • MuJoCo is installed and licensed.
  • dm_control is installed.
  • cmake version >= 3.20.2 is installed.
  • Python 3.6, 3.7, or 3.8 and the Python system headers are installed.
  • GCC version 9 or later is installed.
  • numpy is installed.

The Python libraries are tested with tox; the C++ code is built and tested with cmake.

Tox's distshare mechanism is used to share the built source distributions between packages.
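
For orientation only, a sketch of what a tox configuration using distshare can look like; this is not the repository's actual tox.ini, just an illustration of the mechanism:

    [tox]
    # Built source distributions are published to (and read from) this directory.
    distshare = {homedir}/.tox/distshare

    [testenv]
    # Consume an sdist built by another package's tox run (file name is hypothetical).
    deps =
        {distshare}/dm_robotics.transformations-*.zip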

Comments
  • Training specific agentflow subtaskoption

    Is there an example showing how to train a specific option policy? For example, from the AgentFlow tutorial, how can we set up training for the ExamplePolicy? The problem is that the output of step in the main_loop may have a different observation_spec and action_spec than the policy.

    # Stubs for pulling observation and sending action to some external system.
    observation_cb = ExampleObservationUpdater()
    action_cb = ExampleActionSender()
    
    # Create an environment that forwards the observation and action calls.
    env = ProxyEnvironment(observation_cb, action_cb)
    
    # Stub policy that runs the desired agent.
    policy = ExamplePolicy(action_cb.action_spec(), "agent")
    
    # Wrap policy into an agent that logs to the terminal.
    task = ExampleSubTask(env.observation_spec(), action_cb.action_spec(), 10)
    logger = print_logger.PrintLogger()
    aggregator = subtask_logger.EpisodeReturnAggregator()
    logging_observer = subtask_logger.SubTaskLogger(logger, aggregator)
    agent = subtask.SubTaskOption(task, policy, [logging_observer])
    
    reset_op = ExampleScriptedOption(action_cb.action_spec(), "reset", 3)
    main_loop = loop_ops.Repeat(5, sequence.Sequence([reset_op, agent]))
    
    # Run the episode.
    timestep = env.reset()
    while True:
      action = main_loop.step(timestep)
      timestep = env.step(action)
    
      # Terminate if the environment or main_loop requests it.
      if timestep.last() or (main_loop.pterm(timestep) > np.random.rand()):
        if not timestep.last():
          termination_timestep = timestep._replace(step_type=dm_env.StepType.LAST)
          main_loop.step(termination_timestep)
        break
    
    opened by jangirrishabh 9
  • Update PIP subpackages for MuJoCo 2.1.1

    Hi!

    Big fan of the subpackages in this repo such as dm_robotics-transformations. Would it be possible to upload binaries that support the latest MuJoCo version? I can't use them at the moment because they require MuJoCo 210.

    Cheers!

    opened by kevinzakka 5
  • Installation: Could not find header files

    Hi, the installed dm_robotics is unable to find the MuJoCo header files. Any help would be greatly appreciated!

    ERROR:

        In file included from /home/rishabh/workspace/dm_robotics/cpp/mujoco/src/mjlib.cc:15:
        /home/rishabh/workspace/dm_robotics/cpp/mujoco/include/dm_robotics/mujoco/mjlib.h:22:10: fatal error: mjmodel.h: No such file or directory
           22 | #include "mjmodel.h"  // NOLINT

    Here are my environment variables and build settings:

        MJLIB_PATH: /home/rishabh/.mujoco/mujoco/lib/libmujoco.so
        Using cmake command 'cmake'
        Using python command '/home/rishabh/anaconda3/envs/packing/bin/python'
        Using tox command '/home/rishabh/anaconda3/envs/packing/bin/python -m tox'
        Using python version '3.8'

    opened by jangirrishabh 3
  • IK control frame

    Thank you for sharing this great codebase!

    I have a question regarding the control frame for the Sawyer.

    Why is Cartesian 6D control expressed in the world frame's orientation? https://github.com/deepmind/dm_robotics/blob/667a0776fbbe217867fc65d31a3de6749bbf3d54/py/moma/effectors/cartesian_6d_velocity_effector.py#L380

    Is there a simple way to modify it to express velocity targets in the local orientation frame?
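
    For reference, a minimal numpy sketch of the change of frame this would involve, assuming the end-effector orientation in the world frame is available as a 3x3 rotation matrix; the names below (twist_local_to_world, R_world_ee, ee_site) are hypothetical placeholders, not dm_robotics API:

    import numpy as np

    def twist_local_to_world(twist_local: np.ndarray, R_world_ee: np.ndarray) -> np.ndarray:
      """Rotates a 6D velocity [vx, vy, vz, wx, wy, wz] from the end-effector frame to world."""
      twist_world = np.empty(6)
      twist_world[:3] = R_world_ee @ twist_local[:3]  # linear velocity
      twist_world[3:] = R_world_ee @ twist_local[3:]  # angular velocity
      return twist_world

    # The rotation matrix could come from forward kinematics, e.g. (hypothetical usage):
    # R_world_ee = physics.bind(ee_site).xmat.reshape(3, 3)
    # world_twist = twist_local_to_world(local_twist, R_world_ee)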

    opened by ikostrikov 3
  • Small update to wording

    This pull request is just to improve the wording. The use of "manipulate" was misleading in this context; it has been replaced with wording that fits the context.

    opened by iron76 3
  • Trying to perform domain randomization; can't figure out how to update the friction for the arm

    This is my current code; I add it together with the other initializers:

    import dataclasses

    import numpy as np
    from dm_control import mjcf
    from dm_robotics.moma.entity_initializer import base_initializer
    from rgb_stacking.utils.dr.noise import NoiseDistribution, Normal, LogUniform, Uniform


    @dataclasses.dataclass
    class DomainRandomizer(base_initializer.Initializer):

        def __init__(self, basket, props, robot):
            self.props = props
            self.basket = basket
            self.arm = robot.arm
            self.gripper = robot.gripper

            friction, mass, low, hi = np.array([0.9, 0.001, 0.001], float), 0.201, 0.9, 1.1
            self.object_rand = dict(friction=Uniform(friction * low, friction * hi),
                                    mass=Uniform(mass * low, mass * hi))

            friction = np.array([0.1, 0.1, 0.0001], float)
            self.arm_rand = dict(friction=Uniform(friction * low, friction * hi),
                                 damping=Uniform(0.1 * low, 0.1 * hi),
                                 armature=Uniform(low, hi),
                                 friction_loss=Uniform(0.3 * low, 0.3 * hi))

            friction = np.array([1, 0.005, 0.0001], float)
            self.hand_rand = dict(friction=Uniform(friction * low, friction * hi),
                                  driver_damping=Uniform(0.1 * low, 0.1 * hi),
                                  armature=Uniform(low, hi),
                                  spring_link_damping=Uniform(0.3 * low, 0.3 * hi))

            friction = np.array([1.0, 0.001, 0.001], float)
            self.basket_friction = Uniform(friction * low, friction * hi)

            friction = np.array([1, 0, 0, 0, 0, 0], float)
            self.actuator_gear = Uniform(friction * low, friction * hi)

        def __call__(self, physics: mjcf.Physics, random_state: np.random.RandomState) -> bool:
            # Randomize prop (object) friction.
            for p in self.props:
                collision_geom = p.mjcf_model.find_all('geom')[1]
                collision_geom.friction = self.object_rand['friction'].sample()

            # Randomize basket friction.
            basket_geoms = self.basket.mjcf_model.find_all('geom')
            for b in basket_geoms:
                b.friction = self.basket_friction.sample()

            # Randomize gripper friction, damping and armature.
            hand_driver = self.gripper.mjcf_model.find('default', 'driver')
            hand_spring_link = self.gripper.mjcf_model.find('default', 'spring_link')
            hand = self.gripper.mjcf_model.find('default', 'reinforced_fingertip')

            hand.geom.friction = self.hand_rand['friction'].sample()
            hand_driver.joint.armature = self.hand_rand['armature'].sample()
            hand_driver.joint.damping = self.hand_rand['driver_damping'].sample()
            hand_spring_link.joint.damping = self.hand_rand['spring_link_damping'].sample()

            # Randomize arm joint, actuator and geom properties.
            for joint in self.arm.joints:
                joint.armature = self.arm_rand['armature'].sample()
                joint.damping = self.arm_rand['damping'].sample()
                joint.frictionloss = self.arm_rand['friction_loss'].sample()

            for actuator in self.arm.actuators:
                actuator.gear = self.actuator_gear.sample()

            geoms = self.arm.mjcf_model.find_all('geom')
            for g in geoms:
                g.friction = self.arm_rand['friction'].sample()

            return True
    
    opened by ava6969 2
  • UnparsedFlagAccessError when calling env.reset()

    Hi,

    I keep getting this weird error when calling env.reset(). I only get this error when running a Jupyter notebook:

    ~/miniconda3/lib/python3.9/site-packages/dm_robotics/agentflow/spec_utils.py in debugging_flag()
         45
         46 def debugging_flag() -> bool:
    ---> 47   return FLAGS.debug_specs
         48
         49

    Error Message: UnparsedFlagAccessError: Trying to access flag --debug_specs before flags were parsed.

    Defining the absl flag by myself does not help: DuplicateFlagError: The flag 'debug_specs' is defined twice. First from dm_robotics.agentflow.spec_utils, Second from /home/ztan/miniconda3/lib/python3.9/site-packages/ipykernel_launcher.py. Description from first occurrence: Debugging switch for checking values match specs.

    Any ideas why this keeps happening? Thanks a lot
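
    As a hedged aside (not a confirmed fix): the error suggests the absl flags are simply never parsed in the notebook process, so one common workaround is to parse an empty command line before the first flag access:

    # Run once near the top of the notebook; 'notebook' just stands in for argv[0].
    from absl import flags
    flags.FLAGS(['notebook'])
    # Alternatively, mark the flags as parsed without supplying arguments:
    # flags.FLAGS.mark_as_parsed()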

    opened by zhenbangt 2
  • moma.utils.mujoco_rendering calls undefined method

    This line tries to call the update method of dm_control.viewer.gui.glfw_gui.GlfwWindow, which doesn't exist: https://github.com/deepmind/dm_robotics/blob/48d5f0bfd76ad497faabd5823a25d89d4526b92e/py/moma/utils/mujoco_rendering.py#L146 The method would be easy to implement (a non-blocking function that renders the given pixels, similar to the event loop). Interestingly, this is not caught by the unit test, as that test uses a mock object for the viewer.

    opened by JeanElsner 1
  • Joint F/T Sensor Readings Advice

    Hi!

    I've been playing around with F/T sensors in MuJoCo over the last few days and was wondering if I could get some advice / explanations for some things I am observing.

    1. Is the recommendation to place F/T sensor sites right after the joint definition in the XML tree? I see that the sawyer arm does this dynamically via PyMJCF by adding a site to the parent element of the joint. However, according to the MuJoCo documentation here, it recommends "creating a dummy body welded to its parent".
    2. How come your torque readings only read every third value? Wouldn't you want to project onto the joint's axis (see the numpy sketch after this list)? I guess this is overly specific to the Sawyer, where all the joints have a Z-axis rotation (0 0 1)?
    3. I wrote a few unit tests where I apply a force or a torque at the parent body and then check that the sensor reading matches the applied force. I did this by mimicking this unit test here in dm_control for the Kinova arm. Unfortunately, I don't get matching values for desired / observed forces -- the only way that happens is if I step the physics until qvel is below a threshold and then read from the sensor. I would love an explanation of what is different about the Kinova arm such that applying a torque instantly shows up in the reading, but this doesn't happen for other models I've tried.
    4. Lastly, I noticed that things like frictionloss, damping and armature affect these sensory readings. Was curious if I could get some intuition about that as well!
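
    A minimal numpy sketch of the projection mentioned in question 2; the values and names here are placeholders, not dm_robotics API:

    import numpy as np

    def joint_torque_from_sensor(torque_xyz: np.ndarray, joint_axis: np.ndarray) -> float:
      """Projects a 3-axis torque sensor reading onto the joint's rotation axis."""
      axis = joint_axis / np.linalg.norm(joint_axis)
      return float(np.dot(torque_xyz, axis))

    # For a joint whose axis is (0, 0, 1), this reduces to reading every third
    # value (the z component), which matches the behaviour described above.
    print(joint_torque_from_sensor(np.array([0.2, -0.1, 1.5]), np.array([0., 0., 1.])))  # -> 1.5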

    Just to make things easy, I made a very simple 2-link arm and wrote a minimal example to unit test force and torque if that helps clarify my questions! You can find them here: https://gist.github.com/kevinzakka/701a1b9dea30bc675a4cf40b2af01659

    @shacklestone @alaurens Would really appreciate your help!!

    opened by kevinzakka 1
  • AttributeError: 'GlfwWindow' object has no attribute 'update'

    I installed dm_robotics-manipulation via pip, forked https://github.com/deepmind/dm_robotics/blob/main/py/moma/tasks/example_task/, and tried to execute run.py, but I am getting:

    AttributeError: 'GlfwWindow' object has no attribute 'update'

    Note: I have mujoco210 placed in ~/.mujoco.

    opened by kevinzakka 1
  • Update dependencies

    This pull request updates the dependencies to the currently newest versions of dm_control (1.0.9) and the corresponding MuJoCo release (2.3.1.post1). It also allows building packages for Python 3.10, which is the system interpreter for the current Ubuntu LTS (22.04).

    cla:yes 
    opened by JeanElsner 2
  • Update plans

    Hi guys,

    I really like this repository, especially the MoMa approach. The abstraction into MoMa effectors/sensors is really useful for getting hardware into the loop, and I've already managed to integrate the Franka Emika robot pretty well. I'm wondering if you have any plans to integrate upstream developments (dm_control/MuJoCo, Python 3.10+). Pip packages for the latter in particular would be convenient, as Python 3.10 is the system interpreter for Ubuntu 22.04 LTS. I haven't built the wheels for Python 3.10 myself yet, but I think it would be rather straightforward. Newer MuJoCo versions have some breaking changes, I think, that would require a little more legwork. If you don't have any concrete update plans, I'd be happy to start looking into it once I find the time.

    opened by JeanElsner 4
  • Dm_Control Dependencies

    Hello, there is currently an issue when using the IK modules with dm_control==1.0.1 (error below). This is resolved by using dm_control 1.0.5. Could you please update the dependencies?

    Traceback (most recent call last):
      File "/home/sasha/anaconda3/envs/poly-backup/lib/python3.8/site-packages/zerorpc/core.py", line 153, in _async_task
        functor.pattern.process_call(self._context, bufchan, event, functor)
      File "/home/sasha/anaconda3/envs/poly-backup/lib/python3.8/site-packages/zerorpc/patterns.py", line 30, in process_call
        result = functor(*req_event.args)
      File "/home/sasha/anaconda3/envs/poly-backup/lib/python3.8/site-packages/zerorpc/decorators.py", line 44, in __call__
        return self._functor(*args, **kargs)
      File "/home/sasha/Desktop/household_robot_dataset/r2d2/franka/robot.py", line 31, in launch_robot
        self._ik_solver = RobotIKSolver()
      File "/home/sasha/Desktop/household_robot_dataset/r2d2/real_robot_ik/robot_ik_solver.py", line 13, in __init__
        self._physics = mjcf.Physics.from_mjcf_model(self._arm.mjcf_model)
      File "/home/sasha/anaconda3/envs/poly-backup/lib/python3.8/site-packages/dm_control/mjcf/physics.py", line 495, in from_mjcf_model
        return cls.from_xml_string(xml_string=xml_string, assets=assets)
      File "/home/sasha/anaconda3/envs/poly-backup/lib/python3.8/site-packages/dm_control/mujoco/engine.py", line 424, in from_xml_string
        return cls.from_model(model)
      File "/home/sasha/anaconda3/envs/poly-backup/lib/python3.8/site-packages/dm_control/mujoco/engine.py", line 407, in from_model
        return cls(data)
      File "/home/sasha/anaconda3/envs/poly-backup/lib/python3.8/site-packages/dm_control/mujoco/engine.py", line 122, in __init__
        self._reload_from_data(data)
      File "/home/sasha/anaconda3/envs/poly-backup/lib/python3.8/site-packages/dm_control/mjcf/physics.py", line 530, in _reload_from_data
        super()._reload_from_data(data)
      File "/home/sasha/anaconda3/envs/poly-backup/lib/python3.8/site-packages/dm_control/mujoco/engine.py", line 370, in _reload_from_data
        self._warnings_before = np.empty_like(self._warnings)
      File "<__array_function__ internals>", line 180, in empty_like
    RuntimeError: Caught an unknown exception!

    opened by AlexanderKhazatsky 2
  • Can't install dm-robotics-moma and dm-robotics-manipulation

    Hi, I get this message when I attempt to install dm-robotics-moma or dm-robotics-manipulation:

    ERROR: Could not find a version that satisfies the requirement dm-robotics-controllers (from dm-robotics-moma) (from versions: none)
    ERROR: No matching distribution found for dm-robotics-controllers

    I think the issue comes from the fact that dm-robotics-controllers is not a pure-Python package. If I copy the whole dm_robotics package into my local directory, extract the controllers folder, and put it inside the dm_robotics folder, it still doesn't run, since files such as cartesian_6d_to_joint_velocity_mapper are written in C++. Is it possible to have a complete Python version of the dm_robotics package as soon as possible? I'm working on a robotics project and the deadline is really close. Thanks for your understanding!

    FYI I use macOS.

    opened by alfaevc 6