Isaac Gym Environments for Legged Robots

Overview


This repository provides the environment used to train ANYmal (and other robots) to walk on rough terrain using NVIDIA's Isaac Gym. It includes all components needed for sim-to-real transfer: actuator network, friction & mass randomization, noisy observations and random pushes during training (illustrated in the sketch below).
Maintainer: Nikita Rudin
Affiliation: Robotic Systems Lab, ETH Zurich
Contact: [email protected]
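
As a rough illustration of the randomization components listed above, consider the sketch below. All function names, attribute names and value ranges are hypothetical and for illustration only; they are not the repository's actual API.

    import torch

    # Hypothetical helpers sketching the sim-to-real ingredients above.
    def randomize_friction_and_mass(num_envs, device="cpu"):
        # Sample a per-environment friction coefficient and added base mass.
        friction = torch.empty(num_envs, device=device).uniform_(0.5, 1.25)
        added_mass = torch.empty(num_envs, device=device).uniform_(-1.0, 1.0)
        return friction, added_mass

    def add_observation_noise(obs, noise_scale=0.05):
        # Additive uniform noise on every observation channel.
        return obs + (2.0 * torch.rand_like(obs) - 1.0) * noise_scale

    def apply_random_push(base_lin_vel, max_push_vel=1.0):
        # Perturb the base xy velocity to emulate a random external push.
        push = (2.0 * torch.rand_like(base_lin_vel[:, :2]) - 1.0) * max_push_vel
        base_lin_vel[:, :2] += push
        return base_lin_vel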

Useful Links

Project website: https://leggedrobotics.github.io/legged_gym/
Paper: https://arxiv.org/abs/2109.11978

Installation

  1. Create a new python virtual env with python 3.6, 3.7 or 3.8 (3.8 recommended)
  2. Install pytorch 1.10 with cuda-11.3:
    • pip3 install torch==1.10.0+cu113 torchvision==0.11.1+cu113 torchaudio==0.10.0+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
  3. Install Isaac Gym
    • Download and install Isaac Gym Preview 3 (Preview 2 will not work!) from https://developer.nvidia.com/isaac-gym
    • cd isaacgym/python && pip install -e .
    • Try running an example: cd examples && python 1080_balls_of_solitude.py
    • For troubleshooting, check the docs at isaacgym/docs/index.html
  4. Install rsl_rl (PPO implementation)
    • Clone https://github.com/leggedrobotics/rsl_rl
    • cd rsl_rl && pip install -e .
  5. Install legged_gym
    • Clone this repository
    • cd legged_gym && pip install -e .

Code Structure

  1. Each environment is defined by an env file (legged_robot.py) and a config file (legged_robot_config.py). The config file contains two classes: one containing all the environment parameters (LeggedRobotCfg) and one for the training parameters (LeggedRobotCfgPPO).
  2. Both env and config classes use inheritance.
  3. Each non-zero reward scale specified in cfg will add a function with a corresponding name to the list of elements which will be summed to get the total reward (see the sketch after this list).
  4. Tasks must be registered using task_registry.register(name, EnvClass, EnvConfig, TrainConfig). This is done in envs/__init__.py, but can also be done from outside of this repository.
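
The sketch below illustrates points 3 and 4 together. The import paths are assumed from the repository layout, and MyRobot, MyRobotCfg and MyRobotCfgPPO are hypothetical placeholders; treat this as a sketch rather than verbatim repository code.

    import torch
    from legged_gym.envs.base.legged_robot import LeggedRobot
    from legged_gym.utils.task_registry import task_registry
    from my_robot_config import MyRobotCfg, MyRobotCfgPPO  # hypothetical configs

    class MyRobot(LeggedRobot):
        # With a non-zero cfg.rewards.scales.lin_vel_z, a method named
        # _reward_lin_vel_z is collected and its weighted value is added
        # to the total reward.
        def _reward_lin_vel_z(self):
            # Penalize vertical base velocity.
            return torch.square(self.base_lin_vel[:, 2])

    # Registration, normally done in envs/__init__.py:
    task_registry.register("my_robot", MyRobot, MyRobotCfg(), MyRobotCfgPPO())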

Usage

  1. Train (see the script sketch after this list):
    python isaacgym_anymal/scripts/train.py --task=anymal_c_flat
    • To run on CPU add following arguments: --sim_device=cpu, --rl_device=cpu (sim on CPU and rl on GPU is possible).
    • To run headless (no rendering) add --headless.
    • Important: To improve performance, once the training starts press v to stop the rendering. You can then enable it later to check the progress.
    • The trained policy is saved in isaacgym_anymal/logs/<experiment_name>/<date_time>_<run_name>/model_<iteration>.pt, where <experiment_name> and <run_name> are defined in the train config.
    • The following command line arguments override the values set in the config files:
    • --task TASK: Task name.
    • --resume: Resume training from a checkpoint.
    • --experiment_name EXPERIMENT_NAME: Name of the experiment to run or load.
    • --run_name RUN_NAME: Name of the run.
    • --load_run LOAD_RUN: Name of the run to load when resume=True. If -1: will load the last run.
    • --checkpoint CHECKPOINT: Saved model checkpoint number. If -1: will load the last checkpoint.
    • --num_envs NUM_ENVS: Number of environments to create.
    • --seed SEED: Random seed.
    • --max_iterations MAX_ITERATIONS: Maximum number of training iterations.
  2. Play a trained policy:
    python isaacgym_anymal/scripts/play.py --task=anymal_c_flat
    • By default the loaded policy is the last model of the last run of the experiment folder.
    • Other runs/model iterations can be selected by setting load_run and checkpoint in the train config.
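
For reference, the training entry point is small. The sketch below mirrors what a train.py in the spirit of this repository does, assuming the task_registry helpers keep the signatures shown; verify them against your checkout.

    import isaacgym  # must be imported before torch
    from legged_gym.utils import get_args, task_registry

    def train(args):
        # Build the vectorized environment and the PPO runner for the task,
        # applying any command-line overrides (num_envs, seed, ...).
        env, env_cfg = task_registry.make_env(name=args.task, args=args)
        ppo_runner, train_cfg = task_registry.make_alg_runner(env=env, name=args.task, args=args)
        ppo_runner.learn(num_learning_iterations=train_cfg.runner.max_iterations)

    if __name__ == "__main__":
        train(get_args())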

Adding a new environment

The base environment legged_robot implements a rough terrain locomotion task. The corresponding cfg does not specify a robot asset (URDF/MJCF) and defines no reward scales.

  1. Add a new folder to envs/ with <your_env>_config.py, which inherits from an existing environment cfg (see the config sketch after this list).
  2. If adding a new robot:
    • Add the corresponding assets to resources/.
    • In cfg set the asset path, define body names, default_joint_positions and PD gains. Specify the desired train_cfg and the name of the environment (python class).
    • In train_cfg set experiment_name and run_name
  3. (If needed) implement your environment in <your_env>.py, inherit from an existing environment, overwrite the desired functions and/or add your reward functions.
  4. Register your env in isaacgym_anymal/envs/__init__.py.
  5. Modify/Tune other parameters in your cfg, cfg_train as needed. To remove a reward set its scale to zero. Do not modify parameters of other envs!
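
A minimal config sketch for step 2, assuming the nested-class layout of LeggedRobotCfg and LeggedRobotCfgPPO; the robot name, joint names and all values are placeholders to adapt to your robot.

    from legged_gym.envs.base.legged_robot_config import LeggedRobotCfg, LeggedRobotCfgPPO

    class MyRobotCfg(LeggedRobotCfg):
        class init_state(LeggedRobotCfg.init_state):
            # Target joint angles when the action is zero.
            default_joint_angles = {"FL_hip_joint": 0.1, "FR_hip_joint": -0.1,
                                    "RL_hip_joint": 0.1, "RR_hip_joint": -0.1}

        class control(LeggedRobotCfg.control):
            stiffness = {"joint": 80.0}  # P gains [N*m/rad]
            damping = {"joint": 2.0}     # D gains [N*m*s/rad]

        class asset(LeggedRobotCfg.asset):
            file = "{LEGGED_GYM_ROOT_DIR}/resources/robots/my_robot/urdf/my_robot.urdf"
            foot_name = "foot"  # body name used to identify the feet

    class MyRobotCfgPPO(LeggedRobotCfgPPO):
        class runner(LeggedRobotCfgPPO.runner):
            experiment_name = "my_robot"
            run_name = ""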

Troubleshooting

  1. If you get the following error: ImportError: libpython3.8m.so.1.0: cannot open shared object file: No such file or directory, do: sudo apt install libpython3.8

Known Issues

  1. The contact forces reported by net_contact_force_tensor are unreliable when simulating on GPU with a triangle mesh terrain. A workaround is to use force sensors, but the forces are propagated through the sensors of consecutive bodies, resulting in undesirable behaviour. However, for a legged robot it is possible to add sensors to the feet/end effectors only and get the expected results. When using the force sensors, make sure to exclude gravity from the reported forces with sensor_options.enable_forward_dynamics_forces. Example (a reward-usage sketch follows it):
    sensor_pose = gymapi.Transform()
    for name in feet_names:
        sensor_options = gymapi.ForceSensorProperties()
        sensor_options.enable_forward_dynamics_forces = False # for example gravity
        sensor_options.enable_constraint_solver_forces = True # for example contacts
        sensor_options.use_world_frame = True # report forces in world frame (easier to get vertical components)
        index = self.gym.find_asset_rigid_body_index(robot_asset, name)
        self.gym.create_asset_force_sensor(robot_asset, index, sensor_pose, sensor_options)
    (...)

    sensor_tensor = self.gym.acquire_force_sensor_tensor(self.sim)
    self.gym.refresh_force_sensor_tensor(self.sim)
    force_sensor_readings = gymtorch.wrap_tensor(sensor_tensor)
    self.sensor_forces = force_sensor_readings.view(self.num_envs, 4, 6)[..., :3]
    (...)

    self.gym.refresh_force_sensor_tensor(self.sim)
    contact = self.sensor_forces[:, :, 2] > 1.
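
As an illustration of how these readings can be used, the contact flags above can feed a reward term via the naming convention from the Code Structure section. The sketch below is hypothetical, not code from the repository, and assumes self.sensor_forces is built as shown above.

    def _reward_feet_contact(self):
        # Hypothetical term rewarding at least two feet in contact, based on
        # the vertical component of the sensor forces (threshold in Newtons).
        contact = self.sensor_forces[:, :, 2] > 1.
        return (contact.sum(dim=1) >= 2).float()
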
Comments
  • Sim-to-real Implementation

    Hello,

    I am wondering if you have any guidelines for sim-to-real deployment. I am planning to deploy the learned policy on the Unitree A1 robot.

    Thanks in advance!

    opened by anahrendra 3
  • Terrain and A1/Cassie only working on CPU

    OS: Ubuntu 21.04, NVIDIA driver: 495, GPU: GTX 1660 Ti, PyTorch: 1.10.1+cu102

    Hi, I tried anymal_c_flat and it works fine on a GTX 1660 Ti using nvidia-driver-495. When I try to run anymal_c_rough it only works on the CPU pipeline; otherwise the terminal says "killed". Cassie works on the CPU pipeline (python3 train.py --task=cassie --num_envs=900 --sim_device=cpu) but will not let me run rl_device=cuda.

    How do I get it all running on GPU, or is my GPU not advanced enough?

    opened by sujitvasanth 3
  • Failed to train unitree aliengo robot using legged_gym

    In the official Aliengo URDF, the {FR,FL,RR,RL} thigh joints are defined as continuous. When we used this URDF to train the policy, the robot produced unstable actions and failed to stand up. Does legged_gym support Aliengo (continuous joints) training?

    bug 
    opened by tinnerhrhe 2
  • Unable to specify GPU device on multi-GPU setup

    Describe the bug: Unable to specify the GPU device to use on a multi-GPU setup.

    To reproduce:

    1. Execute python train.py --graphics_device_id=0 --task=a1
    2. In a separate terminal, execute python train.py --graphics_device_id=1 --task=a1
    3. Observe that in both terminals, the selected GPU device is still cuda:0.

    Expected behavior: The selected GPU device should show cuda:0 and cuda:1 in the different terminals.

    System (please complete the following information):

    • Commit: 9ddda29
    • OS: Ubuntu 18.04
    • GPU: 4x A5000
    • CUDA: 11.4
    • GPU Driver: 470.82.01
    bug 
    opened by derektan95 2
  • How are episode rewards calculated?

    I tried to add a reward term with bias = 0.5, i.e., it should be larger than 0.5 anyway, but the value is still close to zero at the first iteration.

    opened by zita-ch 2
  • Different URDF with Official Unitree A1

    Describe the bug: The URDF of A1 is different from the official URDF file at https://github.com/unitreerobotics/unitree_ros/blob/master/robots/a1_description/urdf/a1.urdf

    If I train a policy with your asset, can I transfer the policy to a real A1 robot?

    I tried to replace the URDF file with the official one, and I got a score of zero after 5000 epochs of training. How can I train a reasonable policy with the official asset?

    bug 
    opened by Baichenjia 1
  • My URDF Robots getting fly away, do you have any tips

    I'm trying to train my custom quadruped robot using this library, so I followed what you suggest in the README and it works so far.

    But after I run (with the get_args --task default set to my_custom)

    python train.py

    the simulation shows the robot flying away.

    Here are my collision images: [screenshot]

    And this is my URDF file: https://github.com/miercat0424/Custom_URDF-/blob/main/Quadruped_1.urdf

    I think I need to tune the inertial settings, but I don't have any idea how. Do you have any tips for making URDFs?

    bug 
    opened by miercat0424 1
  • There Is No develop Branch & Some Suggestions

    • The README file says cd rsl_rl && git checkout develop && pip install -e ., but there is no develop branch in rsl_rl. The same applies to cd legged_gym && git checkout develop && pip install -e .; there is no develop branch in this (legged_gym) repository either.
    • Besides, I think the pull request from sheim is reasonable: there is duplicated code, packages=find_packages(), which can make pip install -e . fail. It would be great if the author/maintainer could merge the pull request.
    • "Try running an example python examples/1080_balls_of_solitude.py" should be changed to: cd examples, then python 1080_balls_of_solitude.py
    enhancement 
    opened by Zhehui-Huang 1
  • Cannot access rsl_rl repository on BitBucket

    Step 4 of the Installation requires access to the rsl_rl repository.

    Install rsl_rl (PPO implementation) Clone https://bitbucket.org/leggedrobotics/rsl_rl/src/master/ cd rsl_rl && git checkout develop && pip install -e .

    However navigating to the URL gives an error:

    We can't let you see this page To access this page, you may need to log in with another account. You can also return to the previous page or go back to your dashboard.

    rsl_rl is also not visible from the main Robotic Systems Lab BitBucket page. Is the repository not publicly available?

    question 
    opened by mcx 1
  • Display correct actor name

    The actor name "anymal" was hardcoded, so it was always displayed no matter which task (robot) was run. This pull request changes that by specifying the name in the asset class.

    opened by xerus 0
  • RuntimeError: Error building extension 'gymtorch'

    I'm trying to execute scripts/train.py but it reports the error below.

    [screenshot of the error]

    My PyTorch versions all match your settings:

    torch==1.10.0+cu113 torchvision==0.11.1+cu113 torchaudio==0.10.0+cu113

    I have no idea what is happening with this error. Can anyone help me?

    opened by TowoC 0
  • How to train the actuator network?

    Thank you for the great work.

    I am wondering how I can train the actuator network for my Unitree robot. Do you have any repo/code that you used to obtain the actuator network for ANYmal C (which is in resources)?

    Thank you.

    opened by Kashu7100 1
  • Self Collisions

    Hi, when we test our trained policy on uneven terrain, we find the legs cross each other when we command the robot to turn around. Then we realized that self-collision is turned off in the config files for uneven terrain, both in env/anymal_c/mixed_terrains and in env/a1. Interestingly, on flat ground self-collision is enabled. Does self-collision cause problems when training on uneven terrain? How can we avoid the legs crossing each other when commanding the robot to turn around?

    opened by WangKeAlchemist 2
  • Getting cudaImportExternalMemory when trying to add camera sensor to a1 robot

    As I tried to place the camera on the Unitree A1 robot, it always gave me an error at the line gym.create_camera_sensor(env, camera_props), stating: [Error] [carb.gym.plugin] cudaImportExternalMemory failed on rgbImage buffer with error 999

    This is the same error I get when I try running isaacgym/python/examples/interop_torch.py.

    Could you help me with this? It's been a month since I asked for a solution on the NVIDIA Developer forums (https://forums.developer.nvidia.com/t/cudaimportexternalmemory-failed-on-rgbimage/212944), but no one has answered my query yet.

    opened by sprakashdash 0