FERM: A Framework for Efficient Robotic Manipulation

Overview

Framework for Efficient Robotic Manipulation

FERM is a framework that enables robots to learn tasks within an hour of real-time training. Project Page: https://sites.google.com/view/efficient-robotic-manipulation.

The project and this codebase are joint work by Albert Zhan*, Ruihan (Philip) Zhao*, Lerrel Pinto, Pieter Abbeel, and Misha Laskin. The implementation is based on RAD.

Getting Started

Create a conda environment, activate it, and install the necessary packages.

conda create -n ferm python=3.7
conda activate ferm
pip install -r requirements.txt

Running Experiments

Sample scripts are included in the scripts folder, covering training, evaluation, and behavior-cloning baselines. To launch an experiment, navigate to the project root folder and run

./scripts/script_name.sh

Robotic Experiments

To run robotic experiments, create a gym environment that interfaces with your robotic setup, register it, and substitute your registered environment name into the --domain_name flag. A minimal sketch is shown below.
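
For illustration, here is a minimal sketch of such an interface. MyRobotEnv, the observation/action shapes, and the reward logic are all hypothetical placeholders for your own setup, not part of FERM:

import gym
import numpy as np
from gym import spaces

class MyRobotEnv(gym.Env):
    # Hypothetical wrapper around a real robot's camera and controller.
    def __init__(self):
        super().__init__()
        self.observation_space = spaces.Box(low=0, high=255, shape=(3, 100, 100), dtype=np.uint8)
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(4,), dtype=np.float32)

    def reset(self):
        # Move the arm to its start pose and return the first camera frame.
        return np.zeros(self.observation_space.shape, dtype=np.uint8)

    def step(self, action):
        # Send the action to the robot, grab the next frame, and compute the reward.
        obs = np.zeros(self.observation_space.shape, dtype=np.uint8)
        return obs, 0.0, False, {}

gym.envs.registration.register(id="MyRobotEnv-v0", entry_point=MyRobotEnv)
# Then launch training with --domain_name=MyRobotEnv-v0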

Using Demonstrations

Real-world demonstrations

To use demonstrations, save the (obs, next_obs, actions, rewards, not_dones) demonstration tuple (a tuple of five lists, each of length X) to 0_X.pt, where X is the number of entries saved. Then include the flag --replay_buffer_load_dir=work_directory_path/0_X.pt when launching training. A sketch of this layout follows.
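
A minimal sketch of producing such a file; the shapes, dtypes, and two dummy transitions below are illustrative assumptions, not requirements stated in this README:

import numpy as np
import torch

# Illustrative only: two dummy transitions. In practice, append one entry
# per environment step while recording the demonstration.
obs       = [np.zeros((3, 100, 100), dtype=np.uint8) for _ in range(2)]
next_obs  = [np.zeros((3, 100, 100), dtype=np.uint8) for _ in range(2)]
actions   = [np.zeros(4, dtype=np.float32) for _ in range(2)]
rewards   = [0.0, 1.0]
not_dones = [1.0, 0.0]

X = len(obs)  # number of entries saved
torch.save((obs, next_obs, actions, rewards, not_dones), f"0_{X}.pt")
# Pass --replay_buffer_load_dir=work_directory_path/0_2.pt at training time.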

Sim demonstrations

Our sim experiments use large amounts of demonstrations, which are generated on the fly by an expert policy that uses state input. Include the flags --demo_model_dir=path_to_expert --demo_model_step=X, where the expert policy is saved as path_to_expert/model/actor_X.pt and path_to_expert/model/critic_X.pt. A sketch of this checkpoint layout is shown below.
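
One way this checkpoint layout could be produced; the placeholder networks and the use of state dicts are assumptions, so match however your expert was actually trained:

import os
import torch
import torch.nn as nn

# Placeholder networks; substitute your trained state-input actor and critic.
actor = nn.Linear(10, 4)
critic = nn.Linear(14, 1)

demo_model_dir = "path_to_expert"  # value passed to --demo_model_dir
step = 100000                      # value passed to --demo_model_step

model_dir = os.path.join(demo_model_dir, "model")
os.makedirs(model_dir, exist_ok=True)
torch.save(actor.state_dict(), os.path.join(model_dir, f"actor_{step}.pt"))
torch.save(critic.state_dict(), os.path.join(model_dir, f"critic_{step}.pt"))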

Comments
  • ./scripts/ferm_sim.sh error

    When I run

    ./scripts/ferm_sim.sh

    the following error appears:

    CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling cublasSgemm

    I tried adding torch.cuda.set_device(0), but it did not work.

    opened by leeivan1007 3
  • How to run inference on the existing model

    Hi @albertzhan and @PhilipZRH, thanks for your amazing work.

    Could you please provide detailed instructions on how to run your repository to reproduce the results claimed in the paper with the provided actor_.pth and critic_.pth files?

    Thank you, Manna Akash

    opened by MlLearnerAkash 2
  • How does this framework connect to a real robotic arm?

    It seems that the source code only contains the simulation code. How are the connection to the real manipulator, the acquisition and transmission of data, and the sending of control commands handled?

    opened by borninfreedom 2
  • Wrong expert policy for FetchPush

    In trying to reproduce your experiments, we have run into the agent not learning in the FetchPush environment. The reason seems to be a faulty expert policy for this environment. I have attached a rendering of the policy in action illustrating its random behavior.

    https://user-images.githubusercontent.com/17391760/122923240-c7f75e80-d364-11eb-97d6-c9d818a1e1cc.mp4

    opened by vonHartz 1
  • GLFW and observation types

    Hello,

    I'm trying to reproduce your experiments; however, I run into some issues regarding GLFW.

    It appears that --observation_type pixel is needed; otherwise, there's a problem with the replay buffer. E.g. with human:

    TypeError: Cannot cast array data from dtype('float64') to dtype('uint8') according to the rule 'same_kind'

    When trying to use the pixel observations, though, GlfwContext(offscreen=True) in env_wrapper._get_viewer(...) does not pass for me.

    On Debian 11 (Wayland) without a GPU, I get

    GLFW error (code %d): %s 65545 b'EGL: Failed to find a suitable EGLConfig'
    

    This issue goes away when falling back to Xorg instead of Wayland, though.

    More importantly, on an Ubuntu machine with a GPU (over ssh and with xvfb), I get:

    GLFW error (code %d): %s 65542 b'GLX: No GLXFBConfigs returned'
    GLFW error (code %d): %s 65545 b'GLX: Failed to find a suitable GLXFBConfig'
    

    I've already searched around for a few hours without finding a solution. Any idea what could be wrong?

    Thanks and all the best.

    opened by vonHartz 1
  • Questions about the real environment

    There is no information about the actual environment in the repository, so I have a few questions. How many steps are there per episode, and how often is the reward positive in the sparse-reward case? Also, how many warmup iterations does SAC use?

    opened by bgyooPtr 1
  • Questions on how to use

    I am trying to use your framework in my real environment, but RealRobotEnv-v0 is missing from your code. I am creating my own environment by referring to the RealRobotEnv-v0 domain, so I wish it were added to the code.

    And what is RealArm in the link below? Is it different from RealRobotEnv-v0 in the script? https://github.com/PhilipZRH/ferm/blob/de990bf0c916f9d81861cdff210137fa7f67d0ce/env_wrapper.py#L27

    opened by hahamini 1
  • Questions about pre-training

    Hello. I have several questions. What is the purpose of line 348 of train.py? https://github.com/PhilipZRH/ferm/blob/de990bf0c916f9d81861cdff210137fa7f67d0ce/train.py#L348 Does it create demonstrations and add them to the replay buffer in simulation? Is it unnecessary in a real-robot environment?

    opened by hahamini 0
Owner
Ruihan (Philip) Zhao
Student at UC Berkeley.
Robo Arm :: Rigging is a rigging addon for Blender that helps animating industrial robotic arms.

Robo Arm :: Rigging is a rigging addon for Blender that helps animating industrial robotic arms. It constructs serial links (a kind

null 2 Nov 18, 2021
Example Python code for building RPi-controlled robotic systems

RPi Example Code: Example Python code for building RPi-controlled robotic systems. These Python files have been compiled / developed by the Neurobionics

Elliott Rouse 2 Feb 4, 2022
Scapy: the Python-based interactive packet manipulation program & library. Supports Python 2 & Python 3.

Scapy is a powerful Python-based interactive packet manipulation program and library. It is able to forge or decode packets of a wide number of

SecDev 8.3k Jan 8, 2023
Event-based hardware simulation framework

An event-based multi-device simulation framework providing configuration and orchestration of complex multi-device simulations.

Diamond Light Source Controls Group 3 Feb 1, 2022
A battery pack simulation tool that uses the PyBaMM framework

Overview of liionpack: liionpack takes a 1D PyBaMM model and makes it into a pack. You can either specify the configuration e.g. 16 cells in parallel a

PyBaMM Team 40 Jan 5, 2023
Robot Framework keyword library wrapper for atlassian-python-api

Marcin Koperski 3 Jul 29, 2022
Python information display framework aimed at e-ink devices

My display, using a Raspberry Pi Zero W and Waveshare 6" e-paper hat. infodisplay: Modular information display framework aimed at e-ink devices. Built u

Niek Blankers 3 Apr 8, 2022
This is an incredible led matrix simulation using the ultimate mosaik co-simulation framework.

This project uses the mosaik co-simulation framework, developed by the brilliant developers at the high-ranked Offis institute for computer science, Oldenburg, Germany, to simulate multidimensional LED matrices.

Felix 1 Jan 28, 2022
ManipulaTHOR, a framework that facilitates visual manipulation of objects using a robotic arm

ManipulaTHOR: A Framework for Visual Object Manipulation Kiana Ehsani, Winson Han, Alvaro Herrasti, Eli VanderBilt, Luca Weihs, Eric Kolve, Aniruddha

AI2 65 Dec 30, 2022
Train robotic agents to learn pick and place with deep learning for vision-based manipulation in PyBullet.

Ravens is a collection of simulated tasks in PyBullet for learning vision-based robotic manipulation, with emphasis on pick and place. It features a Gym-like API with 10 tabletop rearrangement tasks, each with (i) a scripted oracle that provides expert demonstrations (for imitation learning), and (ii) reward functions that provide partial credit (for reinforcement learning).

Google Research 367 Jan 9, 2023
CLIPort: What and Where Pathways for Robotic Manipulation

CLIPort: What and Where Pathways for Robotic Manipulation. Mohit Shridhar, Lucas Manuelli, Dieter Fox. CoRL 2021. CLIPort is an end-to-end imitat

null 246 Dec 11, 2022
RGB-stacking 🛑 🟩 🔷 for robotic manipulation

RGB-stacking 🛑 🟩 🔷 for robotic manipulation BLOG | PAPER | VIDEO Beyond Pick-and-Place: Tackling Robotic Stacking of Diverse Shapes, Alex X. Lee*,

DeepMind 95 Dec 23, 2022
A repository of PyBullet utility functions for robotic motion planning, manipulation planning, and task and motion planning

pybullet-planning (previously ss-pybullet) A repository of PyBullet utility functions for robotic motion planning, manipulation planning, and task and

Caelan Garrett 260 Dec 27, 2022
Look Closer: Bridging Egocentric and Third-Person Views with Transformers for Robotic Manipulation

Look Closer: Bridging Egocentric and Third-Person Views with Transformers for Robotic Manipulation Official PyTorch implementation for the paper Look

Rishabh Jangir 20 Nov 24, 2022
ManiSkill-Learn is a framework for training agents on SAPIEN Open-Source Manipulation Skill Challenge (ManiSkill Challenge), a large-scale learning-from-demonstrations benchmark for object manipulation.

ManiSkill-Learn is a framework for training agents on SAPIEN Open-Source Manipulation Skill Challenge, a large-scale learning-from-dem

Hao Su's Lab, UCSD 48 Dec 30, 2022
🤗 The largest hub of ready-to-use NLP datasets for ML models with fast, easy-to-use and efficient data manipulation tools

Hugging Face 15k Jan 2, 2023
Official implementation of "SinIR: Efficient General Image Manipulation with Single Image Reconstruction" (ICML 2021)

SinIR (Official Implementation). Requirements: to install requirements, run pip install -r requirements.txt. We used Python 3.7.4 and f-strings which are in

null 47 Oct 11, 2022
A complete end-to-end demonstration in which we collect training data in Unity and use that data to train a deep neural network to predict the pose of a cube. This model is then deployed in a simulated robotic pick-and-place task.

Object Pose Estimation Demo: This tutorial will go through the steps necessary to perform pose estimation with a UR3 robotic arm in Unity. You’ll gain

Unity Technologies 187 Dec 24, 2022
The ABR Control library is a python package for the control and path planning of robotic arms in real or simulated environments.

The ABR Control library is a Python package for the control and path planning of robotic arms in real or simulated environments. ABR Control provides APIs for the Mujoco, CoppeliaSim (formerly known as VREP), and Pygame simulation environments, and arm configuration files for one, two, and three-joint models, as well as the UR5 and Kinova Jaco 2 arms. Users can also easily extend the package to run with custom arm configurations. ABR Control auto-generates efficient C code for generating the control signals, or uses Mujoco's internal functions to carry out the calculations.

Applied Brain Research 277 Jan 5, 2023
Self-supervised Deep LiDAR Odometry for Robotic Applications

DeLORA: Self-supervised Deep LiDAR Odometry for Robotic Applications. Overview: Paper: link, Video: link, ICRA Presentation: link. This is the correspondin

Robotic Systems Lab - Legged Robotics at ETH Zürich 181 Dec 29, 2022