Yet Another Robotics and Reinforcement (YARR) learning framework for PyTorch.

Overview

Logo Missing

Note: Pirate qualification not needed to use this library.

YARR is Yet Another Robotics and Reinforcement learning framework for PyTorch.

The framework allows for asynchronous training (i.e. agent and learner running in separate processes), which makes it suitable for robot learning. For an example of how to use this framework, see my Attention-driven Robot Manipulation (ARM) repo.
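
As a rough illustration of that split (generic Python multiprocessing only, not YARR's actual classes or API), one process can collect transitions while another consumes them:

# Generic sketch of the agent/learner split; NOT YARR's API.
import multiprocessing as mp
import random


def actor(queue, steps=100):
    # Stand-in for an environment-interaction loop.
    for t in range(steps):
        queue.put({"obs": t, "action": random.random(), "reward": 0.0})
    queue.put(None)  # sentinel: collection finished


def learner(queue):
    # Stand-in for a training loop draining collected transitions.
    while True:
        transition = queue.get()
        if transition is None:
            break
        # replay-buffer insert + gradient step would go here


if __name__ == "__main__":
    q = mp.Queue()
    workers = [mp.Process(target=actor, args=(q,)),
               mp.Process(target=learner, args=(q,))]
    for p in workers:
        p.start()
    for p in workers:
        p.join()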

This project is mostly intended for my personal use and to facilitate my research.

Install

Ensure you have PyTorch installed. Then simply run:

pip install .
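
After installing, a quick sanity check (the top-level package name yarr matches the tracebacks quoted in the comments below):

# Verify that PyTorch and the installed package import cleanly.
import torch
import yarr  # installed by `pip install .`

print(torch.__version__)
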
Comments
  • Question about EnvRunner: multiple environments but one buffer

    Hi, Thanks a ton for sharing your amazing framework.

    I have one question regarding the EnvRunner class. I apologize if I misunderstood something and the question is not relevant.

    The EnvRunner class allows multiple processes to collect samples in separate training environments, but they all seem to add their samples to the same replay buffer. Doesn't this cause a problem when we sample from this buffer (contiguous samples in the buffer are no longer sequentially tied)?

    Thanks!

    opened by telejesus2 1
  • Suppress warning when initialize tensor from list of ndarray

    <stdin>:1: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at  ../torch/csrc/utils/tensor_new.cpp:201.)
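
    A typical fix for this warning (plain NumPy/PyTorch usage, not a YARR-specific API) is to stack the list into a single ndarray before converting:

    import numpy as np
    import torch

    arrays = [np.zeros(3, dtype=np.float32), np.ones(3, dtype=np.float32)]
    slow = torch.tensor(arrays)                # list of ndarrays: triggers the warning
    fast = torch.from_numpy(np.stack(arrays))  # stack first, then convert: no warning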
    
    opened by wkentaro 0
  • Install natsort

    To avoid the following error:

    Traceback (most recent call last):
      File "./train.py", line 15, in <module>
        from yarr.replay_buffer.prioritized_replay_buffer import (
      File "/home/wkentaro/projectX/.anaconda3/lib/python3.7/site-packages/yarr/replay_buffer/prioritized_replay_buffer.py", line 10, in <module>
        from .uniform_replay_buffer import *
      File "/home/wkentaro/projectX/.anaconda3/lib/python3.7/site-packages/yarr/replay_buffer/uniform_replay_buffer.py", line 22, in <module>
        from natsort import natsort
    ModuleNotFoundError: No module named 'natsort'
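
    For reference, installing the missing dependency with pip install natsort resolves the error; the import used by uniform_replay_buffer.py can then be checked directly:

    from natsort import natsort  # the same import that failed above
    print(natsort.natsorted(["step10", "step2"]))  # -> ['step2', 'step10']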
    
    opened by wkentaro 0
  • Pass transition.info from env.step to ReplayTransition

    Same as multi_task_rollout_generator

    https://github.com/stepjam/YARR/blob/ff2128efc7172166e9985cca1310725ff9384d29/yarr/utils/multi_task_rollout_generator.py#L53-L59

    opened by wkentaro 0
  • Compatibility with a debugger

    Hi!

    Thanks for this package. It is remarkably well written!

    However, it seems that YARR necessarily creates new processes, while a debugger such as pdb does not work at all with subprocesses.

    What I'd basically like to do is step through my RL algorithm line by line. Is there a way to do that?

    Thanks, PL

    opened by guhur 0
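
    A commonly shared workaround for the debugger question above (a generic Python recipe, not part of YARR) is a pdb subclass that re-attaches the terminal's stdin inside a forked child process on POSIX systems:

    import pdb
    import sys

    class ForkedPdb(pdb.Pdb):
        # A Pdb variant that can be used from within a forked worker process.
        def interaction(self, *args, **kwargs):
            _stdin = sys.stdin
            try:
                sys.stdin = open("/dev/stdin")  # re-attach the controlling terminal
                super().interaction(*args, **kwargs)
            finally:
                sys.stdin = _stdin

    # Inside worker code, call ForkedPdb().set_trace() where a breakpoint is needed.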
Owner
Stephen James
Postdoc
Yet Another Reinforcement Learning Tutorial

This repo contains self-contained RL implementations

Sungjoon 65 Dec 10, 2022
An image base contains 490 images for learning (400 cars and 90 boats), and another 21 images for testing

SVM Data An image base contains 490 images for training (400 cars and 90 boats), and another 21 images for testing. Preprocess

Achraf Rahouti 3 Nov 30, 2021
Yet another video caption

Yet another video caption

Fan Zhimin 5 May 26, 2022
Bonnet: An Open-Source Training and Deployment Framework for Semantic Segmentation in Robotics.

Bonnet: An Open-Source Training and Deployment Framework for Semantic Segmentation in Robotics. By Andres Milioto @ University of Bonn. (for the new P

Photogrammetry & Robotics Bonn 314 Dec 30, 2022
Pytorch code for ICRA'21 paper: "Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation"

Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation This repository is the pytorch implementation of our paper: Hierarchical Cr

null 43 Nov 21, 2022
Conservative Q Learning for Offline Reinforcement Learning in JAX

CQL-JAX This repository implements Conservative Q Learning for Offline Reinforcement Learning in JAX (FLAX). Implementation is built on

Karush Suri 8 Nov 7, 2022
Reinforcement-learning - Repository of the class assignment questions for the course on reinforcement learning

DSE 314/614: Reinforcement Learning This repository containing reinforcement lea

Manav Mishra 4 Apr 15, 2022
Libraries, tools and tasks created and used at DeepMind Robotics.

Libraries, tools and tasks created and used at DeepMind Robotics.

DeepMind 270 Nov 30, 2022
Robotics with GPU computing

Robotics with GPU computing Cupoch is a library that implements rapid 3D data processing for robotics using CUDA. The goal of this library is to imple

Shirokuma 625 Jan 7, 2023
The Generic Manipulation Driver Package - Implements a ROS Interface over the robotics toolbox for Python

Armer Driver Armer aims to provide an interface layer between the hardware drivers of a robotic arm giving the user control in several ways: Joint vel

QUT Centre for Robotics (QCR) 13 Nov 26, 2022
Reinforcement learning framework and algorithms implemented in PyTorch.

Reinforcement learning framework and algorithms implemented in PyTorch.

Robotic AI & Learning Lab Berkeley 2.1k Jan 4, 2023
Reinforcement learning library (framework) designed for PyTorch, implements DQN, DDPG, A2C, PPO, SAC, MADDPG, A3C, APEX, IMPALA ...

Automatic, Readable, Reusable, Extendable Machin is a reinforcement library designed for pytorch. Build status Platform Status Linux Windows Supported

Iffi 348 Dec 24, 2022
Learning to trade under the reinforcement learning framework

Trading Using Q-Learning In this project, I will present an adaptive learning model to trade a single stock under the reinforcement learning framework

Uirá Caiado 470 Nov 28, 2022
Another pytorch implementation of FCN (Fully Convolutional Networks)

FCN-pytorch-easiest Trying to be the easiest FCN pytorch implementation and just in a get and use fashion Here I use a handbag semantic segmentation f

Y. Dong 158 Dec 21, 2022
PyTorch implementation of Advantage Actor Critic (A2C), Proximal Policy Optimization (PPO), Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation (ACKTR) and Generative Adversarial Imitation Learning (GAIL).

PyTorch implementation of Advantage Actor Critic (A2C), Proximal Policy Optimization (PPO), Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation (ACKTR) and Generative Adversarial Imitation Learning (GAIL).

Ilya Kostrikov 3k Dec 31, 2022
A clear, concise, simple yet powerful and efficient API for deep learning.

The Gluon API Specification The Gluon API specification is an effort to improve speed, flexibility, and accessibility of deep learning technology for

Gluon API 2.3k Dec 17, 2022