BasicRL: easy and fundamental codes for deep reinforcement learning. It is an improvement on rainbow-is-all-you-need and OpenAI Spinning Up.

Overview

BasicRL: easy and fundamental codes for deep reinforcement learning

BasicRL is an improvement on rainbow-is-all-you-need and OpenAI Spinning Up.

It is developed for beginners in DRL and has the following advantages:

  • Practical: it fills the gap between the theory and practice of DRL.
  • Easy: the code is simpler than OpenAI Spinning Up while achieving the same functionality.
  • Lightweight: the core code is under 1,500 lines, using only PyTorch and OpenAI Gym.

BasicRL contains the following DRL algorithms:

  • DQN, DoubleDQN, DuelingDQN, NoisyDQN, DistributionalDQN
  • REINFORCE, VPG, PPO, DDPG, TD3 and SAC
  • PerDQN, N-step-learning DQN and Rainbow are coming soon

The differences compared to OpenAI Spinning Up:

  • Pros: BasicRL currently runs on Windows and Linux (it has not been extensively tested on OSX), whereas Spinning Up only supports Linux and OSX.
  • Cons: BasicRL does not use OpenMPI, so it is slower than Spinning Up.
  • Others: BasicRL implements each agent as a class.

The differences compared to rainbow-is-all-you-need:

  • Pros: BasicRL reuses common code, so it is lightweight. Besides, BasicRL changes the output and plotting format, so it can use Spinning Up's log-file format.
  • Others: BasicRL uses class inheritance, so you can see the key differences between algorithms at a glance (a minimal sketch follows below).
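
To make that concrete, here is a minimal sketch of the inheritance idea. The class and method names are illustrative, not the exact ones used in the repository: Double DQN only overrides the loss computation of the base DQN agent, so the overridden method is exactly where the two algorithms differ.

import torch
import torch.nn as nn
import torch.nn.functional as F


class DQNAgent:
    """Shared machinery for all DQN variants (illustrative sketch)."""

    def __init__(self, obs_dim, act_dim, gamma=0.99):
        self.gamma = gamma
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, act_dim))
        self.target_net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, act_dim))
        self.target_net.load_state_dict(self.net.state_dict())

    def compute_loss(self, obs, act, rew, next_obs, done):
        # Vanilla DQN target: r + gamma * max_a' Q_target(s', a')
        q = self.net(obs).gather(1, act)
        with torch.no_grad():
            next_q = self.target_net(next_obs).max(dim=1, keepdim=True)[0]
            target = rew + self.gamma * (1.0 - done) * next_q
        return F.smooth_l1_loss(q, target)


class DoubleDQNAgent(DQNAgent):
    """Double DQN changes only the bootstrap target, so only one method is overridden."""

    def compute_loss(self, obs, act, rew, next_obs, done):
        q = self.net(obs).gather(1, act)
        with torch.no_grad():
            # Pick the greedy action with the online network, evaluate it with the target network.
            next_act = self.net(next_obs).argmax(dim=1, keepdim=True)
            next_q = self.target_net(next_obs).gather(1, next_act)
            target = rew + self.gamma * (1.0 - done) * next_q
        return F.smooth_l1_loss(q, target)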

File Structure

BasicRL:

├─pg    
│  └─reinforce/vpg/ppo/ddpg/td3/sac.py    
│  └─utils.py      
│  └─logx.py     
├─pg_cpu     
│  └─reinforce/vpg/ppo/ddpg/td3/sac.py  
│  └─utils.py  
│  └─logx.py  
├─rainbow     
│  └─dqn/double_dqn/dueling_dqn/noisy_dqn/distributional_dqn.py  
│  └─utils.py   
│  └─logx.py   
├─requirements.txt  
└─plot.py

Code Structure

Core code

xxx.py (e.g. dqn.py)

- agent class:
  - init
  - compute loss
  - update
  - get action
  - test agent
  - train
- main
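
Every algorithm file follows this same skeleton, so switching between algorithms means reading one file with a familiar shape. The outline below is a hedged sketch of that structure; the names are illustrative, not the exact signatures in the repository.

import gym


class Agent:
    """Skeleton shared by every algorithm file (illustrative names)."""

    def __init__(self, env, **hyperparams):
        self.env = env  # OpenAI Gym environment
        # build networks, optimizers and the replay buffer here

    def compute_loss(self, batch):
        """Algorithm-specific loss for one sampled batch."""
        raise NotImplementedError

    def update(self, batch):
        """One gradient step on the loss returned by compute_loss."""
        loss = self.compute_loss(batch)
        # optimizer.zero_grad(); loss.backward(); optimizer.step()

    def get_action(self, obs):
        """Select an action (e.g. epsilon-greedy or sampled from the policy)."""
        raise NotImplementedError

    def test_agent(self, num_episodes=10):
        """Run evaluation episodes without exploration noise."""

    def train(self, epochs=100):
        """Main interaction/update loop; diagnostics go to the EpochLogger."""


def main():
    env = gym.make("CartPole-v0")
    agent = Agent(env)
    agent.train()


if __name__ == "__main__":
    main()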

Common code

utils.py

- experience replay buffer: on-policy/off-policy replay buffer (a minimal off-policy sketch follows below)
- network  
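
As a rough idea of what the off-policy buffer does, here is a minimal sketch with assumed field names, not the exact implementation in utils.py:

import numpy as np


class ReplayBuffer:
    """Minimal off-policy experience replay buffer (sketch)."""

    def __init__(self, obs_dim, size):
        self.obs = np.zeros((size, obs_dim), dtype=np.float32)
        self.next_obs = np.zeros((size, obs_dim), dtype=np.float32)
        self.act = np.zeros(size, dtype=np.int64)
        self.rew = np.zeros(size, dtype=np.float32)
        self.done = np.zeros(size, dtype=np.float32)
        self.ptr, self.size, self.max_size = 0, 0, size

    def store(self, obs, act, rew, next_obs, done):
        self.obs[self.ptr] = obs
        self.act[self.ptr] = act
        self.rew[self.ptr] = rew
        self.next_obs[self.ptr] = next_obs
        self.done[self.ptr] = done
        self.ptr = (self.ptr + 1) % self.max_size   # overwrite the oldest entry when full
        self.size = min(self.size + 1, self.max_size)

    def sample(self, batch_size=32):
        idx = np.random.randint(0, self.size, size=batch_size)
        return dict(obs=self.obs[idx], act=self.act[idx], rew=self.rew[idx],
                    next_obs=self.next_obs[idx], done=self.done[idx])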

logx.py

- Logger
- EpochLogger
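
Both classes follow Spinning Up's logx, so typical usage looks roughly like the snippet below (this assumes the Spinning Up interface is unchanged; check logx.py for the exact signatures):

from logx import EpochLogger

logger = EpochLogger(output_dir='data/dqn_cartpole', exp_name='dqn_cartpole')

for epoch in range(10):
    # inside the training loop: accumulate per-step diagnostics
    logger.store(EpRet=123.4, LossQ=0.05)

    # at the end of each epoch: write one row of the tabular log file
    logger.log_tabular('Epoch', epoch)
    logger.log_tabular('EpRet', with_min_and_max=True)
    logger.log_tabular('LossQ', average_only=True)
    logger.dump_tabular()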

plot.py

- plot data
- get datasets
- get all datasets
- make plots
- main
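
Because the log files follow Spinning Up's format, plotting a finished run is expected to work roughly like Spinning Up's plotter. The command below is an assumed invocation with Spinning Up-style flags, not verified against plot.py; check plot.py for the real arguments:

python plot.py data/dqn_cartpole --xaxis Epoch --value AverageEpRet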

Installation

BasicRL is tested in an Anaconda virtual environment with Python 3.7+:

conda create -n BasicRL python=3.7
conda activate BasicRL

Clone the repository:

git clone git@github.com:RayYoh/BasicRL.git
cd BasicRL

Install required libraries:

pip install -r requirements.txt

The BasicRL code library makes local experiments easy to run, and there are two ways to launch them: from the command line, or through function calls in scripts (see the sketch below).
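
For example, from the command line (assuming the algorithm file's main entry point is used directly):

python rainbow/dqn.py

or inside a Python script (the import path and class name here are hypothetical; check the file for the actual ones):

import gym
from rainbow.dqn import DQNAgent  # hypothetical import path and class name

agent = DQNAgent(gym.make('CartPole-v0'))
agent.train()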

Experiment

After testing, BasicRL runs correctly, but its performance has not been benchmarked. Users can tweak the parameters and change the experimental environment, then compare the final results. Possible outputs are shown below:

(Example result plots for dqn and pg.)

Contribution

BasicRL is not yet complete and I will continue to maintain it. Anyone interested in making BasicRL better is warmly welcome to contribute. If you want to contribute, please send a Pull Request.
If you are not familiar with creating a Pull Request, here are some guides:

Related Link

Citation

To cite this repository:

@misc{lei,
  author = {Lei Yao},
  title = {BasicRL: easy and fundamental codes for deep reinforcement learning},
  year = {2021},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/RayYoh/BasicRL}},
}