RL Algorithms with examples in Python / PyTorch / Unity ML Agents

Overview

Reinforcement Learning Project

This project was created to make it easier to get started with Reinforcement Learning. It now contains:

Getting Started

Install Basic Dependencies

To set up your python environment to run the code in the notebooks, follow the instructions below.

  • If you're on Windows, I recommend installing Miniforge. It's a minimal installer for Conda. I also recommend using the Mamba package manager instead of Conda. It works almost the same as Conda, only faster. There's a cheat sheet of Conda commands, which also work in Mamba. To install Mamba, use this command:
conda install mamba -n base -c conda-forge 
  • Create (and activate) a new environment with Python 3.6 or later. I recommend using Python 3.9:

    • Linux or Mac:
    mamba create --name rl39 python=3.9 numpy
    source activate rl39
    • Windows:
    mamba create --name rl39 python=3.9 numpy
    activate rl39
  • Install PyTorch by following the instructions on pytorch.org. For example, to install PyTorch on Windows with GPU support, use this command (a quick check of the installation is sketched after these steps):

mamba install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
  • Install additional packages:
mamba install jupyter notebook matplotlib
python -m ipykernel install --user --name rl39 --display-name "rl39"
  • Change the kernel to match the rl39 environment by using the drop-down menu Kernel -> Change kernel inside Jupyter Notebook.
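
After completing the steps above, a quick sanity check (a minimal sketch, assuming the rl39 environment and its Jupyter kernel are active) confirms that PyTorch is installed and can see the GPU:

import torch

print(torch.__version__)              # the installed PyTorch version
print(torch.cuda.is_available())      # True if CUDA support was installed correctly

# Pick the GPU when it's available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)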

Install Unity Machine Learning Agents

Note: In order to run the notebooks on Windows, it's not necessary to install the Unity Editor, because I have provided the standalone executables of the environments for you.

Unity ML Agents is the software that we use for the environments. The agents that we create in Python can interact with these environments. Unity ML Agents consists of several parts:

  • The Unity Editor is used for creating environments. To install:

    • Install Unity Hub.
    • Install the latest version of Unity by clicking the green Unity Hub button on the download page.

    To start the Unity editor you must first have a project:

    • Start the Unity Hub.
    • Click on "Projects"
    • Create a new dummy project.
    • Click on the project you've just added in the Unity Hub. The Unity Editor should start now.
  • The Unity ML-Agents Toolkit. Download the latest release of the source code or use the Git command: git clone --branch release_18 https://github.com/Unity-Technologies/ml-agents.git.

  • The Unity ML Agents package is used inside the Unity Editor. Please read the instructions for installation.

  • The mlagents Python package is used as a bridge between Python and the Unity editor (or standalone executable). To install, use this command: python -m pip install mlagents==0.27.0. Please note that there's no conda package available for this.
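
As an illustration of this bridge, below is a minimal sketch that uses the low-level mlagents_envs API (installed as a dependency of mlagents). Passing file_name=None connects to the Unity Editor once you press Play; passing a path loads a standalone executable instead. The loop simply sends random actions, so it is not specific to any particular environment:

from mlagents_envs.environment import UnityEnvironment

# file_name=None connects to the Unity Editor (press Play after this call);
# pass the path of a standalone executable to load that instead.
env = UnityEnvironment(file_name=None, seed=1)
env.reset()

# Every Agent type in the scene is exposed as a "behavior".
behavior_name = list(env.behavior_specs)[0]
spec = env.behavior_specs[behavior_name]

for _ in range(10):
    # Agents that request an action this step, and agents whose episode ended.
    decision_steps, terminal_steps = env.get_steps(behavior_name)
    # Send a random action to every agent that requested a decision.
    action = spec.action_spec.random_action(len(decision_steps))
    env.set_actions(behavior_name, action)
    env.step()

env.close()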

Install an IDE for Python

For Windows, I would recommend using PyCharm (my choice) or Visual Studio Code. Inside those IDEs you can use the Conda environment you have just created.

Creating a custom Unity executable

Load the examples project

The Unity ML-Agents Toolkit contains several example environments. Here we will load them all inside the Unity editor:

  • Start the Unity Hub.
  • Click on "Projects"
  • Add a project by navigating to the Project folder inside the toolkit.
  • Click on the project you've just added in the Unity Hub. The Unity Editor should start now.

Create a 3D Ball executable

The 3D Ball example contains 12 environments in a single scene, but this doesn't work very well with the Python API: there is no way to reset each environment individually. Therefore, we will remove the other 11 environments in the editor (a sketch of what the Python API sees follows these steps):

  • Load the 3D Ball scene by going to the Project window and navigating to Examples -> 3DBall -> Scenes -> 3DBall
  • In the Hierarchy window select the other 11 3DBall objects and delete them, so that only the 3DBall object remains.
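
To see why this matters, here is a rough sketch (using the same low-level API as above) of what Python observes when the scene still contains all 12 areas: the agents all share a single behavior, and env.reset() restarts the whole scene, so there is no call that resets one area on its own:

from mlagents_envs.environment import UnityEnvironment

env = UnityEnvironment(file_name=None)  # connect to the editor with the unmodified scene
env.reset()

behavior_name = list(env.behavior_specs)[0]
decision_steps, _ = env.get_steps(behavior_name)

# All areas share one behavior name; right after a reset you typically see
# all 12 agent ids requesting a decision at the same time.
print(len(decision_steps), decision_steps.agent_id)

# reset() acts on the entire scene: every area restarts together.
env.reset()
env.close()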

Next, we will build the executable:

  • Go to File -> Build Settings
  • In the Build Settings window, click Build
  • Navigate to the notebooks folder and add 3DBall to the folder name that is used for the build.

Instructions for running the notebooks

  1. Download the Unity executables for Windows. If you're not on Windows, you have to build the executables yourself by following the instructions above.
  2. Place the Unity executable folders in the same folder as the notebooks.
  3. Load a notebook with Jupyter Notebook. (The command to start Jupyter Notebook is jupyter notebook.)
  4. Follow further instructions in the notebook.
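
Inside a notebook, opening one of the standalone executables typically looks like the sketch below. The path is only an assumption; replace it with the executable that your build (or the downloaded folder) actually contains:

from mlagents_envs.environment import UnityEnvironment

# The build folder sits next to the notebook; adjust the file name to your build.
env = UnityEnvironment(file_name="3DBall/UnityEnvironment.exe")
env.reset()
print(list(env.behavior_specs))  # lists the available behavior names
env.close()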