Model-based reinforcement learning in TensorFlow

Overview

Bellman

Website | Twitter | Documentation (latest)

What does Bellman do?

Bellman is a package for model-based reinforcement learning (MBRL) in Python, using TensorFlow and building on top of the model-free reinforcement learning package TensorFlow Agents (TF-Agents).

Bellman provides a framework for flexible composition of model-based reinforcement learning algorithms. It offers two major classes of algorithms: decision-time planning and background planning. Within each class, any supervised learning method can be used to learn a component of the environment. Bellman was designed with modularity in mind: key components can be combined flexibly, such as the decision-time planning method (e.g. the cross-entropy method or random shooting) and the type of state-transition model (e.g. a probabilistic neural network or an ensemble of neural networks). Bellman also provides implementations of several popular state-of-the-art MBRL algorithms, such as PETS, MBPO and METRPO. The online documentation (latest) contains more details.
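
To make the decision-time planning idea concrete, below is a minimal, self-contained sketch of a cross-entropy-method planner rolling out a learned transition model. It illustrates the general technique only: the function names, the toy dynamics and reward, and all the numbers are assumptions made for the example, not Bellman's actual API (see the documentation for that).

# Illustrative sketch of decision-time planning with the cross-entropy method (CEM).
# Everything here is hypothetical example code, not the Bellman interface.
import numpy as np

def learned_dynamics(state, action):
    # Stand-in for a learned transition model (e.g. a probabilistic network or an ensemble).
    return state + 0.1 * action

def reward_fn(state, action):
    # Known reward function (Bellman currently assumes rewards are known).
    return -np.sum(state ** 2) - 0.01 * np.sum(action ** 2)

def cem_plan(state, horizon=10, pop=200, n_elites=20, iters=5, action_dim=1):
    mean = np.zeros((horizon, action_dim))
    std = np.ones((horizon, action_dim))
    for _ in range(iters):
        # Sample candidate action sequences and score them through the learned model.
        candidates = mean + std * np.random.randn(pop, horizon, action_dim)
        returns = []
        for plan in candidates:
            s, total = state.copy(), 0.0
            for a in plan:
                total += reward_fn(s, a)
                s = learned_dynamics(s, a)
            returns.append(total)
        # Refit the sampling distribution to the best-scoring sequences.
        elites = candidates[np.argsort(returns)[-n_elites:]]
        mean, std = elites.mean(axis=0), elites.std(axis=0)
    return mean[0]  # execute only the first planned action (MPC style)

first_action = cem_plan(np.array([1.0]))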

Bellman requires Python 3.7 onwards and uses TensorFlow 2.4+ for running computations, which allows fast execution on GPUs.
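
If you want to confirm that TensorFlow can see a GPU, the standard TensorFlow check works; this is plain TensorFlow and nothing Bellman-specific:

# Lists the GPUs visible to TensorFlow; an empty list means computations will run on the CPU.
import tensorflow as tf
print(tf.config.list_physical_devices("GPU"))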

Maintainers

Bellman was originally created by (in alphabetical order) Vincent Adam, Jordi Grau-Moya, Felix Leibfried, John A. McLeod, Hrvoje Stojic, and Peter Vrancx, at Secondmind Labs.

It is now actively maintained by (in alphabetical order) Felix Leibfried, John A. McLeod, Hrvoje Stojic, and Peter Vrancx.

Bellman is an open source project. If you have relevant skills and are interested in contributing then please do contact us (see "The Bellman Community" section below).

We are very grateful to our Secondmind Labs colleagues, maintainers of GPflow and Trieste in particular, for their help with creating contributing guidelines, instructions for users and open-sourcing in general.

Install Bellman

For users

For the latest (stable) release from PyPI, you can use pip to install the toolbox

$ pip install bellman

To install the toolbox from the latest source on GitHub, check out the develop branch of the Bellman GitHub repository, and in the repository root run

$ pip install -e .

This will install the toolbox in editable mode.
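
As a quick sanity check of either install route, try importing the package (this assumes it is importable under the name bellman, matching the PyPI package name):

# Minimal import check; prints where the package was installed from.
import bellman
print(bellman.__file__)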

For contributors

If you wish to contribute, please use Poetry to manage dependencies in a local virtual environment. The Poetry configuration file specifies all the development dependencies (testing, linting, typing, docs, etc.) and makes it much easier to contribute. To install Poetry, follow the instructions in the Poetry documentation.

To install this project in editable mode, run the commands below from the root directory of the bellman repository.

poetry install

This command creates a virtual environment for this project in a hidden .venv directory under the root directory. You can easily activate it with

poetry shell

You must also run the poetry install command to install updated dependencies when the pyproject.toml file is updated, for example after a git pull.

Installing MuJoCo (Optional)

Many continuous-control benchmarks in MBRL use the MuJoCo physics engine, and some of the TF-Agents examples have been tested against MuJoCo environments as well. MuJoCo is proprietary software that requires a license (see the MuJoCo website), so installing it is optional, but because of its importance to the research community it is highly recommended. If you decide not to install MuJoCo, don't worry: all our examples and notebooks rely on standard environments available in OpenAI Gym.

We interface with MuJoCo through the Python library mujoco-py via OpenAI Gym (see the mujoco-py GitHub page). Check the installation instructions there for how to install MuJoCo. Note that you should install MuJoCo 1.5, since that is the version OpenAI Gym supports. After that you can install the mujoco-py library with an additional Poetry command:

poetry install -E mujoco-py

If this command fails, please check the troubleshooting sections on the mujoco-py GitHub page; you might need to satisfy other mujoco-py dependencies (e.g. Linux system libraries) or set some environment variables.
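
Once mujoco-py is installed, one quick way to confirm the setup is to instantiate a MuJoCo-backed Gym environment. HalfCheetah-v2 is used below purely as an example; any MuJoCo environment will do, and the snippet uses the classic Gym step API:

# Quick check that MuJoCo, mujoco-py and Gym work together.
import gym

env = gym.make("HalfCheetah-v2")  # any MuJoCo-backed environment works here
observation = env.reset()
observation, reward, done, info = env.step(env.action_space.sample())
env.close()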

The Bellman Community

Getting help

Bugs, feature requests, pain points, annoying design quirks, etc: Please use GitHub issues to flag up bugs/issues/pain points, suggest new features, and discuss anything else related to the use of Bellman that in some sense involves changing the Bellman code itself. We positively welcome comments or concerns about usability, and suggestions for changes at any level of design. We aim to respond to issues promptly, but if you believe we may have forgotten about an issue, please feel free to add another comment to remind us.

"How-to-use" questions: Please use Stack Overflow (Bellman tag) to ask questions that relate to "how to use Bellman", i.e. questions of understanding rather than issues that require changing Bellman code. (If you are unsure where to ask, you are always welcome to open a GitHub issue; we may then ask you to move your question to Stack Overflow.)

Slack workspace

We have a public Bellman Slack workspace. Please use this invite link if you'd like to join, whether to ask short informal questions or to get involved in the discussion and future development of Bellman.

Contributing

All constructive input is very much welcome. For detailed information, see the guidelines for contributors.

Citing Bellman

To cite Bellman, please reference our arXiv paper, where we review the framework and describe its design. A sample BibTeX entry is given below:

@article{bellman2021,
    author = {McLeod, John and Stojic, Hrvoje and Adam, Vincent and Kim, Dongho and Grau-Moya, Jordi and Vrancx, Peter and Leibfried, Felix},
    title = {Bellman: A Toolbox for Model-based Reinforcement Learning in TensorFlow},
    year = {2021},
    journal = {arXiv:2103.14407},
    url = {https://arxiv.org/abs/2103.14407}
}

License

Apache License 2.0

Comments
  • Dongho/tensorflow 2.5

    PR type: bugfix / enhancement / new feature / doc improvement

    Related issue(s)/PRs:

    Summary

    Proposed changes

    • Quick fix to setup.py to bump the tensorflow version and other related packages

    What alternatives have you considered?

    Minimal working example

    PR checklist

    • [ ] New features: code is well-documented
      • [ ] detailed docstrings (API documentation)
      • [ ] notebook examples (usage demonstration)
    • [ ] The bug case / new feature is covered by unit tests
    • [ ] Code has type annotations
    • [ ] I ran the black+isort formatter
    • [ ] I locally tested that the tests pass

    Release notes

    Fully backwards compatible: yes

    If not, why is it worth breaking backwards compatibility:

    Commit message (for release notes):

    • Quick fix for setup.py
    opened by dongho-kim 1
  • setting things up for pypi

    2 things I would need some help with:

    • pyproject.toml - [build-system] currently points to poetry, is that fine for building a package for pip?
    • I'm not convinced we need all the libraries listed in install_requires in setup.py - @johnamcleod you were taking care of dependencies before, can you give a hand here please?

    I have set up a workflow for pushing things to PyPI automatically; not sure how to test it though (hm, perhaps I could modify it to use test PyPI...). I will first push things to test PyPI, to verify things work as intended.

    enhancement 
    opened by hstojic 1
  • Dongho/tensorflow 2.5

    PR type: enhancement

    Related issue(s)/PRs:

    Summary

    Proposed changes

    • Support tensorflow 2.5, tf-agents 0.8.0 and tensorflow-probability 0.12.2
    • Fixes for test errors which possibly occur in Mac (incl. Apple Silicon) environments

    What alternatives have you considered?

    Minimal working example

    NA as no new features added

    PR checklist

    • [ ] New features: code is well-documented
      • [ ] detailed docstrings (API documentation)
      • [ ] notebook examples (usage demonstration)
    • [ ] The bug case / new feature is covered by unit tests
    • [ ] Code has type annotations
    • [ ] I ran the black+isort formatter
    • [X] I locally tested that the tests pass

    Release notes

    Fully backwards compatible: no

    If not, why is it worth breaking backwards compatibility:

    Changes in TFAgent.__init__ introduced in later tf-agents versions seem to break backwards compatibility, causing errors when we pass TRAIN_ARGSPEC. However, this is worth breaking due to the security vulnerability in tensorflow 2.4.0.

    Commit message (for release notes):

    • Support tensorflow 2.5, tf-agents 0.8.0 and tensorflow-probability 0.12.2
    enhancement good first issue 
    opened by dongho-kim 0
  • Add MBPO train_eval function

    PR type: enhancement

    Related issue(s)/PRs: fix #24

    Summary

    Proposed changes

    The MBPO agent does not have a train_eval function in the benchmark package. This PR fixes that.

    What alternatives have you considered?

    Minimal working example

    Look at the run_mbpo example.

    Release notes

    Fully backwards compatible: yes

    If not, why is it worth breaking backwards compatibility:

    Commit message (for release notes):

    • Add a train_eval function for the MBPO agent.
    enhancement 
    opened by johnamcleod 0
  • John/fix none loss in harness

    PR type: bugfix

    Related issue(s)/PRs: N/A

    Summary

    Proposed changes

    There is an integration issue between the TFTrainingScheduler and the ExperimentHarness: if the call to the agent trainer's train_step method returns None for the loss, the harness throws an exception when trying to write the logs. This situation can occur when too few environment steps have passed to train a model-free agent component of a model-based agent.

    This PR addresses the issue by intercepting the None loss from the agent trainer in the scheduler and not adding it to the training_info dictionary.
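
    The guard looks roughly like the sketch below; the function and variable names are illustrative assumptions, not the actual scheduler code:

    # Illustrative sketch only -- names are hypothetical, not the real TFTrainingScheduler code.
    def collect_training_info(agent_losses):
        """Drop None losses so the experiment harness never tries to log them."""
        training_info = {}
        for agent_id, loss in agent_losses.items():
            if loss is None:
                # This agent has not seen enough environment steps to train yet; skip it.
                continue
            training_info[agent_id] = loss
        return training_info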

    Minimal working example

    The run_mbpo example hits this problem on the first environment time step.

    PR checklist

    • [ ] New features: code is well-documented
      • [ ] detailed docstrings (API documentation)
      • [ ] notebook examples (usage demonstration)
    • [x] The bug case / new feature is covered by unit tests
    • [x] Code has type annotations
    • [x] I ran the black+isort formatter
    • [x] I locally tested that the tests pass

    Release notes

    Fully backwards compatible: yes

    If not, why is it worth breaking backwards compatibility:

    Commit message (for release notes):

    • ...
    bug 
    opened by johnamcleod 0
  • upload-pypi.yaml fails on `main`

    The GH action fails at the "Verify git tag vs. VERSION" step: the $GITHUB_REF env variable seems to come with a refs/tags/ prefix, which the code does not allow for - here is a solution: https://github.community/t/how-to-get-just-the-tag-name/16241

    bug 
    opened by hstojic 0
  • Release/0.1.0

    Updated develop with a few small corrections for merging into main as (pre-)release 0.1.0. It seems we can then create a release out of that version of main on GH with a description of the changelog. That should create a tag.

    release 
    opened by hstojic 0
  • Hstojic/trigger docs

    Modified a GitHub action to trigger generating documentation in the website repo instead: the action sends an event that an action in the website repo is listening to. Tested and it seems to work; check https://belman.dev/docs

    see:

    • https://docs.github.com/en/actions/reference/events-that-trigger-workflows#external-events-repository_dispatch
    • https://docs.github.com/en/rest/reference/repos#create-a-repository-dispatch-event
    • https://docs.github.com/en/developers/webhooks-and-events/webhook-events-and-payloads#repository_dispatch
    documentation enhancement 
    opened by hstojic 0
  • Felix/initial commit

    PR type: bugfix / enhancement / new feature / doc improvement

    Related issue(s)/PRs:

    Summary

    Proposed changes

    • ...
    • ...
    • ...

    What alternatives have you considered?

    Minimal working example

    # Put your example code in here
    

    PR checklist

    • [ ] New features: code is well-documented
      • [ ] detailed docstrings (API documentation)
      • [ ] notebook examples (usage demonstration)
    • [ ] The bug case / new feature is covered by unit tests
    • [ ] Code has type annotations
    • [ ] I ran the black+isort formatter
    • [ ] I locally tested that the tests pass

    Release notes

    Fully backwards compatible: yes / no

    If not, why is it worth breaking backwards compatibility:

    Commit message (for release notes):

    • ...
    opened by fleibfried 0
  • poetry task check_requirements

    Feature request

    Contrary to the description in CONTRIBUTING.md, it doesn't seem that we can run poetry run task check_requirements, as the task doesn't seem to be defined anywhere. It would be great to add this feature back.

    Motivation

    Is your feature request related to a problem?

    It is unclear how to automatically update setup.py when we update poetry.

    Proposal

    Describe the solution you would like

    What alternatives have you considered?

    Are you willing to open a pull request? (We really appreciate contributions!)

    Additional context

    enhancement 
    opened by dongho-kim 0
Releases(v0.1.0)
  • v0.1.0(Apr 7, 2021)

    First release, 0.1.0

    (well, a pre-release actually :)

    What is Bellman?

    Bellman is a package for model-based reinforcement learning (MBRL) in Python, using TensorFlow 2.4+ and building on top of the model-free reinforcement learning package TensorFlow Agents.

    Main features

    • A framework for flexible composition of model-based reinforcement learning algorithms.
    • It offers modular components for composing two major classes of algorithms:
      1. decision time planning
      2. background planning
    • Keras neural networks for modeling transition dynamics
    • Rewards, termination and initial state distributions are assumed to be known for now
    • Implementations of several state-of-the-art model-based algorithms (PETS, MBPO and METRPO) and one model-free algorithm (TRPO)