Overview

Trading Gym

Trading Gym is an open-source project for the development of reinforcement learning algorithms in the context of trading. It currently consists of a single environment and implements a generic way of feeding that trading environment different types of price data.

Installation

pip install tgym

We strongly recommend using virtual environments. A very good guide can be found at http://python-guide-pt-br.readthedocs.io/en/latest/dev/virtualenvs/.

The trading environment: SpreadTrading

SpreadTrading is a trading environment that allows trading a spread (see https://en.wikipedia.org/wiki/Spread_trade). We feed the environment a time series of bid and ask prices for n different products (via a DataGenerator), as well as a list of spread coefficients. The possible actions are buying, selling or holding the spread; actions cannot be taken on one or several legs in isolation. The state of the environment is defined by the prices, the entry price and the position (long, short or flat).
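
As an illustration, a minimal session with SpreadTrading could look like the sketch below. The WavySignal generator and the constructor arguments (spread_coefficients, data_generator, trading_fee, time_fee, history_length) mirror the examples shipped with the project, but they are assumptions about the current API rather than a guaranteed signature.

    from tgym.envs import SpreadTrading
    from tgym.gens.deterministic import WavySignal

    # A deterministic (bid, ask) price stream for a single product.
    generator = WavySignal(period_1=25, period_2=50, epsilon=-0.5)

    # With one product and coefficient 1, the spread is simply that product's price.
    environment = SpreadTrading(spread_coefficients=[1],
                                data_generator=generator,
                                trading_fee=0.2,
                                time_fee=0.0,
                                history_length=2)

    state = environment.reset()
    # step() expects one of the environment's buy/sell/hold actions; the exact
    # encoding depends on the tgym version, so it is not shown here.
    environment.render()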

Create your own DataGenerator

To create your own data generator, it must inherit from the DataGenerator base class, which can be found in tgym/core.py. The base class consists of four methods; only the private _generator method, which defines the time series, needs to be overridden. An example can be found in examples/generator_random.py. For a single product, the _generator method must yield a (bid, ask) tuple, one element at a time. For two or more products, it must yield a tuple containing the bid and ask prices of each product, concatenated; for instance, for two products the method should yield (bid_1, ask_1, bid_2, ask_2). The logic of the time series is encoded there.
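
For concreteness, a random-walk generator in the spirit of examples/generator_random.py could look like the sketch below. The class name, the spread and start parameters, and the assumption that keyword arguments passed to the constructor are forwarded to _generator (as in the built-in generators) are illustrative rather than guaranteed.

    import numpy as np

    from tgym.core import DataGenerator


    class RandomWalkGenerator(DataGenerator):
        """Yields (bid, ask) tuples following a simple random walk."""

        @staticmethod
        def _generator(spread=0.1, start=100.0):
            price = start
            while True:
                price += 0.1 * np.random.randn()
                # Bid sits half a spread below the mid price, ask half above.
                yield price - spread / 2.0, price + spread / 2.0

Assuming the base class forwards constructor keyword arguments to _generator, such a generator would then be plugged into SpreadTrading via data_generator=RandomWalkGenerator(spread=0.05).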

Compatibility with OpenAI gym

Our environment's API is strongly inspired by OpenAI Gym. We aim to base it entirely upon the OpenAI Gym architecture and to propose Trading Gym as an additional OpenAI Gym environment.

Examples

Some examples are available in tgym/examples/

To run the dqn_agent.py example, you will also need to install Keras with pip install keras. By default, the backend is set to Theano. You can also run the example with TensorFlow by installing it with pip install tensorflow; you then need to edit ~/.keras/keras.json and make sure "backend": "tensorflow" is specified.
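
To confirm which backend Keras actually picked up after editing the configuration, you can query it from Python (keras.backend.backend() is part of the public Keras API):

    from keras import backend as K

    # Prints 'theano' or 'tensorflow', depending on ~/.keras/keras.json
    print(K.backend())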

Comments
  • examples/trading_environment.py line 27: NameError: name 'raw_input' is not defined

    Running the example with Python 3 fails at line 27:

    python3 ./trading_environment.py
    Traceback (most recent call last):
      File "./trading_environment.py", line 27, in <module>
        action = raw_input("Action: Buy (b) / Sell (s) / Hold (enter): ")
    NameError: name 'raw_input' is not defined

    The raw_input() function is not present when I executed the script on Python 3.5.2 and OpenAI Gym 0.12.1. (A Python 3 fix is sketched after this comments list.)

    Thanks

    opened by dragon28 1
  • Inconsistent results

    Hi, multiple runs of keras_example.py are producing inconsistent results:

    Correct result: (screenshot)

    Incorrect results: (two screenshots)

    I'm running Python 3.6.4 with TensorFlow 1.4.1 and Keras 2.1.2.

    Ideas on why I'm seeing this? Is this because DQN is unstable, or is there some error?

    opened by sohels 0
  • Close handler

    The dqn example ends by looping over render() while playing. If I close the plot, I want the program to exit; this change provides the information needed to detect the close action. (A generic sketch of such a close handler appears after this comments list.)

    opened by pm-beba 0
  • ModuleNotFoundError: No module named 'csvstream'

    Good morning, I have the following issue: when I try to run the following code:

    import random
    import sys
    sys.path.append('.\dqMM')
    import numpy as np
    import matplotlib.pyplot as plt
    from tgym.gens.csvstream import CSVStreamer
    from tqdm import tqdm 
    

    I get the following error:

          4 import numpy as np
          5 import matplotlib.pyplot as plt
    ----> 6 from tgym.gens.csvstream import CSVStreamer
          7 from tqdm import tqdm
          8 # To ignore warnings that are annoying.

    /usr/local/lib/python3.6/dist-packages/tgym/gens/__init__.py in <module>
    ----> 1 from csvstream import *
          2 from deterministic import *
          3 from random import *

    ModuleNotFoundError: No module named 'csvstream'

    I have already tried modifying the mentioned file with from tgym.gens.csvstream import *, but without results. I use Python 3.6 on Ubuntu 18.04. If you need any further details I'll be happy to provide them. Thank you for your kind attention. (A possible fix is sketched after this comments list.)

    opened by G10VA 0
  • matplotlib dependency soft package version

    A hard-pinned matplotlib version is not required for installation. Removing this pinned dependency makes the package compatible with newer Python environments.

    opened by pranjaldhole 0
  • core.py

    core.py", line 71, in next return next(self.generator) StopIteration

    After some episodes I get this error. Sometimes it happens in later episodes, sometimes sooner. I can't figure out where it comes from.

    Is there any way to communicate with you directly, even by payment?

    Kind Regards.

    opened by Kalelv45 0
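
Regarding the raw_input comment above: raw_input() was renamed to input() in Python 3, which is why the NameError appears. A minimal, version-agnostic fix for line 27 of examples/trading_environment.py could look like this (only the prompt string is taken from the traceback; the shim itself is a suggestion):

    # raw_input() exists only in Python 2; fall back to input() on Python 3.
    try:
        read_action = raw_input
    except NameError:
        read_action = input

    action = read_action("Action: Buy (b) / Sell (s) / Hold (enter): ")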
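
Regarding the close-handler comment: one generic way to detect that the plot window has been closed is Matplotlib's close_event. The sketch below is not tied to tgym's render() implementation; the flag-based loop is only a suggestion.

    import matplotlib.pyplot as plt

    closed = {"flag": False}

    def on_close(event):
        # Called by Matplotlib when the figure window is closed.
        closed["flag"] = True

    fig = plt.gcf()
    fig.canvas.mpl_connect('close_event', on_close)

    # In the rendering loop, stop once the window has been closed:
    # while not closed["flag"]:
    #     environment.render()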
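
Regarding the csvstream comment: the traceback shows Python 2 style implicit relative imports in tgym/gens/__init__.py, which are invalid under Python 3. Assuming the package layout shown in the traceback, a possible patch is to make the imports explicitly relative:

    # tgym/gens/__init__.py
    # Explicit relative imports work on both Python 2 and Python 3, whereas
    # the bare "from csvstream import *" form only works on Python 2.
    from .csvstream import *
    from .deterministic import *
    from .random import *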

Owner: Dimitry Foures