A general-purpose, flexible, and easy-to-use simulator alongside an OpenAI Gym trading environment for the MetaTrader 5 trading platform (Approved by OpenAI Gym)

Overview

gym-mtsim: OpenAI Gym - MetaTrader 5 Simulator

MtSim is a simulator for the MetaTrader 5 trading platform alongside an OpenAI Gym environment for reinforcement learning-based trading algorithms. MetaTrader 5 is a multi-asset platform that allows trading Forex, Stocks, Crypto, and Futures. It is one of the most popular trading platforms and supports numerous useful features, such as opening demo accounts on various brokers.

The simulator is separated from the Gym environment and can work independently. Although the Gym environment is designed with RL frameworks in mind, it is also well suited for backtesting and classic analysis.

The goal of this project was to provide a general-purpose, flexible, and easy-to-use library, with a focus on code readability, that lets users carry out every part of the trading process through it, from start to finish. So, gym-mtsim is not just a testing tool or a Gym environment. It is a combination of a real-world simulator, a backtesting tool with detailed visualization, and a Gym environment suitable for RL and classic algorithms.

Note: For beginners, it is recommended to check out the gym-anytrading project.

Prerequisites

Install MetaTrader 5

Download and install MetaTrader 5 software from here.

Open a demo account on any broker. By default, the software opens a demo account automatically after installation.

Explore the software and try to get familiar with it by trading different symbols in both hedged and unhedged accounts.

Install gym-mtsim

Via PIP

pip install gym-mtsim

From Repository

git clone https://github.com/AminHP/gym-mtsim
cd gym-mtsim
pip install -e .

## or

pip install --upgrade --no-deps --force-reinstall https://github.com/AminHP/gym-mtsim/archive/main.zip

Install stable-baselines3

This package is required to run some examples. Install it from here.
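
For reference, it can be installed with pip:

pip install stable-baselines3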

Components

1. SymbolInfo

This is a data class that contains the essential properties of a symbol. Try to get fully acquainted with these properties in case they are unfamiliar. There are plenty of resources that provide good explanations.
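
For instance, once symbols are loaded into a simulator (see MtSimulator below), a symbol's properties can be inspected through the symbols_info dictionary. The attributes shown here are only the ones mentioned in this document; check the source code for the full list:

info = sim.symbols_info['EURUSD']
print(info.name)
print(info.currency_margin, info.currency_profit)
print(info.volume_min, info.volume_max, info.volume_step)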

2. Order

This is another data class that holds the information of an order. Each order has the following properties:

id: A unique number that helps with tracking orders.

type: An enum that specifies the type of the order. It can be either Buy or Sell.

symbol: The symbol selected for the order.

volume: The volume chosen for the order. It can be a multiple of volume_step between volume_min and volume_max.

fee: This is a tricky property. In MetaTrader, there is no explicit concept of a fee. Instead, each symbol has bid and ask prices, and the difference between them acts as the fee. Although the MetaTrader API provides these bid/ask prices for the recent past, it is not possible to access them for the distant past. The fee property is there to stand in for that bid/ask difference.

entry_time: The time when the order was placed.

entry_price: The close price when the order was placed.

exit_time: The time when the order was closed.

exit_price: The close price when the order was closed.

profit: The amount of profit earned by this order so far.

margin: The required amount of margin for this order.

closed: A boolean that specifies whether this order is closed or not.
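
For example, after the simulator has been set up and an order created (see the Simple Example below), these fields can be read directly from the returned Order object:

order = sim.create_order(order_type=OrderType.Buy, symbol='EURUSD', volume=0.1, fee=0.0002)
print(order.id, order.type, order.symbol, order.volume)
print(order.entry_time, order.entry_price, order.margin, order.closed)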

3. MtSimulator

This is the core class that simulates the main parts of MetaTrader. Most of its public properties and methods are explained here. But feel free to take a look at the complete source code.

  • Properties:

    unit: The unit currency. It is usually USD, but it can be anything the broker allows, such as EUR.

    balance: The amount of money before taking into account any open positions.

    equity: The amount of money, including the value of any open positions.

    margin: The amount of money required to keep the current positions open.

    leverage: The leverage ratio.

    free_margin: The amount of money that is available to open new positions.

    margin_level: The ratio between equity and margin.

    stop_out_level: If the margin_level drops below stop_out_level, the most unprofitable position will be closed automatically by the broker.

    hedge: A boolean that specifies whether hedging is enabled or not.

    symbols_info: A dictionary that contains symbols' information.

    symbols_data: A dictionary that contains symbols' OHLCV data.

    orders: The list of open orders.

    closed_orders: The list of closed orders.

    current_time: The current time of the system.

  • Methods:

    download_data: Downloads the required data from MetaTrader for a list of symbols in a time range. This method can be overridden in order to download data from sources other than MetaTrader (a minimal sketch of such an override follows this list).

    save_symbols: Saves the downloaded symbols' data to a file.

    load_symbols: Loads the symbols' data from a file.

    tick: Moves forward in time (by a delta time) and updates orders and other related properties.

    create_order: Creates a Buy or Sell order and updates related properties.

    close_order: Closes an order and updates related properties.

    get_state: Returns the state of the system. The result is similar to the Trading tab and History tab of the Toolbox window in MetaTrader software.
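
As noted for download_data, the simulator can be fed from sources other than MetaTrader. Below is a minimal sketch of such an override; CsvSimulator and the CSV layout are hypothetical, and symbols_info still has to be provided separately (for example via load_symbols, or by constructing SymbolInfo objects yourself), since it is normally built from MetaTrader's own metadata:

import pandas as pd
from gym_mtsim import MtSimulator


class CsvSimulator(MtSimulator):
    # Hypothetical subclass: loads candles from local CSV files instead of MetaTrader.
    def download_data(self, symbols, time_range, timeframe):
        from_dt, to_dt = time_range
        for symbol in symbols:
            # Assumes '<symbol>.csv' with a 'Time' column plus Open/High/Low/Close/Volume columns.
            df = pd.read_csv(f'{symbol}.csv', parse_dates=['Time'], index_col='Time')
            self.symbols_data[symbol] = df.loc[from_dt:to_dt]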

4. MtEnv

This is the Gym environment that works on top of the MtSim. Most of its public properties and methods are explained here. But feel free to take a look at the complete source code.

  • Properties:

    original_simulator: An instance of MtSim class as a baseline for simulating the system.

    simulator: The current simulator in use. It is a copy of the original_simulator.

    trading_symbols: The list of symbols to trade.

    time_points: A list of time points based on which the simulator moves through time. By default, it is taken from the DataFrame index of the first symbol in the trading_symbols list.

    hold_threshold: A probability threshold that controls holding or placing a new order.

    close_threshold: A probability threshold that controls closing an order.

    fee: A constant number or a callable that takes a symbol as input and returns the fee based on that.

    symbol_max_orders: Specifies the maximum number of open positions per symbol in hedge trading.

    multiprocessing_processes: Specifies the maximum number of processes used for parallel processing.

    prices: The symbol prices over time. It is used to calculate signal features and render the environment.

    signal_features: The extracted features over time. It is used to generate Gym observations.

    window_size: The number of time points (current and previous points) as the length of each observation's features.

    features_shape: The shape of a single observation's features.

    action_space: The Gym action_space property. It has a complex structure since stable-baselines does not support Dict or 2D Box action spaces. The action space is a 1D vector of size count(trading_symbols) * (symbol_max_orders + 2). For each symbol, two types of actions can be performed: closing previous orders and placing a new order. The former is controlled by the first symbol_max_orders elements and the latter is controlled by the last two elements. Therefore, the action for each symbol is [probability of closing order 1, probability of closing order 2, ..., probability of closing order symbol_max_orders, probability of holding, volume of new order]. The last two elements specify whether to hold or place a new order and the volume of the new order (a positive volume indicates a buy and a negative volume indicates a sell). All of these elements are real numbers in the range (-∞, ∞), but the probability values must lie in the range [0, 1]. This is a problem with stable-baselines, as mentioned earlier. To overcome it, the probability elements are treated as logits; applying the expit function to them yields the desired probability values in the range [0, 1]. This is done in the step method of the environment (a small decoding sketch is given at the end of this Components section).

    observation_space: The Gym observation_space property. Each observation contains information about balance, equity, margin, features, and orders. The features entry is a window over signal_features, from index current_tick - window_size + 1 to current_tick. The orders entry is a 3D array. Its first dimension specifies the symbol index in the trading_symbols list. The second dimension specifies the order number (each symbol can have more than one open order at the same time in hedge trading). The last dimension has three elements: the entry_price, volume, and profit of the corresponding order.

    history: Stores the information of all steps.

  • Methods:

    seed: The typical Gym seed method.

    reset: The typical Gym reset method.

    step: The typical Gym step method.

    render: The typical Gym render method. It can render in three modes, human, simple_figure, and advanced_figure.

    close: The typical Gym close method.

  • Virtual Methods:

    _get_prices: It is called in the constructor and calculates symbol prices.

    _process_data: It is called in the constructor and calculates signal_features.

    _calculate_reward: The reward function for the RL agent.
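
To make the action layout above concrete, here is a minimal sketch of how one symbol's slice of the action vector can be interpreted. The helper and the thresholding mirror the descriptions of hold_threshold and close_threshold above; they are illustrative, not the exact internals of the step method:

import numpy as np
from scipy.special import expit  # logistic sigmoid, the inverse of the logit function


def decode_symbol_action(symbol_action, symbol_max_orders=2, hold_threshold=0.5, close_threshold=0.5):
    # symbol_action has symbol_max_orders + 2 elements:
    # [close logit of order 1, ..., close logit of order symbol_max_orders, hold logit, new order volume]
    close_probs = expit(symbol_action[:symbol_max_orders])
    hold_prob = expit(symbol_action[-2])
    volume = symbol_action[-1]

    orders_to_close = close_probs > close_threshold  # which open orders to close
    hold = hold_prob > hold_threshold  # if True, no new order is placed
    order_type = 'Buy' if volume > 0 else 'Sell'  # the sign of the volume selects the side
    return orders_to_close, hold, order_type, abs(volume)


# example: symbol_max_orders=2 -> 4 elements per symbol
print(decode_symbol_action(np.array([2.0, -3.0, -1.0, 0.7])))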

A Simple Example

MtSim

Create a simulator with custom parameters

import pytz
from datetime import datetime, timedelta
from gym_mtsim import MtSimulator, OrderType, Timeframe, FOREX_DATA_PATH


sim = MtSimulator(
    unit='USD',
    balance=10000.,
    leverage=100.,
    stop_out_level=0.2,
    hedge=False,
)

if not sim.load_symbols(FOREX_DATA_PATH):
    sim.download_data(
        symbols=['EURUSD', 'GBPCAD', 'GBPUSD', 'USDCAD', 'USDCHF', 'GBPJPY', 'USDJPY'],
        time_range=(
            datetime(2021, 5, 5, tzinfo=pytz.UTC),
            datetime(2021, 9, 5, tzinfo=pytz.UTC)
        ),
        timeframe=Timeframe.D1
    )
    sim.save_symbols(FOREX_DATA_PATH)

Place some orders

sim.current_time = datetime(2021, 8, 30, 0, 17, 52, tzinfo=pytz.UTC)

order1 = sim.create_order(
    order_type=OrderType.Buy,
    symbol='GBPCAD',
    volume=1.,
    fee=0.0003,
)

sim.tick(timedelta(days=2))

order2 = sim.create_order(
    order_type=OrderType.Sell,
    symbol='USDJPY',
    volume=2.,
    fee=0.01,
)

sim.tick(timedelta(days=5))

state = sim.get_state()

print(
    f"balance: {state['balance']}, equity: {state['equity']}, margin: {state['margin']}\n"
    f"free_margin: {state['free_margin']}, margin_level: {state['margin_level']}\n"
)
state['orders']
balance: 10000.0, equity: 10717.58118589908, margin: 3375.480933228619
free_margin: 7342.1002526704615, margin_level: 3.1751271592500743
Id Symbol Type Volume Entry Time Entry Price Exit Time Exit Price Profit Margin Fee Closed
0 2 USDJPY Sell 2.0 2021-09-01 00:17:52+00:00 110.02500 2021-09-06 00:17:52+00:00 109.71200 552.355257 2000.000000 0.0100 False
1 1 GBPCAD Buy 1.0 2021-08-30 00:17:52+00:00 1.73389 2021-09-06 00:17:52+00:00 1.73626 165.225928 1375.480933 0.0003 False
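
As a quick sanity check on the numbers above, free_margin and margin_level match the property descriptions in the Components section: free margin is equity minus margin, and margin level is the ratio of equity to margin.

print(state['equity'] - state['margin'])  # ~7342.10 -> free_margin
print(state['equity'] / state['margin'])  # ~3.175 -> margin_level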

Close all orders

order1_profit = sim.close_order(order1)
order2_profit = sim.close_order(order2)

# alternatively:
# for order in sim.orders:
#     sim.close_order(order)

state = sim.get_state()

print(
    f"balance: {state['balance']}, equity: {state['equity']}, margin: {state['margin']}\n"
    f"free_margin: {state['free_margin']}, margin_level: {state['margin_level']}\n"
)
state['orders']
balance: 10717.58118589908, equity: 10717.58118589908, margin: 0.0
free_margin: 10717.58118589908, margin_level: inf
Id Symbol Type Volume Entry Time Entry Price Exit Time Exit Price Profit Margin Fee Closed
0 2 USDJPY Sell 2.0 2021-09-01 00:17:52+00:00 110.02500 2021-09-06 00:17:52+00:00 109.71200 552.355257 2000.000000 0.0100 True
1 1 GBPCAD Buy 1.0 2021-08-30 00:17:52+00:00 1.73389 2021-09-06 00:17:52+00:00 1.73626 165.225928 1375.480933 0.0003 True

MtEnv

Create an environment

import gym
import gym_mtsim

env = gym.make('forex-hedge-v0')
# env = gym.make('stocks-hedge-v0')
# env = gym.make('crypto-hedge-v0')
# env = gym.make('mixed-hedge-v0')

# env = gym.make('forex-unhedge-v0')
# env = gym.make('stocks-unhedge-v0')
# env = gym.make('crypto-unhedge-v0')
# env = gym.make('mixed-unhedge-v0')

  • This will create a default environment. There are eight default environments, but it is also possible to create environments with custom parameters.

Create an environment with custom parameters

import pytz
from datetime import datetime, timedelta
import numpy as np
from gym_mtsim import MtEnv, MtSimulator, FOREX_DATA_PATH


sim = MtSimulator(
    unit='USD',
    balance=10000.,
    leverage=100.,
    stop_out_level=0.2,
    hedge=True,
    symbols_filename=FOREX_DATA_PATH
)

env = MtEnv(
    original_simulator=sim,
    trading_symbols=['GBPCAD', 'EURUSD', 'USDJPY'],
    window_size=10,
    # time_points=[desired time points ...],
    hold_threshold=0.5,
    close_threshold=0.5,
    fee=lambda symbol: {
        'GBPCAD': max(0., np.random.normal(0.0007, 0.00005)),
        'EURUSD': max(0., np.random.normal(0.0002, 0.00003)),
        'USDJPY': max(0., np.random.normal(0.02, 0.003)),
    }[symbol],
    symbol_max_orders=2,
    multiprocessing_processes=2
)

Print some information

print("env information:")

for symbol in env.prices:
    print(f"> prices[{symbol}].shape:", env.prices[symbol].shape)

print("> signal_features.shape:", env.signal_features.shape)
print("> features_shape:", env.features_shape)
env information:
> prices[GBPCAD].shape: (88, 2)
> prices[EURUSD].shape: (88, 2)
> prices[USDJPY].shape: (88, 2)
> signal_features.shape: (88, 6)
> features_shape: (10, 6)
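
It can also help to look at the structure of a single observation, which follows the observation_space described in the Components section (balance, equity, margin, features, orders):

observation = env.reset()

print("> observation keys:", list(observation.keys()))
print("> features shape:", observation['features'].shape)  # equals env.features_shape
print("> orders shape:", observation['orders'].shape)  # (len(trading_symbols), symbol_max_orders, 3)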

Trade randomly

observation = env.reset()

while True:
    action = env.action_space.sample()
    observation, reward, done, info = env.step(action)

    if done:
        # print(info)
        print(
            f"balance: {info['balance']}, equity: {info['equity']}, margin: {info['margin']}\n"
            f"free_margin: {info['free_margin']}, margin_level: {info['margin_level']}\n"
            f"step_reward: {info['step_reward']}"
        )
        break
balance: 9376.891775198916, equity: 9641.936625205548, margin: 3634.1077619051393
free_margin: 6007.828863300409, margin_level: 2.6531785122131852
step_reward: 140.93306243685583

Render in human mode

state = env.render()

print(
    f"balance: {state['balance']}, equity: {state['equity']}, margin: {state['margin']}\n"
    f"free_margin: {state['free_margin']}, margin_level: {state['margin_level']}\n"
)
state['orders']
balance: 9376.891775198916, equity: 9641.936625205548, margin: 3634.1077619051393
free_margin: 6007.828863300409, margin_level: 2.6531785122131852
Id Symbol Type Volume Entry Time Entry Price Exit Time Exit Price Profit Margin Fee Closed
0 119 USDJPY Buy 1.12 2021-09-02 00:00:00+00:00 109.93700 2021-09-03 00:00:00+00:00 109.71200 -248.970123 1120.000000 0.018884 False
1 118 EURUSD Buy 0.24 2021-09-02 00:00:00+00:00 1.18744 2021-09-03 00:00:00+00:00 1.18772 -4.355531 284.985600 0.000461 False
2 117 USDJPY Sell 1.94 2021-09-01 00:00:00+00:00 110.02500 2021-09-03 00:00:00+00:00 109.71200 520.155098 1940.000000 0.018839 False
3 116 GBPCAD Sell 0.21 2021-09-01 00:00:00+00:00 1.73728 2021-09-03 00:00:00+00:00 1.73626 -1.784594 289.122162 0.001126 False
4 113 USDJPY Sell 2.24 2021-08-30 00:00:00+00:00 109.91300 2021-09-01 00:00:00+00:00 110.02500 -258.362674 2240.000000 0.014903 True
... ... ... ... ... ... ... ... ... ... ... ... ...
114 6 USDJPY Sell 1.03 2021-05-21 00:00:00+00:00 108.94500 2021-05-24 00:00:00+00:00 108.74000 173.893295 1030.000000 0.021416 True
115 3 EURUSD Buy 0.86 2021-05-19 00:00:00+00:00 1.21744 2021-05-24 00:00:00+00:00 1.22150 352.419311 1046.998400 -0.000038 True
116 5 GBPCAD Sell 0.94 2021-05-21 00:00:00+00:00 1.70726 2021-05-24 00:00:00+00:00 1.70440 174.119943 1330.148695 0.000629 True
117 1 GBPCAD Buy 1.45 2021-05-18 00:00:00+00:00 1.71128 2021-05-24 00:00:00+00:00 1.70440 -961.496723 2056.809874 0.001105 True
118 2 GBPCAD Sell 0.58 2021-05-19 00:00:00+00:00 1.71211 2021-05-21 00:00:00+00:00 1.70726 219.514676 818.590377 0.000284 True

119 rows × 12 columns

Render in simple_figure mode

  • Each symbol is illustrated with a separate color.
  • The green/red triangles show successful buy/sell actions.
  • The gray triangles indicate that the buy/sell action has encountered an error.
  • The black vertical bars specify close actions.
env.render('simple_figure')

[figure: simple_figure render output]

Render in advanced_figure mode

  • Clicking on a symbol name will hide/show its plot.
  • Hovering over points and markers will display their detail.
  • The size of triangles indicates their relative volume.
env.render('advanced_figure', time_format="%Y-%m-%d")

[figure: advanced_figure render output]

A Complete Example using stable-baselines

import gym
from gym_mtsim import (
    Timeframe, SymbolInfo,
    MtSimulator, OrderType, Order, SymbolNotFound, OrderNotFound,
    MtEnv,
    FOREX_DATA_PATH, STOCKS_DATA_PATH, CRYPTO_DATA_PATH, MIXED_DATA_PATH,
)
from stable_baselines3 import A2C


env = gym.make('forex-hedge-v0')

model = A2C('MultiInputPolicy', env, verbose=0)
model.learn(total_timesteps=1000)

observation = env.reset()
while True:
    action, _states = model.predict(observation)
    observation, reward, done, info = env.step(action)
    if done:
        break

env.render('advanced_figure', time_format="%Y-%m-%d")

[figure: advanced_figure render of the trained agent's trades]


Comments
  • Question: Is there an 'easy' way to set Order properties?

    Specifically, how do I set 'volume', 'volume_step', 'volume_min' and 'volume_max' without creating a child of the Order class? I need to be able to set these values potentially for each order.

    opened by snafu4 17
  • BUG: sim._check_volume

    Hello @AminHP, there is a bug in the check-volume function inside the simulator. You write:

     def _check_volume(self, symbol: str, volume: float) -> None:
            symbol_info = self.symbols_info[symbol]
            if not (symbol_info.volume_min <= volume <= symbol_info.volume_max):
                raise ValueError(
                    f"'volume' must be in range [{symbol_info.volume_min}, {symbol_info.volume_max}]"
                )
            if not round(volume / symbol_info.volume_step, 6).is_integer():
                raise ValueError(f"'volume' must be a multiple of {symbol_info.volume_step}")
    

    You are rounding the volume to 6 decimals and expecting an integer as a result; this can only be true if the passed volume is an integer. Your error message says the volume must be a multiple of 0.01 (the volume step), and this also contradicts your volume check inside env._get_modified_volume, where you expect the volume to be a multiple of 0.01 as well.

    opened by sadimoodi 5
  • issue with multi processing

    Hello @AminHP, when running one of your examples (Create an environment with custom parameters), I get the following error:

        File "C:\Users\Ali.Khankan\AppData\Roaming\Python\Python39\site-packages\multiprocess\spawn.py", line 116, in spawn_main
          exitcode = _main(fd, parent_sentinel)
        ...
        File "c:\Users\Ali.Khankan\Desktop\gym-mtsim\ali_test.py", line 16, in <module>
          env = MtEnv(
        File "c:\Users\Ali.Khankan\Desktop\gym-mtsim\gym_mtsim\envs\mt_env.py", line 63, in __init__
          self.multiprocessing_pool = Pool(multiprocessing_processes) if multiprocessing_processes else None
        ...
        File "C:\Users\Ali.Khankan\AppData\Roaming\Python\Python39\site-packages\multiprocess\spawn.py", line 134, in _check_not_importing_main
          raise RuntimeError('''
        RuntimeError: An attempt has been made to start a new process before the current process has finished its bootstrapping phase.

        This probably means that you are not using fork to start your
        child processes and you have forgotten to use the proper idiom
        in the main module:
    
            if __name__ == '__main__':
                freeze_support()
                ...
    
        The "freeze_support()" line can be omitted if the program
        is not going to be frozen to produce an executable.
    

    setting multiprocessing_processes = 0 solved the problem

    opened by sadimoodi 4
  • understanding action_space

    Hello @AminHP, I am struggling to understand your action space. You say, quote: "The action space is a 1D vector of size count(trading_symbols) * (symbol_max_orders + 2). For each symbol, two types of actions can be performed, closing previous orders and placing a new order. The former is controlled by the first symbol_max_orders elements and the latter is controlled by the last two elements."

    you also write:

    self.action_space = spaces.Box(
                low=-np.inf, high=np.inf,
                shape=(len(self.trading_symbols) * (self.symbol_max_orders + 2),)
            )  #
    

    Why do you add 2 to symbol_max_orders? What are the last 2 elements you are referring to?

    opened by sadimoodi 4
  • how can I pass through the observation_space as Dict into Keras DQN model

    I have tried many times and methods for this. I have written the code below; could you please check whether I have done something wrong? In this case, I left only these 2 pieces of information for the experiment.

    model_balance = Sequential()
    model_balance.add(Flatten(input_shape=(1,1,1), name='balance'))
    model_balance_input = Input(shape=(1,1), name='balance')
    model_balance_encoded = model_balance(model_balance_input)

    model_equity = Sequential()
    model_equity.add(Flatten(input_shape=(1,1,1), name='equity'))
    model_equity_input = Input(shape=(1,1), name='equity')
    model_equity_encoded = model_Equity(model_equity_input)

    con = concatenate([model_Balance_encoded, model_Equity_encoded])

    dense = Dense(1024, activation='relu')(con)
    dense = Dense(1024, activation='relu')(dense)
    output = Dense(actions, activation='softmax')(dense)

    model = Model(inputs=[model_Balance_input, model_Equity_input], outputs=output)

    memory = SequentialMemory(limit=2000000, window_length=1)
    policy = LinearAnnealedPolicy(EpsGreedyQPolicy())
    dqn = DQNAgent(model=model, policy=policy, nb_actions=actions, memory=memory)

    dqn.processor = MultiInputProcessor(2)

    dqn.compile(optimizer=Adam(learning_rate=1e-4), metrics=['mse'])  # 'accuracy', 'mae'

    dqn.fit(env, nb_steps=1000, verbose=1, visualize=False)

    Then, this problem occurred, and I could not find the error.

    Please tell me the solution. Thanks in advance.

    opened by TanapongAUS 3
  • Hi, can I define my own action space?

    For example, my action space has 7 discrete actions, corresponding to 7 different positions:
    heavy long position (80% long position size),
    mid long position (50% long position size),
    light long position (30% long position size),
    not holding any symbols (position size 0),
    light short position (30% short position size),
    mid short position (50% short position size),
    heavy short position (80% short position size).

    In order to achieve this, what should I do?

    opened by yglpyn8888 3
  • Question: about _get_unit_ratio

    Hello, thank you so much for the simple and wonderful library. Can you please explain what this function is doing? Why do we need to get the unit ratio to calculate the order profit?

        def _get_unit_ratio(self, symbol: str, time: datetime) -> float:
            symbol_info = self.symbols_info[symbol]
            if self.unit == symbol_info.currency_profit:
                return 1.
    
            if self.unit == symbol_info.currency_margin:
                return 1 / self.price_at(symbol, time)['Close']
    
            currency = symbol_info.currency_profit
            unit_symbol_info = self._get_unit_symbol_info(currency)
            if unit_symbol_info is None:
                raise SymbolNotFound(f"unit symbol for '{currency}' not found")
    
            unit_price = self.price_at(unit_symbol_info.name, time)['Close']
            if unit_symbol_info.currency_margin == self.unit:
                unit_price = 1. / unit_price
    
            return unit_price
    
    opened by sadimoodi 3
  • sim.download_data report error

    Code:

    sim = MtSimulator(
        unit='USD',
        balance=10000.,
        leverage=100.,
        stop_out_level=0.2,
        hedge=False,
    )
    
    
    sim.download_data(
        symbols=['EURUSD', 'GBPCAD', 'GBPUSD', 'USDCAD', 'USDCHF', 'GBPJPY', 'XAUUSD'],
        time_range=(
            datetime(2021, 5, 5, tzinfo=pytz.UTC),
            datetime(2021, 12, 5, tzinfo=pytz.UTC)
        ),
        timeframe=Timeframe.M5
    )
    sim.save_symbols("C:\\symbol.pkl")
    

    Error:

      File "test.py", line 15, in <module>
        sim.download_data(
      File "D:\Porjects\rlmt\gym-mtsim-main\gym_mtsim\simulator\mt_simulator.py", line 59, in download_data
        si, df = retrieve_data(symbol, from_dt, to_dt, timeframe)
      File "D:\Porjects\rlmt\gym-mtsim-main\gym_mtsim\metatrader\api.py", line 20, in retrieve_data
        symbol_info = _get_symbol_info(symbol)
      File "D:\Porjects\rlmt\gym-mtsim-main\gym_mtsim\metatrader\api.py", line 53, in _get_symbol_info
        symbol_info = SymbolInfo(info)
      File "D:\Porjects\rlmt\gym-mtsim-main\gym_mtsim\metatrader\symbol.py", line 9, in __init__
        self.name: str = info.name
    AttributeError: 'NoneType' object has no attribute 'name'
    
    opened by 0trade 2
  • questions on _process_data(), _get_observation() and multi symbols

    1. The _process_data() method takes the return value of _get_prices() as a feature, but the return value of _get_prices() is two columns of bar data, close and open, and these are not features. Why?

    2. _get_observation() returns a dict, not an np.ndarray. For a typical gym env, shouldn't the observation returned by the step method be an np.ndarray?

    3. I see many data structures in this framework use Dict, which should be to support multiple symbols. I only load data for a single symbol, so should I change the underlying data structure, or just put a single key (the single symbol I focus on) in the dict?

    opened by yglpyn8888 2
  • Overriding signal_features

    Hi Amin,

    On gym-anytrading it's possible to override _process_data and add your own custom indicators (from Finta etc).

    Is it possible to do the same with mtsim - or how should I be thinking about this? Currently I can't even see how to add and format my own training data.

    opened by dancydancy 2
  • [Question] - Why different results each time loaded model is run?

    Any ideas why the code below (it is basically the same as the code in the README.md file) would return different results each time it is run after the model has been trained? The trained model is no longer being updated and the input (trades) do not change between runs.

    model = A2C.load(modelName, env)

    observation = env.reset()
    while True:
        action, _states = model.predict(observation)
        observation, reward, done, info = env.step(action)

        state = env.render()
        z = calcsomevalue(state)
        if done:
            x = getsomevalue(z)
            y = calsomevalue(z)
            print(<results>)
            break

    opened by snafu4 2
  • example: environment with custom parameters

    This example does not open any trades:

    balance: 10000.0, equity: 10000.0, margin: 0.0
    free_margin: 10000.0, margin_level: inf
    

    This is just different from the expected results you wrote.

    opened by kruzel 1
  • A Complete Example using stable-baselines

    ---------------------------------------------------------------------------
    AssertionError                            Traceback (most recent call last)
    <ipython-input-6-4ca9ec63cf36> in <module>()
         11 # env = gym.make('crypto-hedge-v0')
         12 
    ---> 13 model = A2C('MultiInputPolicy', env, verbose=0)
         14 model.learn(total_timesteps=1000)
         15 
    
    2 frames
    /usr/local/lib/python3.7/dist-packages/stable_baselines3/common/base_class.py in __init__(self, policy, env, learning_rate, policy_kwargs, tensorboard_log, verbose, device, support_multi_env, create_eval_env, monitor_wrapper, seed, use_sde, sde_sample_freq, supported_action_spaces)
        189                 assert np.all(
        190                     np.isfinite(np.array([self.action_space.low, self.action_space.high]))
    --> 191                 ), "Continuous action space must have a finite lower and upper bound"
        192 
        193     @staticmethod
    
    AssertionError: Continuous action space must have a finite lower and upper bound
    

    Where can I set the lower and upper bound? Thank you.

    opened by Error4046716 4
  • Low bound of spaces.Box

    Hello @AminHP, why is the low bound of balance, equity and features -np.inf? See the code:

            self.observation_space = spaces.Dict({
                'balance': spaces.Box(low=-np.inf, high=np.inf, shape=(1,)),
                'equity': spaces.Box(low=-np.inf, high=np.inf, shape=(1,)),
                'margin': spaces.Box(low=-np.inf, high=np.inf, shape=(1,)),
                'features': spaces.Box(low=-np.inf, high=np.inf, shape=self.features_shape),
                'orders': spaces.Box(
                    low=-np.inf, high=np.inf,
                    shape=(len(self.trading_symbols), self.symbol_max_orders, 3)
                )  # symbol, order_i -> [entry_price, volume, profit]
            })
    

    Shouldn't the low bound start from zero? Meaning it's impossible for the balance to be below 0.

    opened by sadimoodi 1
  • Leverage as part of action space

    Hello @AminHP, I noticed that you are using a constant leverage of 100 during the construction of the MtSimulator object. As you know, leverage can vary at the order level, meaning every order could have a different leverage. Did you think of making leverage part of the action space? I was thinking that allowing the agent to decide on the leverage (1x, 5x, 10x, etc.) could let it make better decisions.

    opened by sadimoodi 4