Scalable, event-driven, deep-learning-friendly backtesting library

Overview
...Minimizing the mean square error on future experience.  - Richard S. Sutton

BTGym

Scalable, event-driven, RL-friendly backtesting library. Built on top of Backtrader, with OpenAI Gym environment API.

Backtrader is an open-source algorithmic trading library:
GitHub: http://github.com/mementum/backtrader
Documentation and community:
http://www.backtrader.com/

OpenAI Gym is..., well, everyone knows Gym:
GitHub: http://github.com/openai/gym
Documentation and community:
https://gym.openai.com/


Outline

The general purpose of this project is to provide a Gym-integrated framework for running reinforcement learning experiments in [close to] real-world algorithmic trading environments.

DISCLAIMER:
Code presented here is research/development grade.
It can be unstable, buggy, poorly performing and is subject to change.

Note that this package is neither an out-of-the-box moneymaker nor does it provide ready-to-converge RL solutions.
Think of it as a framework for setting up experiments with complex, non-stationary, stochastic environments.

As a research project, BTGym in its current stage can hardly deliver an easy end-user experience, in the sense that
setting up meaningful experiments will require some practical programming experience as well as general knowledge
of reinforcement learning theory.

News and update notes


Contents


Installation

It is highly recommended to run BTGym in a dedicated virtual environment.

Clone or copy the btgym repository to local disk, cd into it and run pip install -e . to install the package and all dependencies:

git clone https://github.com/Kismuz/btgym.git

cd btgym

pip install -e .

To update to the latest version:

cd btgym

git pull

pip install --upgrade -e .
Notes:
  1. BTGym requires Matplotlib version 2.0.2; downgrade your installation if you have version 2.1:

    pip install matplotlib==2.0.2

  2. The lsof utility should be installed on your OS; it may not be present by default on some Linux distributions, see: https://en.wikipedia.org/wiki/Lsof
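
To quickly check that the installation succeeded, the following minimal sanity check should run without errors (a hypothetical snippet, not part of the repository):

# Hypothetical post-install sanity check: confirms the package and its
# Gym-style environment entry point import cleanly.
import btgym
from btgym import BTgymEnv

print('BTgym imported OK')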


Quickstart

Making a gym environment with all parameters set to defaults is as simple as:

from btgym import BTgymEnv

MyEnvironment = BTgymEnv(filename='../examples/data/DAT_ASCII_EURUSD_M1_2016.csv',)

Adding more controls may look like:

from gym import spaces
from btgym import BTgymEnv

MyEnvironment = BTgymEnv(filename='../examples/data/DAT_ASCII_EURUSD_M1_2016.csv',
                         episode_duration={'days': 2, 'hours': 23, 'minutes': 55},
                         drawdown_call=50,
                         state_shape=dict(raw=spaces.Box(low=0,high=1,shape=(30,4))),
                         port=5555,
                         verbose=1,
                         )
See more options at Documentation: Quickstart >>
and how-to's in Examples directory >>.
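
As with any Gym environment, interaction follows the usual reset/step loop. A minimal sketch, assuming the same data file as above and taking random actions purely for illustration:

from btgym import BTgymEnv

# Hypothetical minimal interaction loop (not an official example).
env = BTgymEnv(filename='../examples/data/DAT_ASCII_EURUSD_M1_2016.csv')

obs = env.reset()                        # start a new episode
done = False
while not done:
    action = env.action_space.sample()   # random action: hold / buy / sell / close
    obs, reward, done, info = env.step(action)

env.close()                              # shut down the environment and its server processes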

General description

Problem setting

  • Discrete actions setup: consider a setup with one riskless asset acting as broker account cash and K (by default, one) risky assets. For every risky asset there exists a track of historic price records referred to as a data line. Apart from the asset data lines there [optionally] exists a number of exogenous data lines holding some information and statistics, e.g. economic indexes, encoded news, macroeconomic indicators, weather forecasts etc., which are considered relevant to decision-making. For this setup it is assumed that:

    1. there are no interest rates for any asset;
    2. broker actions are fixed-size market orders (buy, sell, close); short selling is permitted;
    3. transaction costs are modelled via broker commission;
    4. 'market liquidity' and 'capital impact' assumptions are met;
    5. time indexes match for all data lines provided;
  • The problem is modelled as a discrete-time, finite-horizon, partially observable Markov decision process for equity/currency trading:

    • for every asset traded, the agent action space is discrete (0: hold [do nothing], 1: buy, 2: sell, 3: close [position]);
    • the environment is episodic: maximum episode duration and episode termination conditions are set;
    • for every timestep of the episode, the agent is given an environment state observation as a tensor of the last m time-embedded, preprocessed values for every data line included, and emits actions according to some stochastic policy;
    • the agent's goal is to maximize expected cumulative capital by learning an optimal policy;
  • Continuous actions setup [BETA]: this setup closely relates to the continuous portfolio optimisation problem; it differs from the setup above in that:

    1. base broker actions are real numbers: a[i] in [0,1], 0 <= i <= K, SUM{a[i]} = 1 for K risky assets added; each action is a market target order to adjust the portfolio so that the i-th asset gets a share of a[i]*100%;
    2. the entire single-step broker action is a dictionary of the form: {cash_name: a[0], asset_name_1: a[1], ..., asset_name_K: a[K]};
    3. short selling is not permitted;
  • For RL this implies a continuous action space in the form of a K+1-dimensional vector; a sketch of forming such an action follows below.
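
To make the continuous setup concrete, the sketch below builds a single-step portfolio action of the form described above: non-negative weights summing to one, keyed by asset name. The asset names are purely hypothetical and the softmax is just one convenient way to produce valid weights:

import numpy as np

# Hypothetical universe: one cash asset plus K = 3 risky assets.
names = ['cash', 'EURUSD', 'GBPUSD', 'USDJPY']

# Map arbitrary real-valued scores to weights a[i] in [0, 1] with SUM{a[i]} = 1.
scores = np.random.randn(len(names))
weights = np.exp(scores) / np.exp(scores).sum()

# Single-step broker action: {cash_name: a[0], asset_name_1: a[1], ..., asset_name_K: a[K]}
action = dict(zip(names, weights))
print(action)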

Data selection options for backtest agent training:

Notice: the data shaping approach is under development, expect some changes. [7.01.18]

  • random sampling: the historic price change dataset is divided into training, cross-validation and testing subsets. Since agent actions do not influence the market, it is possible to randomly sample a continuous subset of training data for every episode. [Seems to be] the most data-efficient method. Cross-validation and testing are performed later, as usual, on the most "recent" data;
  • sequential sampling: the full dataset is fed sequentially, as if the agent were performing real-time trading, episode by episode. Most realistic, least data-efficient; a natural remedy for non-stationarity.
  • sliding time-window sampling: a mixture of the above; an episode is sampled randomly from a comparatively short time period that slides from the furthest to the most recent training data. Should be less prone to overfitting than random sampling. See the sketch below.
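
The toy sketch below illustrates how episode start points could be drawn under each of the three schemes, using plain index arithmetic over a generic price history (this is not the BTGym sampling API, just an illustration):

import numpy as np

n_records = 500000                  # length of a hypothetical 1-min price history
episode_len = 4000                  # episode length, in records
train_end = int(n_records * 0.8)    # boundary between training and cv/test subsets

rng = np.random.default_rng(0)

# 1. Random sampling: episode start drawn uniformly from the training subset.
random_start = rng.integers(0, train_end - episode_len)

# 2. Sequential sampling: episodes follow each other in chronological order.
sequential_starts = range(0, train_end - episode_len, episode_len)

# 3. Sliding time-window sampling: start drawn from a short window that
#    slides from the oldest towards the most recent training data.
window_len = 50000
window_pos = 100000                 # current left edge of the sliding window
sliding_start = rng.integers(window_pos, window_pos + window_len - episode_len)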

Documentation and Community


Known bugs and limitations:

  • requires Matplotlib version 2.0.2;
  • matplotlib backend warning: appears when importing pyplot and using %matplotlib inline magic before btgym import. It's recommended to import backtrader and btgym first to ensure proper backend choice;
  • not tested with Python < 3.5;
  • doesn't seem to work correctly under Windows; partially done
  • by default, is configured to accept Forex 1 min. data from www.HistData.com;
  • only random data sampling is implemented;
  • no built-in dataset splitting to training/cv/testing subsets; done
  • only one equity/currency pair can be traded; done
  • no 'skip-frames' implementation within environment; done
  • no plotting features, except when using the pycharm integration observer. Not sure if it is suited for intraday strategies. [partially] done
  • making a new environment kills all processes using the specified network port. Watch out for your jupyter kernels. fixed

TODO's and Road Map:

  • refine logic for parameters applying priority (engine vs strategy vs kwargs vs defaults);
  • API reference;
  • examples;
  • frame-skipping feature;
  • dataset tr/cv/t approach;
  • state rendering;
  • proper rendering for entire episode;
  • tensorboard integration;
  • multiple agents asynchronous operation feature (e.g. for A3C);
  • dedicated data server;
  • multi-modal observation space shape;
  • A3C implementation for BTgym;
  • UNREAL implementation for BTgym;
  • PPO implementation for BTgym;
  • RL^2 / MAML / DARLA adaptations - IN PROGRESS;
  • learning from demonstrations; - partially done
  • risk-sensitive agents implementation;
  • sequential and sliding time-window sampling;
  • multiple instruments trading;
  • docker image; - CPU version, Signalprime contribution;
  • TF serving model serialisation functionality;

News and updates:

  • 10.01.2019:

  • 9.02.2019:

  • 25.01.2019: updates:

    • lstm_policy class now requires both internal and external observation sub-spaces to be present and allows both to be one-level nested sub-spaces themselves (previously this was only true for the external one); all declared sub-spaces are encoded by separate convolution encoders;
    • a deterministic policy action option is implemented for discrete action spaces and can be utilised by syncro_runner; by default it is enabled for test episodes;
    • data_feed classes now accept pd.DataFrames as historic data source via the dataframe kwarg (was: .csv files only);
  • 18.01.2019: updates:

    • data model classes are under active development to power the model-based framework:
      • common statistics incremental estimator classes have been added (mean, variance, covariance, linear regression etc.);
      • an incremental Singular Spectrum Analysis class has been implemented;
      • for a pair of asset prices, a two-factor state-space model is proposed;
    • new data_feed iterator classes have been added to provide the training framework with synthetic data generated by the model mentioned above;
    • strategy_gen_6 data handling and pre-processing has been redesigned:
      • market data SSA decomposition;
      • data model state as an additional input to the policy;
      • variance-based normalisation for broker statistics.
  • 11.12.2018: updates and fixes:

  • 17.11.2018: updates and fixes:

    • minor fixes to base data provider class episode sampling;
    • update to btgym.datafeed.synthetic subpackage: new stochastic process generators added, etc.;
    • new btgym.research.strategy_gen_5 subpackage: efficient parameter-free signal preprocessing implemented, other minor improvements.
  • 30.10.2018: updates and fixes:

    • fixed a numpy random state issue causing replication of seeds among workers on POSIX OS;
    • new synthetic datafeed generators - added simple Ornstein-Uhlenbeck process data generating classes; see btgym/datafeed/synthetic/ou.py and btgym/research/ou_params_space_eval for details;
  • 14.10.2018: update:

    • base reward function redesign -> noticeable algorithm performance gain;
  • 20.07.2018: major update to package:

  • 17.02.18: First results on applying guided policy search ideas (GPS) to btgym setup can be seen here.

    • tensorboard summaries are updated with additional renderings: actions distribution, value function and LSTM_state; presented in the same notebook.
  • 6.02.18: Common update to all a3c agents architectures:

    • all dense layers are now Noisy-Net ones, see: Noisy Networks for Exploration paper by Fortunato et al.;

    • note that entropy regularization is still here, kept at ~0.01 to ensure proper exploration;

    • policy output distribution is 'centered' using layer normalisation technique;

      • all of the above results in about 2x training speedup in terms of train iterations;
  • 20.01.18: Project Wiki pages added;

  • 12.01.18: Minor fixes to logging, enabled BTgymDataset train/test data split. AAC framework train/test cycle enabled via episode_train_test_cycle kwarg.

  • 7.01.18: Update:

    • Major data pipe redesign. Domain -> Trial -> Episode sampling routine implemented. For motivation and formal definitions refer to Section 1. Data of this DRAFT, API Documentation and Intro example. Changes should be backward compatible. In brief, it is the necessary framework for upcoming meta-learning algorithms.
    • logging changes: now relying on the python logbook module. Should eliminate errors under Windows.
    • Stacked_LSTM_Policy agent implemented. Based on NAV_A3C from the DeepMind paper with some minor mods. A basic usage Example is here. Still in the research code area and needs further tuning; yet it is faster than the simple LSTM agent and able to converge on a 6-month 1m dataset.
  • 5.12.17: Inner btgym comm. fixes >> speedup ~5%.

  • 02.12.17: Basic sliding time-window train/test framework implemented via BTgymSequentialTrial() class. UPD: replaced by BTgymSequentialDataDomain class.

  • 29.11.17: Basic meta-learning RL^2 functionality implemented.

  • 24.11.17: A3C/UNREAL finally adapted to work with BTGym environments.

    • Examples with simple synthetic data (sine wave) and historic financial data added, see the examples directory;
    • Results on potential-based reward shaping functions in /research/DevStartegy_4_6;
    • Work on Sequential/random Trials Data iterators (kind of sliding time-window) is in progress; starting to approach the toughest part: the non-stationarity battle is ahead.
  • 14.11.17: BaseAAC framework refactoring; added per-worker batch-training option and LSTM time_flatten option; Atari examples updated; see Documentation for details.

  • 30.10.17: Major update, some backward incompatibility:

    • BTGym can now be thought of as a two-part package: one part is the environment itself and the other is RL algorithms tuned for solving algo-trading tasks. Some basic work on shaping the latter is done. Three advantage actor-critic style algorithms are implemented: A3C itself, its UNREAL extension and PPO. The core logic of these seems to be implemented correctly, but further extensive BTGym tuning is ahead. For now one can check the atari tests.
    • Finally, basic documentation and an API reference are now available.
  • 27.09.17: A3C test_4.2 added:

    • some progress on estimator architecture search, state and reward shaping;
  • 22.09.17: A3C test_4 added:

    • passing train convergence test on small (1 month) dataset of EURUSD 1-minute bar data;
  • 20.09.17: A3C optimised sine-wave test added here.

    • This notebook presents some basic ideas on state presentation, reward shaping, model architecture and hyperparameter choice. With those tweaks the sine-wave sanity test converges faster and with greater stability.
  • 31.08.17: Basic implementation of A3C algorithm is done and moved inside BTgym package.

    • algorithm logic consistency tests are passed;
    • still work in early stage, experiments with obs. state features and policy estimator architecture ahead;
    • check out examples/a3c directory.
  • 23.08.17: filename arg in environment/dataset specification now can be list of csv files.

    • handy for bigger dataset creation;
    • data from all files is concatenated and sampled uniformly;
    • no record duplication or format consistency checks are performed.
  • 21.08.17: UPDATE: BTgym is now using multi-modal observation space.

    • the space used is a simple extension of gym: DictSpace(gym.Space) - a dictionary (not nested yet) of core gym spaces.
    • defined in btgym/spaces.py.
    • raw_state is the default Box space of OHLC prices. Subclass BTgymStrategy and override the get_state() method to compute all parts of the env. observation.
    • rendering can now be performed for every entry in the observation dictionary, as long as it is a Box ranked <=3 and the same key is passed in the render_modes kwarg of the environment. 'Agent' mode renamed to 'state'. See updated examples.
  • 07.08.17: BTgym is now optimized for asynchronous operation with multiple environment instances.

    • dedicated data_server is used for dataset management;
    • improved overall internal network connection stability and error handling;
    • see example async_btgym_workers.ipynb in examples directory.
  • 15.07.17: UPDATE, BACKWARD INCOMPATIBILITY: now state observation can be tensor of any rank.

    • Consequently, the dim. ordering convention has changed to ensure compatibility with existing tf models: time embedding is the first dimension from now on, e.g. a state with shape (30, 20, 4) is 30 time-embedded steps with 20 features and 4 'channels'. For the sake of 2d visualisation only one 'channel' can be rendered; it can be chosen by setting the env. kwarg render_agent_channel=0;
    • examples are updated;
    • better now than later.
  • 11.07.17: Rendering battle continues: improved stability when low on memory; added environment kwarg render_enabled=True; when set to False, all renderings are disabled. Can help with performance.

  • 5.07.17: Tensorboard monitoring wrapper added; pyplot memory leak fixed.

  • 30.06.17: EXAMPLES updated with 'Setting up: full throttle' how-to.

  • 29.06.17: UPGRADE: be sure to run pip install --upgrade -e .

    • major rendering rebuild: updated with modes: human, agent, episode; the render process is now performed by the server and returned to the environment as an rgb numpy array. Pictures can be shown either via matplotlib or as pillow.Image (preferred).
    • 'Rendering HowTo' added, 'Basic Settings' example updated.
    • internal changes: env. state divided into raw_state - price data, and state - featurized representation. get_raw_state() method added to strategy.
    • new packages requirements: matplotlib and pillow.
  • 25.06.17: Basic rendering implemented.

  • 23.06.17: alpha 0.0.4: added skip-frame feature, redefined parameters inheritance logic, refined overall stability;

  • 17.06.17: first working alpha v0.0.2.


Comments
  • Using 5-,10-,30-minute data feed rather than 1-minute and the timeframe argument in BTgymRandomDataDomain

    Hi, I spent some time looking over the guided_a3c example, the documentation and the code for the BTgymRandomDataDomain class, and my question is: if I want to use the pandas.resample() function to downsample the histdata.com data to a lower frequency (for example 5M) and feed it to btgym, is the timeframe argument the only thing I need to change when initiating BTgymRandomDataDomain? My thought was that 1M data is very noisy and does not add much information relative to 5M data, so training on 5M (or even 30M, as to me everything with higher frequency does not add much) data should be more fruitful. To compensate, one can use more than one year's worth of data. Am I wrong?

    I am sorry in advance if the question appears to be stupid (I am quite a newbie) and thank you very much for the wonderful gym environment and showing examples of algorithms which I thought was only for the gods in Google Deepmind and which I would never get my hands on.

    question algorithm information discussion 
    opened by ALevitskyy 43
  • Get better signal strengths by tuning scaling hyperparameters

    Hi @Kismuz, I need your help to clarify a few questions:

    1. Currently, Unreal Stacked LSTM Strat 4_11 has better performance on actual FOREX prices than A3C Random, so just out of curiosity I tried to run both of them with the sine wave data (test_sine_1min_period256_delta0002.csv). It turns out that A3C eventually converges while Unreal Stacked LSTM is nowhere near convergence. Could you provide me some insights on that?

    LSTM results: [images attached]

    A3C results: [images attached]

    1. I had an overflow issue with the tanh(x_sma) while feeding my own data into the gym (SPY, Dow Jones, Russell 2000, and so on). After changing the T2 value from 2000 to 1, the issue went away, but I am not sure if that was a proper fix. Could you help me shed some light on that?

    2. I also got a data consistency issue when passing equity data traded from 8:30am to 15:15pm instead of 24/7 like the FOREX data that you used in the examples. What should I do to fix this?

    3. I kept getting the "[WinError 2] The system cannot find the file specified" error when trying to run the A3C example in Windows. [screenshot attached]

    4. When I tried to run the A3C example with the new version of tensorflow supporting GPU in Windows, I also got the error below: [screenshots attached]

    Really appreciate your effort to build this awesome gym and great documentation.

    Thank you in advance.

    framework algorithm compatibility information discussion 
    opened by ryanle88 37
  • Bug: Environment errors related to datas

    UPDATE: the bug was found to be related to some problem with self.datas in BTgymMultiData. The reason is still unclear, but the issue is unrelated to filters as originally proposed.

    I'm currently experimenting with data preprocessing on a custom strategy, mainly overriding set_datalines() and get_external_state(). My setup uses multiple data sources (BTgymMultiData) and backtrader filters.

    Filters are applied to a datafeed and dynamically change it. I'm adding those filters in set_datalines() so all of the data manipulation happens once, at the beginning of the episode.

    The concept seems to work fine but sometimes results in one of these two errors:

    Exception in thread Thread-1:
    [2019-01-19 10:43:05.892376] ERROR: BTgymMultiDataShell_0: Unexpected environment response: No <ctrl> key received:Control mode: received <{'action': {'base': 'hold'}}>
    Traceback (most recent call last):
    Hint: forgot to call reset()?
      File "/home/jack/btgym/btgym/envs/base.py", line 612, in _assert_response
    Hint: Forgot to call reset() or reset_data()?
        assert type(response) == tuple and len(response) == 4
    
    or
    
    [2019-01-21 16:14:44.266259] NOTICE: Worker_0: started training at step: 0
    [2019-01-21 16:15:51.167404] ERROR: ThreadRunner_0: too many values to unpack (expected 4)
    Traceback (most recent call last):
      File "/home/jack/btgym/btgym/algorithms/runner/base.py", line 121, in BaseEnvRunnerFn
        state, reward, terminal, info = env.step(action['environment'])
    ValueError: too many values to unpack (expected 4)
    

    If the error occurs, it only happens at the beginning of each run, so if the run reaches a save checkpoint it won't hit the error (for that run). The errors don't happen every time, but adding more datafeeds that have a filter attached seems to increase the frequency with which the error occurs.

    It took me some time to isolate this non-deterministic bug from the rest of my code. To recreate it, one needs a BTgymMultiData setup and a filter added to a datafeed; the easiest way I came up with is to use the multi_discrete_setup_intro example and change CasualConvStrategyMulti as follows:

    import backtrader as bt
    import backtrader.indicators as btind   # needed for btind.SimpleMovingAverage below
    import numpy as np                      # needed for np.asarray below
    from backtrader.resamplerfilter import Resampler
    
    def set_datalines(self):
        self.data_streams = {
            stream._name: stream for stream in self.datas
        }
    
        self.dnames['GBP'].addfilter(Resampler, timeframe=bt.TimeFrame.Minutes, compression=10)
        self.dnames['JPY'].addfilter(Resampler, timeframe=bt.TimeFrame.Minutes, compression=10)
        self.dnames['CHF'].addfilter(Resampler, timeframe=bt.TimeFrame.Minutes, compression=10)
    
        self.data.dim_sma = btind.SimpleMovingAverage(
            self.dnames['USD'],
            period=(np.asarray(self.features_parameters).max() + self.time_dim * 10)
        )
    

    notes:

    • for simplicity I used the Resampler filter and used it to change the data from 1-min to 10-min data
    • I kept 'USD' data without filters
    • I changed dim_sma to be 10 times longer (to fit the resampler)
    • In the beginning I thought it was data related, but it happens with the sine test data as well.
    • I started testing prior to your latest commit, and after updating I think multi_discrete_setup_intro has another unrelated issue (but I didn't investigate).

    @Kismuz, I've been trying to play around with this issue for the last two weeks and have encountered more errors along the way, but I'm not sure all of them are relevant, so for now I think this is enough info.

    error framework 
    opened by JaCoderX 35
  • Important: BTGym_2.0 announce and discussion

    Dear colleagues, as the author and maintainer of BTGym, which has been specially designed to integrate machine learning models with trading strategies, I witness a clear absence of algorithmic trading pipelines featuring proper support of modern artificial intelligence methods. With the research field actively expanding, most of the promising results remain “proof of concept”, while live trading is still dominated by convenience models and engineered strategies.

    To the best of my knowledge, BTgym is one of the very few open-source backtest systems providing parallelised event-driven execution and an out-of-the-box integrated training framework for modern reinforcement learning algorithms.

    Still, it is evident that conventional parallelised execution of single-threaded backtests poses a major limitation on model training speed. Aside from that, integrated workflows from experiments to robust live serving of trained decision-making models are yet to be implemented.

    With this in mind, I sketched out a view of a software ecosystem to facilitate the diffusion of artificial intelligence methods into the algorithmic trading space. After almost two years of developing and maintaining the current BTGym project, with all the ups and downs of solo open-source GitHub survival, I would like to gather as much expertise as possible before stepping into the next big round.
    With this message I kindly ask you to join the discussion and express your professional judgment, expectations and proposals to shape a clear view of the next generation of BTGym. You will find links to related resources below. Your input is valuable and highly appreciated.

    Yours, Andrew Muzikin, BTGym author

    BTGym_2.0 WhitePaper Draft

    Short Note: “Why we need new backtest frameworks…”

    proposal information discussion 
    opened by Kismuz 29
  • When running UNREAL example

    Hello,

    When I ran the UNREAL example I got the following output.

    /home/jsalvado/anaconda3/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: compiletime version 3.5 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.6 return f(*args, **kwds) </home/jsalvado/tmp/test_gym_unreal> already exists. Override[y/n]? y WARNING:Launcher:Files in </home/jsalvado/tmp/test_gym_unreal> purged. 2017-11-27 16:52:54.666375: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA E1127 16:52:54.670453114 18319 ev_epoll1_linux.c:1051] grpc epoll fd: 7 2017-11-27 16:52:54.671044: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA E1127 16:52:54.671596938 18320 ev_epoll1_linux.c:1051] grpc epoll fd: 8 2017-11-27 16:52:54.676864: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:215] Initialize GrpcChannelCache for job ps -> {0 -> localhost:12230} 2017-11-27 16:52:54.676891: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:215] Initialize GrpcChannelCache for job worker -> {0 -> 127.0.0.1:12231, 1 -> 127.0.0.1:12232, 2 -> 127.0.0.1:12233} 2017-11-27 16:52:54.677761: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:215] Initialize GrpcChannelCache for job ps -> {0 -> 127.0.0.1:12230} 2017-11-27 16:52:54.677801: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:215] Initialize GrpcChannelCache for job worker -> {0 -> localhost:12231, 1 -> 127.0.0.1:12232, 2 -> 127.0.0.1:12233} 2017-11-27 16:52:54.677844: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:324] Started server with target: grpc://localhost:12230 2017-11-27 16:52:54.679672: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:324] Started server with target: grpc://localhost:12231 Press Ctrl-C or [Kernel]->[Interrupt] to stop training and close launcher. 
2017-11-27 16:52:59.683070: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA E1127 16:52:59.683829609 18359 ev_epoll1_linux.c:1051] grpc epoll fd: 9 2017-11-27 16:52:59.686654: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA E1127 16:52:59.687214727 18360 ev_epoll1_linux.c:1051] grpc epoll fd: 10 2017-11-27 16:52:59.689904: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:215] Initialize GrpcChannelCache for job ps -> {0 -> 127.0.0.1:12230} 2017-11-27 16:52:59.689941: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:215] Initialize GrpcChannelCache for job worker -> {0 -> 127.0.0.1:12231, 1 -> localhost:12232, 2 -> 127.0.0.1:12233} 2017-11-27 16:52:59.690832: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:324] Started server with target: grpc://localhost:12232 2017-11-27 16:52:59.693367: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:215] Initialize GrpcChannelCache for job ps -> {0 -> 127.0.0.1:12230} 2017-11-27 16:52:59.693405: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:215] Initialize GrpcChannelCache for job worker -> {0 -> 127.0.0.1:12231, 1 -> 127.0.0.1:12232, 2 -> localhost:12233} 2017-11-27 16:52:59.694368: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:324] Started server with target: grpc://localhost:12233 2017-11-27 16:53:04.660194: I tensorflow/core/distributed_runtime/master_session.cc:1004] Start master session f67f491b8dcaa755 with config: intra_op_parallelism_threads: 1 device_filters: "/job:ps" device_filters: "/job:worker/task:0/cpu:0" inter_op_parallelism_threads: 2 2017-11-27 16:53:09.148756: I tensorflow/core/distributed_runtime/master_session.cc:1004] Start master session dd584888bb6349a4 with config: intra_op_parallelism_threads: 1 device_filters: "/job:ps" device_filters: "/job:worker/task:1/cpu:0" inter_op_parallelism_threads: 2 2017-11-27 16:53:09.294430: I tensorflow/core/distributed_runtime/master_session.cc:1004] Start master session 785d00122230e0e1 with config: intra_op_parallelism_threads: 1 device_filters: "/job:ps" device_filters: "/job:worker/task:2/cpu:0" inter_op_parallelism_threads: 2 WARNING:worker_1:worker_1: started training at step: 0 WARNING:worker_2:worker_2: started training at step: 0 WARNING:worker_0:worker_0: started training at step: 0 WARNING:Env:Data_master reset() called prior to reset_data() with [possibly inconsistent] defaults. WARNING:Env:Dataset not ready, waiting time left: 298 sec. WARNING:Env:Dataset not ready, waiting time left: 298 sec.

    Do you know what can be done to make it work? Thank you very much.

    João Salvado

    question reserach algorithm discussion 
    opened by joaosalvado10 29
  • Features Extraction Discussion

    Following your work on causal convolution and a few issues mentioning the subject, I started to dig into it a bit and it looks like a very good research direction.

    I have gained my understanding on the topic from those 2 papers: WaveNet Fast WaveNet

    If my intuition is correct, the number of layers (assuming 2^n dilation) we use has an impact on how far back the model 'sees' into historical data and learns from it, so more layers mean a better ability to learn long-term patterns. Does that make sense?

    Looking at your work in '../research/casual_conv/', how do I know how many layers the network uses and what the dilation parameter is? Is it possible to change them?

    My goal is to use high-resolution data (1-min) but let the network learn long-term features.

    Also, could you explain in a nutshell what you are trying to achieve in your work on 'CasualConvStrategy'?

    question algorithm discussion 
    opened by JaCoderX 27
  • Other Timeframes

    I know a current limitation is accepting Forex 1-min data (only Forex?), but my datasets use bigger timeframes.

    This is the stack trace when the timeframe is changed:

    Process BTgymServer-2:
    Traceback (most recent call last):
      File "/usr/lib/python3.5/multiprocessing/process.py", line 249, in _bootstrap
        self.run()
    
      File "/home/adrian/btgym/btgym/server.py", line 405, in run
        episode = cerebro.run(stdstats=True, preload=False, oldbuysell=True)[0]
    
      File "/usr/local/lib/python3.5/dist-packages/backtrader/cerebro.py", line 1142, in run
        runstrat = self.runstrategies(iterstrat)
    
      File "/usr/local/lib/python3.5/dist-packages/backtrader/cerebro.py", line 1327, in runstrategies
        self.stop_writers(runstrats)
    
      File "/usr/local/lib/python3.5/dist-packages/backtrader/cerebro.py", line 1352, in stop_writers
        datainfos['Data%d' % i] = data.getwriterinfo()
    
      File "/usr/local/lib/python3.5/dist-packages/backtrader/dataseries.py", line 101, in getwriterinfo
        info['Timeframe'] = TimeFrame.TName(self._timeframe)
    
      File "/usr/local/lib/python3.5/dist-packages/backtrader/dataseries.py", line 57, in TName
        return cls.Names[tframe]
    IndexError: list index out of range
    

    Any idea?

    development feature framework information discussion 
    opened by AdrianP- 23
  • error about queue.Empty

    Thanks for great work :)

    I have an issue while running examples - a3c_random_on_synth_or_real_data ...

    I got several <INFO:tensorflow:Error reported to Coordinator: <class 'queue.Empty'>> messages and then it stopped.

    Is there any way I can fix it? Thank you so much. Kim.


    [2018-01-11 20:50:20,439] Error reported to Coordinator: <class 'queue.Empty'>, Process Worker-6:
    Traceback (most recent call last):
      File "/home/joowonkim/anaconda3/lib/python3.5/multiprocessing/process.py", line 249, in _bootstrap
        self.run()
      File "/home/joowonkim/바탕화면/git/btgym/btgym/algorithms/worker.py", line 241, in run
        trainer.process(sess)
      File "/home/joowonkim/바탕화면/git/btgym/btgym/algorithms/aac.py", line 747, in process
        data = self.get_data()
      File "/home/joowonkim/바탕화면/git/btgym/btgym/algorithms/aac.py", line 594, in get_data
        data_streams = [get_it() for get_it in self.data_getter]
      File "/home/joowonkim/바탕화면/git/btgym/btgym/algorithms/aac.py", line 594, in <listcomp>
        data_streams = [get_it() for get_it in self.data_getter]
      File "/home/joowonkim/바탕화면/git/btgym/btgym/algorithms/rollout.py", line 33, in pull_rollout_from_queue
        return queue.get(timeout=600.0)
      File "/home/joowonkim/anaconda3/lib/python3.5/queue.py", line 172, in get
        raise Empty
    queue.Empty

    INFO:tensorflow:global/global_step/sec: 0

    [2018-01-11 20:51:38,860] global/global_step/sec: 0

    INFO:tensorflow:Error reported to Coordinator: <class 'queue.Empty'>,

    [2018-01-11 20:51:48,678] Error reported to Coordinator: <class 'queue.Empty'>,


    and stopped

    error framework 
    opened by knn940506 21
  • setting_up_environment_basic fails on python 3.6, Windows 7

    ---------------------------------------------------------------------------
    TypeError                                 Traceback (most recent call last)
    <ipython-input-2-914694f81f03> in <module>()
          3 
          4 
    ----> 5 MyEnvironment = BTgymEnv(filename='./data/DAT_ASCII_EURUSD_M1_2016.csv',)
          6 
          7 # Print environment configuration:
    
    c:\users\e\github\btgym-master\btgym\envs\backtrader.py in __init__(self, *args, **kwargs)
        246         # Connect/Start data server (and get dataset statistic):
        247         self.log.info('Connecting data_server...')
    --> 248         self._start_data_server()
        249         self.log.info('...done.')
        250         # ENGINE preparation:
    
    c:\users\e\github\btgym-master\btgym\envs\backtrader.py in _start_data_server(self)
        736             )
        737             self.data_server.daemon = False
    --> 738             self.data_server.start()
        739             # Wait for server to startup
        740             time.sleep(1)
    
    C:\ProgramData\Anaconda3\lib\multiprocessing\process.py in start(self)
        103                'daemonic processes are not allowed to have children'
        104         _cleanup()
    --> 105         self._popen = self._Popen(self)
        106         self._sentinel = self._popen.sentinel
        107         _children.add(self)
    
    C:\ProgramData\Anaconda3\lib\multiprocessing\context.py in _Popen(process_obj)
        221     @staticmethod
        222     def _Popen(process_obj):
    --> 223         return _default_context.get_context().Process._Popen(process_obj)
        224 
        225 class DefaultContext(BaseContext):
    
    C:\ProgramData\Anaconda3\lib\multiprocessing\context.py in _Popen(process_obj)
        320         def _Popen(process_obj):
        321             from .popen_spawn_win32 import Popen
    --> 322             return Popen(process_obj)
        323 
        324     class SpawnContext(BaseContext):
    
    C:\ProgramData\Anaconda3\lib\multiprocessing\popen_spawn_win32.py in __init__(self, process_obj)
         63             try:
         64                 reduction.dump(prep_data, to_child)
    ---> 65                 reduction.dump(process_obj, to_child)
         66             finally:
         67                 set_spawning_popen(None)
    
    C:\ProgramData\Anaconda3\lib\multiprocessing\reduction.py in dump(obj, file, protocol)
         58 def dump(obj, file, protocol=None):
         59     '''Replacement for pickle.dump() using ForkingPickler.'''
    ---> 60     ForkingPickler(file, protocol).dump(obj)
         61 
         62 #
    
    TypeError: can't pickle _thread.RLock objects
    
    bug framework unresolved derelict 
    opened by erlenlok 18
  • Making predictions

    Hi Kismuz, first of all congratulations on this really great job!

    OK, I have trained my workers, and here are some questions:

    Do you have some code to test predictions with their model? Do you have some code to extract weights from the checkpoint file and deploy the model? If not, could you provide some guideline instructions for doing that? How do you plan to implement the production test?

    Thank you in advance.

    question framework algorithm discussion 
    opened by parrondo 17
  • signal.pause() - workers exit, but signal never received -- software issue? (debian linux)

    Can we make an update to README.md that defines what versions of the software are needed?

    • matplotlib==2.0.2
    • tensorflow>=1.5
    • backtrader
    • anything else

    The reason for the issue is that with some configurations the backtrader graphs show after training, and with other configurations the Worker threads exit but things stop and hang there.

    Thanks for any input!

    framework refactoring compatibility 
    opened by signalprime 16
  • Use btgym custom environment

    #108

    Referring to https://github.com/Kismuz/btgym/issues/108, I have been trying to use btgym as a standalone custom environment with stable baselines. However, BTgymEnv cannot pass the environment check in check_env, and I got the following error:

    reference: https://stable-baselines3.readthedocs.io/en/master/guide/custom_env.html

    /home/PycharmProjects/btgym/venv/lib/python3.7/site-packages/stable_baselines3/common/env_checker.py:53: UserWarning: The observation space is a Dict but the environment is not a gym.GoalEnv (cf https://github.com/openai/gym/blob/master/gym/core.py), this is currently not supported by Stable Baselines (cf https://github.com/hill-a/stable-baselines/issues/133), you will need to use a custom policy. 
      "The observation space is a Dict but the environment is not a gym.GoalEnv "
    /home/PycharmProjects/btgym/venv/lib/python3.7/site-packages/stable_baselines3/common/env_checker.py:70: UserWarning: The action space is not based off a numpy array. Typically this means it's either a Dict or Tuple space. This type of action space is currently not supported by Stable Baselines 3. You should try to flatten the action using a wrapper.
      "The action space is not based off a numpy array. Typically this means it's either a Dict or Tuple space. "
    Traceback (most recent call last):
      File "/home/PycharmProjects/btgym/dev/SB3_testing.py", line 160, in <module>
        check_env(env, warn=True)
      File "/home/PycharmProjects/btgym/venv/lib/python3.7/site-packages/stable_baselines3/common/env_checker.py", line 237, in check_env
        _check_returned_values(env, observation_space, action_space)
      File "/home/PycharmProjects/btgym/venv/lib/python3.7/site-packages/stable_baselines3/common/env_checker.py", line 130, in _check_returned_values
        assert isinstance(info, dict), "The `info` returned by `step()` must be a python dictionary"
    AssertionError: The `info` returned by `step()` must be a python dictionary
    

    May I ask if anyone has a solution to address this issue?

    Steps to reproduce:
    
    import sys
    sys.path.insert(0, '../../../..')
    
    import IPython.display as Display
    import PIL.Image as Image
    import numpy as np
    import backtrader as bt
    import random
    
    from gym import spaces
    from btgym import BTgymEnv, BTgymBaseStrategy
    from btgym.datafeed.derivative import BTgymDataset2
    
    
    def show_rendered_image(rgb_array):
    
        Display.display(Image.fromarray(rgb_array))
    
    
    def render_all_modes(env):
    
        for mode in env.metadata['render.modes']:
            print('[{}] mode:'.format(mode))
            show_rendered_image(env.render(mode))
    
    
    def take_some_steps(env, some_steps):
    
        for step in range(some_steps):
            rnd_action = env.action_space.sample()
            o, r, d, i = env.step(rnd_action)
            if d:
                print('Episode finished,')
                break
        print(step + 1, 'actions made.\n')
    
    
    def under_the_hood(env):
    
        for attr in ['dataset', 'strategy', 'engine', 'renderer', 'network_address']:
            print('\nEnv.{}: {}'.format(attr, getattr(env, attr)))
    
        for params_name, params_dict in env.params.items():
            print('\nParameters [{}]: '.format(params_name))
            for key, value in params_dict.items():
                print('{} : {}'.format(key, value))
    
    
    class MyStrategy(BTgymBaseStrategy):
    
    
        def get_price_gradients_state(self):
    
            sigmoid = lambda x: 1 / (1 + np.exp(-x))
    
            T = 1.2e+4
    
            X = self.raw_state
    
            dX = np.gradient(X)[0]
    
            return sigmoid(dX * T)
    
        def get_reward(self):
    
            return float(np.log(self.stats.broker.value[0] / self.env.broker.startingcash))
    
    
    MyDataset = BTgymDataset2(
        filename=r'/home/PycharmProjects/btgym/examples/data/DAT_ASCII_EURUSD_M1_2016.csv',
         start_weekdays=[0, 1,],
         episode_duration={'days': 2, 'hours': 23, 'minutes': 55},
    
    )
    
    
    MyCerebro = bt.Cerebro()
    
    MyCerebro.addstrategy(
        MyStrategy,
        state_shape={
            'raw': spaces.Box(low=-10, high=10, shape=(4,4)),
            'price_gradients': spaces.Box(low=0, high=1, shape=(4,4))
        },
        drawdown_call=99,
        skip_frame=5,
    )
    
    MyCerebro.broker.setcash(1000.0)
    MyCerebro.broker.setcommission(commission=0.002)
    MyCerebro.addsizer(bt.sizers.SizerFix, stake=20)
    MyCerebro.addanalyzer(bt.analyzers.DrawDown)
    
    env = BTgymEnv(
        dataset=MyDataset,
        episode_duration={'days': 0, 'hours': 5, 'minutes': 55}, # ignored!
        engine=MyCerebro,
        strategy='NotUsed',  # ignored!
        state_shape=(9, 99), # ignored!
        start_cash=1.0,  # ignored!
        render_modes=['episode', 'human', 'price_gradients'],
        render_state_as_image=True,
        render_ylabel='Price Gradient',
        render_size_human=(10,4),
        render_size_state=(10,4),
        render_plotstyle='ggplot',
        verbose=0,
    )
    
    under_the_hood(env)
    env.reset()
    take_some_steps(env, 100)
    render_all_modes(env)
    
    print('-------------------------checking spec-------------------------------')
    print("Observation space:", env.observation_space)
    print("Shape:", env.observation_space.shape)
    print("Action space:", env.action_space)
    obs = env.reset()
    action = env.action_space.sample()
    obs, reward, done, info = env.step(action)
    print(reward, done, info)
    
    ################################################## If you comment out the following 2 lines, the code works fine.
    from stable_baselines3.common.env_checker import check_env
    check_env(env, warn=True)
    
    env.close()
    
    
    
    opened by okvlam 0
  • Migration to Tensorflow2

    Automatic and manual changes for migration from Tensorflow 1 to 2.

    Because there are no tests I checked some notebooks from examples:

    • very basic environment setup - works
    • setting up environment basic - works, except for the last cell (problem requesting data from the data server)
    • setting up environment full - works
    • guided ac3 - runtime error

    I would need some help to finish this PR.

    opened by woj-i 11
  • Support Tensorflow 2

    Expected behaviour:

    Environment fully supports TF2

    Actual behaviour:

    Only 1.5 is supported for certain scripts

    I ran the automatic upgrade tool on the library and it generated the following report. For the most part it moved everything along, but there are several errors in the script for fully deprecated classes that we'll need to reference another way.

    TensorFlow 2.0 Upgrade Script
    -----------------------------
    Converted 109 files
    Detected 34 issues that require attention
    --------------------------------------------------------------------------------
    --------------------------------------------------------------------------------
    File: btgym/research/b_vae_a3c.py
    --------------------------------------------------------------------------------
    btgym/research/b_vae_a3c.py:54:32: WARNING: tf.contrib.rnn.LayerNormBasicLSTMCell requires manual check. (Manual edit required) `tf.contrib.rnn.LayerNormBasicLSTMCell` has been migrated to `tfa.rnn.LayerNormLSTMCell` in TensorFlow Addons. The API spec may have changed during the migration. Please see https://github.com/tensorflow/addons for more info.
    btgym/research/b_vae_a3c.py:54:32: WARNING: Using member tf.contrib.rnn.LayerNormBasicLSTMCell in deprecated module tf.contrib.rnn. (Manual edit required) tf.contrib.rnn.* has been deprecated, and widely used cells/functions will be moved to tensorflow/addons repository. Please check it there and file Github issues if necessary.
    btgym/research/b_vae_a3c.py:54:32: ERROR: Using member tf.contrib.rnn.LayerNormBasicLSTMCell in deprecated module tf.contrib. tf.contrib.rnn.LayerNormBasicLSTMCell cannot be converted automatically. tf.contrib will not be distributed with TensorFlow 2.0, please consider an alternative in non-contrib TensorFlow, a community-maintained repository such as tensorflow/addons, or fork the required code.
    --------------------------------------------------------------------------------
    File: btgym/research/gps/policy.py
    --------------------------------------------------------------------------------
    btgym/research/gps/policy.py:19:27: WARNING: tf.contrib.rnn.LayerNormBasicLSTMCell requires manual check. (Manual edit required) `tf.contrib.rnn.LayerNormBasicLSTMCell` has been migrated to `tfa.rnn.LayerNormLSTMCell` in TensorFlow Addons. The API spec may have changed during the migration. Please see https://github.com/tensorflow/addons for more info.
    btgym/research/gps/policy.py:19:27: WARNING: Using member tf.contrib.rnn.LayerNormBasicLSTMCell in deprecated module tf.contrib.rnn. (Manual edit required) tf.contrib.rnn.* has been deprecated, and widely used cells/functions will be moved to tensorflow/addons repository. Please check it there and file Github issues if necessary.
    btgym/research/gps/policy.py:19:27: ERROR: Using member tf.contrib.rnn.LayerNormBasicLSTMCell in deprecated module tf.contrib. tf.contrib.rnn.LayerNormBasicLSTMCell cannot be converted automatically. tf.contrib will not be distributed with TensorFlow 2.0, please consider an alternative in non-contrib TensorFlow, a community-maintained repository such as tensorflow/addons, or fork the required code.
    --------------------------------------------------------------------------------
    File: btgym/research/casual_conv/networks.py
    --------------------------------------------------------------------------------
    btgym/research/casual_conv/networks.py:140:42: WARNING: Using member tf.contrib.seq2seq.LuongAttention in deprecated module tf.contrib.seq2seq. (Manual edit required) tf.contrib.seq2seq.* have been migrated to `tfa.seq2seq.*` in TensorFlow Addons. Please see https://github.com/tensorflow/addons for more info.
    btgym/research/casual_conv/networks.py:140:42: ERROR: Using member tf.contrib.seq2seq.LuongAttention in deprecated module tf.contrib. tf.contrib.seq2seq.LuongAttention cannot be converted automatically. tf.contrib will not be distributed with TensorFlow 2.0, please consider an alternative in non-contrib TensorFlow, a community-maintained repository such as tensorflow/addons, or fork the required code.
    btgym/research/casual_conv/networks.py:204:30: WARNING: Using member tf.contrib.seq2seq.LuongAttention in deprecated module tf.contrib.seq2seq. (Manual edit required) tf.contrib.seq2seq.* have been migrated to `tfa.seq2seq.*` in TensorFlow Addons. Please see https://github.com/tensorflow/addons for more info.
    btgym/research/casual_conv/networks.py:204:30: ERROR: Using member tf.contrib.seq2seq.LuongAttention in deprecated module tf.contrib. tf.contrib.seq2seq.LuongAttention cannot be converted automatically. tf.contrib will not be distributed with TensorFlow 2.0, please consider an alternative in non-contrib TensorFlow, a community-maintained repository such as tensorflow/addons, or fork the required code.
    --------------------------------------------------------------------------------
    File: btgym/research/encoder_test/policy.py
    --------------------------------------------------------------------------------
    btgym/research/encoder_test/policy.py:20:32: WARNING: tf.contrib.rnn.LayerNormBasicLSTMCell requires manual check. (Manual edit required) `tf.contrib.rnn.LayerNormBasicLSTMCell` has been migrated to `tfa.rnn.LayerNormLSTMCell` in TensorFlow Addons. The API spec may have changed during the migration. Please see https://github.com/tensorflow/addons for more info.
    btgym/research/encoder_test/policy.py:20:32: WARNING: Using member tf.contrib.rnn.LayerNormBasicLSTMCell in deprecated module tf.contrib.rnn. (Manual edit required) tf.contrib.rnn.* has been deprecated, and widely used cells/functions will be moved to tensorflow/addons repository. Please check it there and file Github issues if necessary.
    btgym/research/encoder_test/policy.py:20:32: ERROR: Using member tf.contrib.rnn.LayerNormBasicLSTMCell in deprecated module tf.contrib. tf.contrib.rnn.LayerNormBasicLSTMCell cannot be converted automatically. tf.contrib will not be distributed with TensorFlow 2.0, please consider an alternative in non-contrib TensorFlow, a community-maintained repository such as tensorflow/addons, or fork the required code.
    --------------------------------------------------------------------------------
    File: btgym/algorithms/aac.py
    --------------------------------------------------------------------------------
    btgym/algorithms/aac.py:801:27: WARNING: tf.get_variable requires manual check. tf.get_variable returns ResourceVariables by default in 2.0, which have well-defined semantics and are stricter about shapes. You can disable this behavior by passing use_resource=False, or by calling tf.compat.v1.disable_resource_variables().
    btgym/algorithms/aac.py:814:30: WARNING: tf.get_variable requires manual check. tf.get_variable returns ResourceVariables by default in 2.0, which have well-defined semantics and are stricter about shapes. You can disable this behavior by passing use_resource=False, or by calling tf.compat.v1.disable_resource_variables().
    --------------------------------------------------------------------------------
    File: btgym/algorithms/worker.py
    --------------------------------------------------------------------------------
    btgym/algorithms/worker.py:38:8: WARNING: *.save requires manual check. (This warning is only applicable if the code saves a tf.Keras model) Keras model.save now saves to the Tensorflow SavedModel format by default, instead of HDF5. To continue saving to HDF5, add the argument save_format='h5' to the save() function.
    btgym/algorithms/worker.py:178:8: WARNING: *.save requires manual check. (This warning is only applicable if the code saves a tf.Keras model) Keras model.save now saves to the Tensorflow SavedModel format by default, instead of HDF5. To continue saving to HDF5, add the argument save_format='h5' to the save() function.
    --------------------------------------------------------------------------------
    File: btgym/algorithms/nn/layers.py
    --------------------------------------------------------------------------------
    btgym/algorithms/nn/layers.py:44:12: WARNING: tf.get_variable requires manual check. tf.get_variable returns ResourceVariables by default in 2.0, which have well-defined semantics and are stricter about shapes. You can disable this behavior by passing use_resource=False, or by calling tf.compat.v1.disable_resource_variables().
    btgym/algorithms/nn/layers.py:45:12: WARNING: tf.get_variable requires manual check. tf.get_variable returns ResourceVariables by default in 2.0, which have well-defined semantics and are stricter about shapes. You can disable this behavior by passing use_resource=False, or by calling tf.compat.v1.disable_resource_variables().
    btgym/algorithms/nn/layers.py:74:15: WARNING: tf.get_variable requires manual check. tf.get_variable returns ResourceVariables by default in 2.0, which have well-defined semantics and are stricter about shapes. You can disable this behavior by passing use_resource=False, or by calling tf.compat.v1.disable_resource_variables().
    btgym/algorithms/nn/layers.py:75:18: WARNING: tf.get_variable requires manual check. tf.get_variable returns ResourceVariables by default in 2.0, which have well-defined semantics and are stricter about shapes. You can disable this behavior by passing use_resource=False, or by calling tf.compat.v1.disable_resource_variables().
    btgym/algorithms/nn/layers.py:80:19: WARNING: tf.get_variable requires manual check. tf.get_variable returns ResourceVariables by default in 2.0, which have well-defined semantics and are stricter about shapes. You can disable this behavior by passing use_resource=False, or by calling tf.compat.v1.disable_resource_variables().
    btgym/algorithms/nn/layers.py:81:22: WARNING: tf.get_variable requires manual check. tf.get_variable returns ResourceVariables by default in 2.0, which have well-defined semantics and are stricter about shapes. You can disable this behavior by passing use_resource=False, or by calling tf.compat.v1.disable_resource_variables().
    btgym/algorithms/nn/layers.py:97:12: WARNING: tf.get_variable requires manual check. tf.get_variable returns ResourceVariables by default in 2.0, which have well-defined semantics and are stricter about shapes. You can disable this behavior by passing use_resource=False, or by calling tf.compat.v1.disable_resource_variables().
    btgym/algorithms/nn/layers.py:99:12: WARNING: tf.get_variable requires manual check. tf.get_variable returns ResourceVariables by default in 2.0, which have well-defined semantics and are stricter about shapes. You can disable this behavior by passing use_resource=False, or by calling tf.compat.v1.disable_resource_variables().
    btgym/algorithms/nn/layers.py:129:12: WARNING: tf.get_variable requires manual check. tf.get_variable returns ResourceVariables by default in 2.0, which have well-defined semantics and are stricter about shapes. You can disable this behavior by passing use_resource=False, or by calling tf.compat.v1.disable_resource_variables().
    btgym/algorithms/nn/layers.py:131:12: WARNING: tf.get_variable requires manual check. tf.get_variable returns ResourceVariables by default in 2.0, which have well-defined semantics and are stricter about shapes. You can disable this behavior by passing use_resource=False, or by calling tf.compat.v1.disable_resource_variables().
    btgym/algorithms/nn/layers.py:153:12: WARNING: tf.get_variable requires manual check. tf.get_variable returns ResourceVariables by default in 2.0, which have well-defined semantics and are stricter about shapes. You can disable this behavior by passing use_resource=False, or by calling tf.compat.v1.disable_resource_variables().
    btgym/algorithms/nn/layers.py:155:12: WARNING: tf.get_variable requires manual check. tf.get_variable returns ResourceVariables by default in 2.0, which have well-defined semantics and are stricter about shapes. You can disable this behavior by passing use_resource=False, or by calling tf.compat.v1.disable_resource_variables().
    btgym/algorithms/nn/layers.py:173:12: WARNING: tf.get_variable requires manual check. tf.get_variable returns ResourceVariables by default in 2.0, which have well-defined semantics and are stricter about shapes. You can disable this behavior by passing use_resource=False, or by calling tf.compat.v1.disable_resource_variables().
    btgym/algorithms/nn/layers.py:175:12: WARNING: tf.get_variable requires manual check. tf.get_variable returns ResourceVariables by default in 2.0, which have well-defined semantics and are stricter about shapes. You can disable this behavior by passing use_resource=False, or by calling tf.compat.v1.disable_resource_variables().
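
The `tf.get_variable` warnings above are informational: the converted code keeps running on TF 2.x, and the tool only asks to confirm that nothing in `layers.py` relies on TF1 reference-variable quirks such as undefined read ordering or shape-changing assignments. If that legacy behaviour does turn out to be needed, both opt-outs named in the message are available; a minimal sketch (the variable name and shape are illustrative, not taken from `layers.py`):

    import tensorflow as tf

    tf.compat.v1.disable_eager_execution()      # btgym graphs are still built TF1-style
    tf.compat.v1.disable_resource_variables()   # global opt-out, call before building the graph

    # ...or keep RefVariable semantics for a single variable only:
    w = tf.compat.v1.get_variable(
        'w', shape=[64, 32],
        initializer=tf.compat.v1.glorot_uniform_initializer(),
        use_resource=False,
    )
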
    --------------------------------------------------------------------------------
    File: btgym/algorithms/policy/stacked_lstm.py
    --------------------------------------------------------------------------------
    btgym/algorithms/policy/stacked_lstm.py:30:32: WARNING: tf.contrib.rnn.LayerNormBasicLSTMCell requires manual check. (Manual edit required) `tf.contrib.rnn.LayerNormBasicLSTMCell` has been migrated to `tfa.rnn.LayerNormLSTMCell` in TensorFlow Addons. The API spec may have changed during the migration. Please see https://github.com/tensorflow/addons for more info.
    btgym/algorithms/policy/stacked_lstm.py:30:32: WARNING: Using member tf.contrib.rnn.LayerNormBasicLSTMCell in deprecated module tf.contrib.rnn. (Manual edit required) tf.contrib.rnn.* has been deprecated, and widely used cells/functions will be moved to tensorflow/addons repository. Please check it there and file Github issues if necessary.
    btgym/algorithms/policy/stacked_lstm.py:30:32: ERROR: Using member tf.contrib.rnn.LayerNormBasicLSTMCell in deprecated module tf.contrib. tf.contrib.rnn.LayerNormBasicLSTMCell cannot be converted automatically. tf.contrib will not be distributed with TensorFlow 2.0, please consider an alternative in non-contrib TensorFlow, a community-maintained repository such as tensorflow/addons, or fork the required code.
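
This is the one item in the summary that genuinely blocks automatic conversion: `tf.contrib` is not shipped with TF 2.x, so the layer-normalised LSTM cell has to be replaced by hand. A minimal sketch of the substitution via TensorFlow Addons (the unit count and dropout value are illustrative, and the dropout mapping is approximate since the Addons cell follows Keras dropout semantics):

    # pip install tensorflow-addons
    import tensorflow_addons as tfa

    # TF1: cell = tf.contrib.rnn.LayerNormBasicLSTMCell(num_units=64, dropout_keep_prob=0.9)
    cell = tfa.rnn.LayerNormLSTMCell(units=64, dropout=0.1)   # dropout ~ 1 - dropout_keep_prob
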
    ================================================================================
    Detailed log follows:
    
    ================================================================================
    ================================================================================
    Input tree: 'btgym'
    ================================================================================
    --------------------------------------------------------------------------------
    Processing file 'btgym/spaces.py'
     outputting to 'btgym2/spaces.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/__init__.py'
     outputting to 'btgym2/__init__.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/dataserver.py'
     outputting to 'btgym2/dataserver.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/server.py'
     outputting to 'btgym2/server.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/monitor/__init__.py'
     outputting to 'btgym2/monitor/__init__.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/monitor/tensorboard2.py'
     outputting to 'btgym2/monitor/tensorboard2.py'
    --------------------------------------------------------------------------------
    
    72:22: INFO: tf.summary.FileWriter requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    72:22: INFO: Renamed 'tf.summary.FileWriter' to 'tf.compat.v1.summary.FileWriter'
    72:63: INFO: Renamed 'tf.get_default_graph' to 'tf.compat.v1.get_default_graph'
    80:38: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    81:26: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    81:26: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    85:38: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    86:26: INFO: tf.summary.image requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    86:26: INFO: Renamed 'tf.summary.image' to 'tf.compat.v1.summary.image'
    90:38: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    91:26: INFO: tf.summary.histogram requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    91:26: INFO: Renamed 'tf.summary.histogram' to 'tf.compat.v1.summary.histogram'
    95:38: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    96:26: INFO: tf.summary.histogram requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    96:26: INFO: Renamed 'tf.summary.histogram' to 'tf.compat.v1.summary.histogram'
    98:23: INFO: tf.summary.merge requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    98:23: INFO: Renamed 'tf.summary.merge' to 'tf.compat.v1.summary.merge'
    --------------------------------------------------------------------------------
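
`tensorboard2.py` only needs its summary-writing logic touched, exactly as the messages say: the individual `scalar()`/`image()`/`histogram()` calls survive, but the `FileWriter`-plus-`merge` pattern is replaced by a writer context and an explicit `step`. A minimal sketch of the TF 2.x API (log directory and tag name are illustrative):

    import tensorflow as tf

    writer = tf.summary.create_file_writer('./logs/btgym')
    with writer.as_default():
        tf.summary.scalar('episode/total_reward', 42.0, step=1)
        writer.flush()
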
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/monitor/tensorboard.py'
     outputting to 'btgym2/monitor/tensorboard.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/strategy_gen_2.py'
     outputting to 'btgym2/research/strategy_gen_2.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/__init__.py'
     outputting to 'btgym2/research/__init__.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/strategy_gen_4.py'
     outputting to 'btgym2/research/strategy_gen_4.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/b_vae_a3c.py'
     outputting to 'btgym2/research/b_vae_a3c.py'
    --------------------------------------------------------------------------------
    
    54:32: WARNING: tf.contrib.rnn.LayerNormBasicLSTMCell requires manual check. (Manual edit required) `tf.contrib.rnn.LayerNormBasicLSTMCell` has been migrated to `tfa.rnn.LayerNormLSTMCell` in TensorFlow Addons. The API spec may have changed during the migration. Please see https://github.com/tensorflow/addons for more info.
    54:32: WARNING: Using member tf.contrib.rnn.LayerNormBasicLSTMCell in deprecated module tf.contrib.rnn. (Manual edit required) tf.contrib.rnn.* has been deprecated, and widely used cells/functions will be moved to tensorflow/addons repository. Please check it there and file Github issues if necessary.
    54:32: ERROR: Using member tf.contrib.rnn.LayerNormBasicLSTMCell in deprecated module tf.contrib. tf.contrib.rnn.LayerNormBasicLSTMCell cannot be converted automatically. tf.contrib will not be distributed with TensorFlow 2.0, please consider an alternative in non-contrib TensorFlow, a community-maintained repository such as tensorflow/addons, or fork the required code.
    110:25: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    111:26: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    114:29: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    115:30: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    117:30: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    118:31: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    136:26: INFO: Added keywords to args of function 'tf.shape'
    161:51: INFO: Added keywords to args of function 'tf.shape'
    170:47: INFO: Added keywords to args of function 'tf.shape'
    180:44: INFO: Added keywords to args of function 'tf.shape'
    191:46: INFO: Added keywords to args of function 'tf.shape'
    197:40: INFO: Added keywords to args of function 'tf.shape'
    198:49: INFO: Added keywords to args of function 'tf.shape'
    199:56: INFO: Added keywords to args of function 'tf.shape'
    202:35: INFO: Added keywords to args of function 'tf.transpose'
    215:49: INFO: Added keywords to args of function 'tf.shape'
    234:42: INFO: Added keywords to args of function 'tf.shape'
    239:40: INFO: Added keywords to args of function 'tf.shape'
    240:49: INFO: Added keywords to args of function 'tf.shape'
    241:56: INFO: Added keywords to args of function 'tf.shape'
    247:47: INFO: Added keywords to args of function 'tf.shape'
    277:26: INFO: Added keywords to args of function 'tf.shape'
    400:29: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    424:31: INFO: Renamed 'tf.placeholder_with_default' to 'tf.compat.v1.placeholder_with_default'
    429:26: INFO: Renamed 'tf.get_collection' to 'tf.compat.v1.get_collection'
    429:44: INFO: Renamed 'tf.GraphKeys' to 'tf.compat.v1.GraphKeys'
    431:26: INFO: Renamed 'tf.get_collection' to 'tf.compat.v1.get_collection'
    431:44: INFO: Renamed 'tf.GraphKeys' to 'tf.compat.v1.GraphKeys'
    431:75: INFO: Renamed 'tf.get_variable_scope' to 'tf.compat.v1.get_variable_scope'
    432:26: INFO: Renamed 'tf.get_collection' to 'tf.compat.v1.get_collection'
    432:44: INFO: Renamed 'tf.GraphKeys' to 'tf.compat.v1.GraphKeys'
    432:75: INFO: Renamed 'tf.get_variable_scope' to 'tf.compat.v1.get_variable_scope'
    435:24: INFO: Renamed 'tf.get_collection' to 'tf.compat.v1.get_collection'
    435:42: INFO: Renamed 'tf.GraphKeys' to 'tf.compat.v1.GraphKeys'
    435:76: INFO: Renamed 'tf.get_variable_scope' to 'tf.compat.v1.get_variable_scope'
    468:19: INFO: Renamed 'tf.get_default_session' to 'tf.compat.v1.get_default_session'
    506:21: INFO: Renamed 'tf.variable_scope' to 'tf.compat.v1.variable_scope'
    534:24: INFO: Added keywords to args of function 'tf.gradients'
    542:44: INFO: tf.summary.merge requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    542:44: INFO: Renamed 'tf.summary.merge' to 'tf.compat.v1.summary.merge'
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/policy_rl2.py'
     outputting to 'btgym2/research/policy_rl2.py'
    --------------------------------------------------------------------------------
    
    44:23: INFO: Renamed 'tf.get_default_session' to 'tf.compat.v1.get_default_session'
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/misc_utils.py'
     outputting to 'btgym2/research/misc_utils.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/strategy_gen_7/__init__.py'
     outputting to 'btgym2/research/strategy_gen_7/__init__.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/strategy_gen_7/base.py'
     outputting to 'btgym2/research/strategy_gen_7/base.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/gps/strategy.py'
     outputting to 'btgym2/research/gps/strategy.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/gps/aac.py'
     outputting to 'btgym2/research/gps/aac.py'
    --------------------------------------------------------------------------------
    
    96:41: INFO: tf.train.polynomial_decay requires manual check. To use learning rate decay schedules with TensorFlow 2.0, switch to the schedules in `tf.keras.optimizers.schedules`.
    
    96:41: INFO: Renamed 'tf.train.polynomial_decay' to 'tf.compat.v1.train.polynomial_decay'
    --------------------------------------------------------------------------------
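
The converted `tf.compat.v1.train.polynomial_decay` call keeps working, but the forward-looking replacement is a Keras learning-rate schedule, as the note suggests. A minimal sketch (the decay parameters are illustrative, not btgym defaults):

    import tensorflow as tf

    lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
        initial_learning_rate=1e-4,
        decay_steps=10**6,
        end_learning_rate=1e-5,
        power=1.0,
    )
    optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule)
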
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/gps/oracle.py'
     outputting to 'btgym2/research/gps/oracle.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/gps/__init__.py'
     outputting to 'btgym2/research/gps/__init__.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/gps/loss.py'
     outputting to 'btgym2/research/gps/loss.py'
    --------------------------------------------------------------------------------
    
    17:9: INFO: `name` passed to `name_scope`. Because you may be re-entering an existing scope, it is not safe to convert automatically,  the v2 name_scope does not support re-entering scopes by name.
    
    17:9: INFO: Renamed 'tf.name_scope' to 'tf.compat.v1.name_scope'
    20:26: INFO: Renamed 'tf.nn.softmax_cross_entropy_with_logits_v2' to 'tf.nn.softmax_cross_entropy_with_logits'
    22:19: INFO: Added keywords to args of function 'tf.argmax'
    24:15: INFO: Added keywords to args of function 'tf.reduce_mean'
    27:25: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    27:25: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    47:9: INFO: `name` passed to `name_scope`. Because you may be re-entering an existing scope, it is not safe to convert automatically,  the v2 name_scope does not support re-entering scopes by name.
    
    47:9: INFO: Renamed 'tf.name_scope' to 'tf.compat.v1.name_scope'
    51:26: INFO: Renamed 'tf.nn.softmax_cross_entropy_with_logits_v2' to 'tf.nn.softmax_cross_entropy_with_logits'
    55:15: INFO: Added keywords to args of function 'tf.reduce_mean'
    58:25: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    58:25: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    78:9: INFO: `name` passed to `name_scope`. Because you may be re-entering an existing scope, it is not safe to convert automatically,  the v2 name_scope does not support re-entering scopes by name.
    
    78:9: INFO: Renamed 'tf.name_scope' to 'tf.compat.v1.name_scope'
    85:15: INFO: tf.losses.mean_squared_error requires manual check. tf.losses have been replaced with object oriented versions in TF 2.0 and after. The loss function calls have been converted to compat.v1 for backward compatibility. Please update these calls to the TF 2.0 versions.
    85:15: INFO: Renamed 'tf.losses.mean_squared_error' to 'tf.compat.v1.losses.mean_squared_error'
    91:25: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    91:25: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    --------------------------------------------------------------------------------
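
`tf.compat.v1.losses.mean_squared_error` keeps working after conversion; the TF 2.x counterpart the note points to is the object-oriented Keras loss. A minimal sketch (the tensors are illustrative):

    import tensorflow as tf

    y_true = tf.constant([[1.0], [2.0]])
    y_pred = tf.constant([[1.1], [1.9]])
    loss = tf.keras.losses.MeanSquaredError()(y_true, y_pred)   # mean over the batch: 0.01
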
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/gps/policy.py'
     outputting to 'btgym2/research/gps/policy.py'
    --------------------------------------------------------------------------------
    
    19:27: WARNING: tf.contrib.rnn.LayerNormBasicLSTMCell requires manual check. (Manual edit required) `tf.contrib.rnn.LayerNormBasicLSTMCell` has been migrated to `tfa.rnn.LayerNormLSTMCell` in TensorFlow Addons. The API spec may have changed during the migration. Please see https://github.com/tensorflow/addons for more info.
    19:27: WARNING: Using member tf.contrib.rnn.LayerNormBasicLSTMCell in deprecated module tf.contrib.rnn. (Manual edit required) tf.contrib.rnn.* has been deprecated, and widely used cells/functions will be moved to tensorflow/addons repository. Please check it there and file Github issues if necessary.
    19:27: ERROR: Using member tf.contrib.rnn.LayerNormBasicLSTMCell in deprecated module tf.contrib. tf.contrib.rnn.LayerNormBasicLSTMCell cannot be converted automatically. tf.contrib will not be distributed with TensorFlow 2.0, please consider an alternative in non-contrib TensorFlow, a community-maintained repository such as tensorflow/addons, or fork the required code.
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/model_based/strategy.py'
     outputting to 'btgym2/research/model_based/strategy.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/model_based/aac.py'
     outputting to 'btgym2/research/model_based/aac.py'
    --------------------------------------------------------------------------------
    
    47:21: INFO: `name` passed to `name_scope`. Because you may be re-entering an existing scope, it is not safe to convert automatically,  the v2 name_scope does not support re-entering scopes by name.
    
    47:21: INFO: Renamed 'tf.name_scope' to 'tf.compat.v1.name_scope'
    49:24: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    49:24: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    50:24: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    50:24: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    52:24: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    52:24: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    55:44: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    55:44: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    55:81: INFO: Renamed 'tf.global_norm' to 'tf.linalg.global_norm'
    59:24: INFO: tf.summary.merge requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    59:24: INFO: Renamed 'tf.summary.merge' to 'tf.compat.v1.summary.merge'
    64:25: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    65:20: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    66:21: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    67:24: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    68:18: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    69:22: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    70:21: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    71:18: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    76:22: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    80:34: INFO: tf.summary.merge requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    80:34: INFO: Renamed 'tf.summary.merge' to 'tf.compat.v1.summary.merge'
    81:13: INFO: tf.summary.image requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    81:13: INFO: Renamed 'tf.summary.image' to 'tf.compat.v1.summary.image'
    85:38: INFO: tf.summary.merge requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    85:38: INFO: Renamed 'tf.summary.merge' to 'tf.compat.v1.summary.merge'
    87:16: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    87:16: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    88:16: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    88:16: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    89:16: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    89:16: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    90:16: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    90:16: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    91:16: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    91:16: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    92:16: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    92:16: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    97:43: INFO: tf.summary.merge requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    97:43: INFO: Renamed 'tf.summary.merge' to 'tf.compat.v1.summary.merge'
    99:16: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    99:16: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    100:16: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    100:16: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    104:38: INFO: tf.summary.merge requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    104:38: INFO: Renamed 'tf.summary.merge' to 'tf.compat.v1.summary.merge'
    106:16: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    106:16: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    107:16: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    107:16: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    283:12: INFO: Added keywords to args of function 'tf.gradients'
    288:12: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    292:12: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    320:36: INFO: Renamed 'tf.train.AdamOptimizer' to 'tf.compat.v1.train.AdamOptimizer'
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/model_based/runner.py'
     outputting to 'btgym2/research/model_based/runner.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/model_based/__init__.py'
     outputting to 'btgym2/research/model_based/__init__.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/model_based/model/utils.py'
     outputting to 'btgym2/research/model_based/model/utils.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/model_based/model/rec.py'
     outputting to 'btgym2/research/model_based/model/rec.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/model_based/model/univariate.py'
     outputting to 'btgym2/research/model_based/model/univariate.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/model_based/model/bivariate.py'
     outputting to 'btgym2/research/model_based/model/bivariate.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/model_based/model/__init__.py'
     outputting to 'btgym2/research/model_based/model/__init__.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/model_based/model/stochastic.py'
     outputting to 'btgym2/research/model_based/model/stochastic.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/model_based/datafeed/ou.py'
     outputting to 'btgym2/research/model_based/datafeed/ou.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/model_based/datafeed/bivariate.py'
     outputting to 'btgym2/research/model_based/datafeed/bivariate.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/model_based/datafeed/__init__.py'
     outputting to 'btgym2/research/model_based/datafeed/__init__.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/model_based/datafeed/base.py'
     outputting to 'btgym2/research/model_based/datafeed/base.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/casual_conv/strategy.py'
     outputting to 'btgym2/research/casual_conv/strategy.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/casual_conv/networks.py'
     outputting to 'btgym2/research/casual_conv/networks.py'
    --------------------------------------------------------------------------------
    
    35:9: INFO: Renamed 'tf.variable_scope' to 'tf.compat.v1.variable_scope'
    56:20: INFO: Added keywords to args of function 'tf.pad'
    86:20: INFO: Changing keep_prob arg of tf.nn.dropout to rate
    
    140:42: WARNING: Using member tf.contrib.seq2seq.LuongAttention in deprecated module tf.contrib.seq2seq. (Manual edit required) tf.contrib.seq2seq.* have been migrated to `tfa.seq2seq.*` in TensorFlow Addons. Please see https://github.com/tensorflow/addons for more info.
    140:42: ERROR: Using member tf.contrib.seq2seq.LuongAttention in deprecated module tf.contrib. tf.contrib.seq2seq.LuongAttention cannot be converted automatically. tf.contrib will not be distributed with TensorFlow 2.0, please consider an alternative in non-contrib TensorFlow, a community-maintained repository such as tensorflow/addons, or fork the required code.
    174:47: INFO: Added keywords to args of function 'tf.shape'
    204:30: WARNING: Using member tf.contrib.seq2seq.LuongAttention in deprecated module tf.contrib.seq2seq. (Manual edit required) tf.contrib.seq2seq.* have been migrated to `tfa.seq2seq.*` in TensorFlow Addons. Please see https://github.com/tensorflow/addons for more info.
    204:30: ERROR: Using member tf.contrib.seq2seq.LuongAttention in deprecated module tf.contrib. tf.contrib.seq2seq.LuongAttention cannot be converted automatically. tf.contrib will not be distributed with TensorFlow 2.0, please consider an alternative in non-contrib TensorFlow, a community-maintained repository such as tensorflow/addons, or fork the required code.
    222:9: INFO: Renamed 'tf.variable_scope' to 'tf.compat.v1.variable_scope'
    243:20: INFO: Added keywords to args of function 'tf.pad'
    271:20: INFO: Changing keep_prob arg of tf.nn.dropout to rate
    
    --------------------------------------------------------------------------------
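
Two distinct things are flagged for `networks.py`. The `keep_prob` → `rate` change is applied automatically and only needs eyeballing, while the attention mechanism has to be ported to TensorFlow Addons by hand. A minimal sketch of both (tensor shapes and unit counts are illustrative):

    # pip install tensorflow-addons
    import tensorflow as tf
    import tensorflow_addons as tfa

    encoder_output = tf.random.normal([8, 20, 64])               # [batch, time, depth]

    # TF1: tf.contrib.seq2seq.LuongAttention(num_units=64, memory=encoder_output)
    attention = tfa.seq2seq.LuongAttention(units=64, memory=encoder_output)

    x = tf.random.normal([8, 64])
    y = tf.nn.dropout(x, rate=0.2)                                # formerly keep_prob=0.8
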
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/casual_conv/__init__.py'
     outputting to 'btgym2/research/casual_conv/__init__.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/casual_conv/policy.py'
     outputting to 'btgym2/research/casual_conv/policy.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/casual_conv/layers.py'
     outputting to 'btgym2/research/casual_conv/layers.py'
    --------------------------------------------------------------------------------
    
    7:9: INFO: `name` passed to `name_scope`. Because you may be re-entering an existing scope, it is not safe to convert automatically,  the v2 name_scope does not support re-entering scopes by name.
    
    7:9: INFO: Renamed 'tf.name_scope' to 'tf.compat.v1.name_scope'
    8:16: INFO: Added keywords to args of function 'tf.shape'
    10:17: INFO: Added keywords to args of function 'tf.pad'
    12:21: INFO: Added keywords to args of function 'tf.transpose'
    17:9: INFO: `name` passed to `name_scope`. Because you may be re-entering an existing scope, it is not safe to convert automatically,  the v2 name_scope does not support re-entering scopes by name.
    
    17:9: INFO: Renamed 'tf.name_scope' to 'tf.compat.v1.name_scope'
    18:16: INFO: Added keywords to args of function 'tf.shape'
    20:21: INFO: Added keywords to args of function 'tf.transpose'
    22:27: INFO: Renamed 'tf.div' to 'tf.compat.v1.div'
    35:9: INFO: `name` passed to `name_scope`. Because you may be re-entering an existing scope, it is not safe to convert automatically,  the v2 name_scope does not support re-entering scopes by name.
    
    35:9: INFO: Renamed 'tf.name_scope' to 'tf.compat.v1.name_scope'
    59:20: INFO: Added keywords to args of function 'tf.shape'
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/metalearn_2/_env_runner.py'
     outputting to 'btgym2/research/metalearn_2/_env_runner.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/metalearn_2/_mldg_batch.py'
     outputting to 'btgym2/research/metalearn_2/_mldg_batch.py'
    --------------------------------------------------------------------------------
    
    ERROR: Failed to parse.
    Traceback (most recent call last):
      File "/home/matthew/Documents/btgym/env/lib/python3.8/site-packages/tensorflow/tools/compatibility/ast_edits.py", line 940, in update_string_pasta
        t = pasta.parse(text)
      File "/home/matthew/Documents/btgym/env/lib/python3.8/site-packages/pasta/__init__.py", line 23, in parse
        t = ast_utils.parse(src)
      File "/home/matthew/Documents/btgym/env/lib/python3.8/site-packages/pasta/base/ast_utils.py", line 56, in parse
        tree = ast.parse(sanitize_source(src))
      File "/usr/lib/python3.8/ast.py", line 47, in parse
        return compile(source, filename, mode, flags,
      File "<unknown>", line 402
        test_feed_dict = self.test_aac.process_data(sess,,,,, test_data,,
                                                         ^
    SyntaxError: invalid syntax
    
    --------------------------------------------------------------------------------
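
This failure (and the identical one for `_fwrnn_aac.py` further down) is not a converter bug: the two metalearning stubs contain placeholder argument lists (`process_data(sess,,,,, ...`) that are not valid Python, so the script cannot parse them and leaves them untouched. One pragmatic option, an assumption rather than an official workflow, is to park the stubs before running `tf_upgrade_v2` and port them by hand afterwards:

    import shutil

    # Hypothetical pre-conversion step: move the unparseable stubs aside so the
    # rest of the tree converts cleanly; restore and fix them manually later.
    for stub in ('btgym/research/metalearn_2/_mldg_batch.py',
                 'btgym/research/metalearn_2/_fwrnn_aac.py'):
        shutil.move(stub, stub + '.todo')
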
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/metalearn_2/__init__.py'
     outputting to 'btgym2/research/metalearn_2/__init__.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/metalearn_2/loss.py'
     outputting to 'btgym2/research/metalearn_2/loss.py'
    --------------------------------------------------------------------------------
    
    22:9: INFO: `name` passed to `name_scope`. Because you may be re-entering an existing scope, it is not safe to convert automatically,  the v2 name_scope does not support re-entering scopes by name.
    
    22:9: INFO: Renamed 'tf.name_scope' to 'tf.compat.v1.name_scope'
    23:32: INFO: Renamed 'tf.nn.softmax_cross_entropy_with_logits_v2' to 'tf.nn.softmax_cross_entropy_with_logits'
    27:31: INFO: Renamed 'tf.nn.softmax_cross_entropy_with_logits_v2' to 'tf.nn.softmax_cross_entropy_with_logits'
    31:18: INFO: Added keywords to args of function 'tf.reduce_mean'
    34:30: INFO: tf.losses.mean_squared_error requires manual check. tf.losses have been replaced with object oriented versions in TF 2.0 and after. The loss function calls have been converted to compat.v1 for backward compatibility. Please update these calls to the TF 2.0 versions.
    34:30: INFO: Renamed 'tf.losses.mean_squared_error' to 'tf.compat.v1.losses.mean_squared_error'
    35:29: INFO: tf.losses.mean_squared_error requires manual check. tf.losses have been replaced with object oriented versions in TF 2.0 and after. The loss function calls have been converted to compat.v1 for backward compatibility. Please update these calls to the TF 2.0 versions.
    35:29: INFO: Renamed 'tf.losses.mean_squared_error' to 'tf.compat.v1.losses.mean_squared_error'
    37:18: INFO: Added keywords to args of function 'tf.reduce_mean'
    41:23: INFO: Added keywords to args of function 'tf.reduce_mean'
    42:24: INFO: Added keywords to args of function 'tf.reduce_mean'
    45:12: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    45:12: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    46:12: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    46:12: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    50:16: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    50:16: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    51:16: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    51:16: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    52:16: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    52:16: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    --------------------------------------------------------------------------------
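
The recurring `name_scope` note is mostly precautionary: the converter falls back to `tf.compat.v1.name_scope` because the v2 version cannot re-enter a scope by name. When a given scope is never re-entered elsewhere, which is the common case in these loss builders, the plain v2 form is enough; a minimal sketch (the scope name is illustrative):

    import tensorflow as tf

    with tf.name_scope('meta_loss'):
        total_loss = tf.reduce_mean(tf.constant([1.0, 2.0, 3.0]))   # ops prefixed with 'meta_loss/'
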
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/metalearn_2/_fwrnn_aac.py'
     outputting to 'btgym2/research/metalearn_2/_fwrnn_aac.py'
    --------------------------------------------------------------------------------
    
    ERROR: Failed to parse.
    Traceback (most recent call last):
      File "/home/matthew/Documents/btgym/env/lib/python3.8/site-packages/tensorflow/tools/compatibility/ast_edits.py", line 940, in update_string_pasta
        t = pasta.parse(text)
      File "/home/matthew/Documents/btgym/env/lib/python3.8/site-packages/pasta/__init__.py", line 23, in parse
        t = ast_utils.parse(src)
      File "/home/matthew/Documents/btgym/env/lib/python3.8/site-packages/pasta/base/ast_utils.py", line 56, in parse
        tree = ast.parse(sanitize_source(src))
      File "/usr/lib/python3.8/ast.py", line 47, in parse
        return compile(source, filename, mode, flags,
      File "<unknown>", line 154
        feed_dict = self.process_data(sess,,,,, train_data,,
                                           ^
    SyntaxError: invalid syntax
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/metalearn_2/_aac_t2d.py'
     outputting to 'btgym2/research/metalearn_2/_aac_t2d.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/encoder_test/networks.py'
     outputting to 'btgym2/research/encoder_test/networks.py'
    --------------------------------------------------------------------------------
    
    36:9: INFO: Renamed 'tf.variable_scope' to 'tf.compat.v1.variable_scope'
    55:20: INFO: Changing keep_prob arg of tf.nn.dropout to rate
    
    75:27: INFO: Renamed 'tf.layers.flatten' to 'tf.compat.v1.layers.flatten'
    88:9: INFO: Renamed 'tf.variable_scope' to 'tf.compat.v1.variable_scope'
    89:12: INFO: Renamed 'tf.layers.flatten' to 'tf.compat.v1.layers.flatten'
    --------------------------------------------------------------------------------
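
`tf.compat.v1.layers.flatten` still works after the rename; the idiomatic TF 2.x equivalent is the Keras layer. A minimal sketch (the input shape is illustrative):

    import tensorflow as tf

    x = tf.random.normal([8, 4, 4, 16])
    flat = tf.keras.layers.Flatten()(x)   # shape: [8, 256]
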
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/encoder_test/aac.py'
     outputting to 'btgym2/research/encoder_test/aac.py'
    --------------------------------------------------------------------------------
    
    556:13: INFO: `name` passed to `name_scope`. Because you may be re-entering an existing scope, it is not safe to convert automatically,  the v2 name_scope does not support re-entering scopes by name.
    
    556:13: INFO: Renamed 'tf.name_scope' to 'tf.compat.v1.name_scope'
    558:34: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    561:34: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    562:32: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    579:27: INFO: tf.losses.mean_squared_error requires manual check. tf.losses have been replaced with object oriented versions in TF 2.0 and after. The loss function calls have been converted to compat.v1 for backward compatibility. Please update these calls to the TF 2.0 versions.
    579:27: INFO: Renamed 'tf.losses.mean_squared_error' to 'tf.compat.v1.losses.mean_squared_error'
    585:23: INFO: tf.metrics.mean_squared_error requires manual check. tf.metrics have been replaced with object oriented versions in TF 2.0 and after. The metric function calls have been converted to compat.v1 for backward compatibility. Please update these calls to the TF 2.0 versions.
    585:23: INFO: Renamed 'tf.metrics.mean_squared_error' to 'tf.compat.v1.metrics.mean_squared_error'
    591:16: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    591:16: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    592:16: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    592:16: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    618:25: INFO: Renamed 'tf.train.AdamOptimizer' to 'tf.compat.v1.train.AdamOptimizer'
    622:12: INFO: Added keywords to args of function 'tf.gradients'
    625:33: INFO: Renamed 'tf.global_norm' to 'tf.linalg.global_norm'
    650:52: INFO: Added keywords to args of function 'tf.shape'
    668:21: INFO: `name` passed to `name_scope`. Because you may be re-entering an existing scope, it is not safe to convert automatically,  the v2 name_scope does not support re-entering scopes by name.
    
    668:21: INFO: Renamed 'tf.name_scope' to 'tf.compat.v1.name_scope'
    670:24: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    670:24: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    671:24: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    671:24: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    673:24: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    673:24: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    676:44: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    676:44: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    676:81: INFO: Renamed 'tf.global_norm' to 'tf.linalg.global_norm'
    680:24: INFO: tf.summary.merge requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    680:24: INFO: Renamed 'tf.summary.merge' to 'tf.compat.v1.summary.merge'
    685:25: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    686:20: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    687:21: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    688:24: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    689:18: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    693:38: INFO: tf.summary.image requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    693:38: INFO: Renamed 'tf.summary.image' to 'tf.compat.v1.summary.image'
    699:26: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    703:38: INFO: tf.summary.merge requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    703:38: INFO: Renamed 'tf.summary.merge' to 'tf.compat.v1.summary.merge'
    704:17: INFO: tf.summary.image requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    704:17: INFO: Renamed 'tf.summary.image' to 'tf.compat.v1.summary.image'
    708:38: INFO: tf.summary.merge requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    708:38: INFO: Renamed 'tf.summary.merge' to 'tf.compat.v1.summary.merge'
    710:16: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    710:16: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    711:16: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    711:16: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    716:43: INFO: tf.summary.merge requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    716:43: INFO: Renamed 'tf.summary.merge' to 'tf.compat.v1.summary.merge'
    718:16: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    718:16: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    722:38: INFO: tf.summary.merge requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    722:38: INFO: Renamed 'tf.summary.merge' to 'tf.compat.v1.summary.merge'
    724:16: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    724:16: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    725:16: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    725:16: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    --------------------------------------------------------------------------------
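
The mean_squared_error entries above are only wrapped into compat.v1; the converter explicitly asks for a manual follow-up to the object-oriented TF 2.x versions. A hedged sketch of what that replacement looks like (the tensors here are toy stand-ins, not the actual targets in aac.py)::

    import tensorflow as tf

    y_true = tf.constant([[1.0], [2.0]])
    y_pred = tf.constant([[1.5], [1.5]])

    # tf.compat.v1.losses.mean_squared_error  -> tf.keras.losses.MeanSquaredError
    mse_loss = tf.keras.losses.MeanSquaredError()
    loss = mse_loss(y_true, y_pred)

    # tf.compat.v1.metrics.mean_squared_error -> tf.keras.metrics.MeanSquaredError
    mse_metric = tf.keras.metrics.MeanSquaredError()
    mse_metric.update_state(y_true, y_pred)
    print(float(loss), float(mse_metric.result()))
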
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/encoder_test/runner.py'
     outputting to 'btgym2/research/encoder_test/runner.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/encoder_test/__init__.py'
     outputting to 'btgym2/research/encoder_test/__init__.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/encoder_test/policy.py'
     outputting to 'btgym2/research/encoder_test/policy.py'
    --------------------------------------------------------------------------------
    
    20:32: WARNING: tf.contrib.rnn.LayerNormBasicLSTMCell requires manual check. (Manual edit required) `tf.contrib.rnn.LayerNormBasicLSTMCell` has been migrated to `tfa.rnn.LayerNormLSTMCell` in TensorFlow Addons. The API spec may have changed during the migration. Please see https://github.com/tensorflow/addons for more info.
    20:32: WARNING: Using member tf.contrib.rnn.LayerNormBasicLSTMCell in deprecated module tf.contrib.rnn. (Manual edit required) tf.contrib.rnn.* has been deprecated, and widely used cells/functions will be moved to tensorflow/addons repository. Please check it there and file Github issues if necessary.
    20:32: ERROR: Using member tf.contrib.rnn.LayerNormBasicLSTMCell in deprecated module tf.contrib. tf.contrib.rnn.LayerNormBasicLSTMCell cannot be converted automatically. tf.contrib will not be distributed with TensorFlow 2.0, please consider an alternative in non-contrib TensorFlow, a community-maintained repository such as tensorflow/addons, or fork the required code.
    75:40: INFO: Renamed 'tf.AUTO_REUSE' to 'tf.compat.v1.AUTO_REUSE'
    91:28: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    96:33: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    99:29: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    100:30: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    114:31: INFO: Renamed 'tf.placeholder_with_default' to 'tf.compat.v1.placeholder_with_default'
    149:30: INFO: Renamed 'tf.layers.flatten' to 'tf.compat.v1.layers.flatten'
    168:35: INFO: Renamed 'tf.layers.flatten' to 'tf.compat.v1.layers.flatten'
    187:12: INFO: Renamed 'tf.layers.flatten' to 'tf.compat.v1.layers.flatten'
    194:26: INFO: Added keywords to args of function 'tf.shape'
    283:35: INFO: Added keywords to args of function 'tf.transpose'
    366:18: INFO: Renamed 'tf.layers.flatten' to 'tf.compat.v1.layers.flatten'
    382:18: INFO: Renamed 'tf.layers.flatten' to 'tf.compat.v1.layers.flatten'
    397:26: INFO: Renamed 'tf.get_collection' to 'tf.compat.v1.get_collection'
    397:44: INFO: Renamed 'tf.GraphKeys' to 'tf.compat.v1.GraphKeys'
    399:26: INFO: Renamed 'tf.get_collection' to 'tf.compat.v1.get_collection'
    399:44: INFO: Renamed 'tf.GraphKeys' to 'tf.compat.v1.GraphKeys'
    399:75: INFO: Renamed 'tf.get_variable_scope' to 'tf.compat.v1.get_variable_scope'
    400:26: INFO: Renamed 'tf.get_collection' to 'tf.compat.v1.get_collection'
    400:44: INFO: Renamed 'tf.GraphKeys' to 'tf.compat.v1.GraphKeys'
    400:75: INFO: Renamed 'tf.get_variable_scope' to 'tf.compat.v1.get_variable_scope'
    403:24: INFO: Renamed 'tf.get_collection' to 'tf.compat.v1.get_collection'
    403:42: INFO: Renamed 'tf.GraphKeys' to 'tf.compat.v1.GraphKeys'
    403:76: INFO: Renamed 'tf.get_variable_scope' to 'tf.compat.v1.get_variable_scope'
    437:19: INFO: Renamed 'tf.get_default_session' to 'tf.compat.v1.get_default_session'
    476:15: INFO: Renamed 'tf.get_default_session' to 'tf.compat.v1.get_default_session'
    --------------------------------------------------------------------------------
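
The only hard ERROR in the whole run is the tf.contrib cell in policy.py: tf.contrib is not shipped with TF 2.x, so this line has to be replaced by hand. The converter points at TensorFlow Addons; a sketch of that route follows (the unit count and the Keras-style wrapping are assumptions, and the Addons cell is not guaranteed to be a drop-in behavioural match)::

    import tensorflow as tf
    import tensorflow_addons as tfa   # extra dependency, not part of core TF

    # TF 1.x: cell = tf.contrib.rnn.LayerNormBasicLSTMCell(num_units=64)
    # TF 2.x (Addons): layer-normalized LSTM cell with a Keras-style API
    cell = tfa.rnn.LayerNormLSTMCell(64)

    # Wrap in a Keras RNN layer to run over a [batch, time, features] sequence
    layer = tf.keras.layers.RNN(cell, return_sequences=True)
    out = layer(tf.random.normal([2, 10, 8]))
    print(out.shape)  # (2, 10, 64)
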
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/strategy_gen_6/utils.py'
     outputting to 'btgym2/research/strategy_gen_6/utils.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/strategy_gen_6/__init__.py'
     outputting to 'btgym2/research/strategy_gen_6/__init__.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/strategy_gen_6/base.py'
     outputting to 'btgym2/research/strategy_gen_6/base.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/casual/aac.py'
     outputting to 'btgym2/research/casual/aac.py'
    --------------------------------------------------------------------------------
    
    282:41: INFO: tf.train.polynomial_decay requires manual check. To use learning rate decay schedules with TensorFlow 2.0, switch to the schedules in `tf.keras.optimizers.schedules`.
    
    282:41: INFO: Renamed 'tf.train.polynomial_decay' to 'tf.compat.v1.train.polynomial_decay'
    330:25: INFO: Renamed 'tf.train.AdamOptimizer' to 'tf.compat.v1.train.AdamOptimizer'
    334:12: INFO: Added keywords to args of function 'tf.gradients'
    337:33: INFO: Renamed 'tf.global_norm' to 'tf.linalg.global_norm'
    354:52: INFO: Added keywords to args of function 'tf.shape'
    359:24: INFO: Renamed 'tf.train.GradientDescentOptimizer' to 'tf.compat.v1.train.GradientDescentOptimizer'
    363:47: INFO: Renamed 'tf.assign' to 'tf.compat.v1.assign'
    368:47: INFO: Renamed 'tf.scatter_nd_update' to 'tf.compat.v1.scatter_nd_update'
    --------------------------------------------------------------------------------
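
The polynomial_decay notes recur because most trainer variants build their own learning-rate annealing. The compat.v1 call keeps working, but the TF 2.x idiom the converter recommends is a schedule object handed to a Keras optimizer. A sketch under assumed hyperparameters (the real values come from the trainer kwargs)::

    import tensorflow as tf

    # TF 1.x: lr = tf.train.polynomial_decay(learn_rate, global_step, decay_steps, end_rate)
    # TF 2.x: a schedule object the optimizer steps with its own iteration counter
    lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
        initial_learning_rate=1e-4,   # assumed; not the actual BTGym default
        decay_steps=10**6,
        end_learning_rate=1e-6,
    )
    optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule)
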
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/casual/__init__.py'
     outputting to 'btgym2/research/casual/__init__.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/mldg/aac.py'
     outputting to 'btgym2/research/mldg/aac.py'
    --------------------------------------------------------------------------------
    
    238:45: INFO: tf.summary.merge requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    238:45: INFO: Renamed 'tf.summary.merge' to 'tf.compat.v1.summary.merge'
    282:12: INFO: Added keywords to args of function 'tf.gradients'
    286:12: INFO: Added keywords to args of function 'tf.gradients'
    323:12: INFO: Added keywords to args of function 'tf.shape'
    330:24: INFO: Renamed 'tf.train.GradientDescentOptimizer' to 'tf.compat.v1.train.GradientDescentOptimizer'
    334:34: INFO: tf.train.polynomial_decay requires manual check. To use learning rate decay schedules with TensorFlow 2.0, switch to the schedules in `tf.keras.optimizers.schedules`.
    
    334:34: INFO: Renamed 'tf.train.polynomial_decay' to 'tf.compat.v1.train.polynomial_decay'
    344:25: INFO: Renamed 'tf.train.AdamOptimizer' to 'tf.compat.v1.train.AdamOptimizer'
    356:12: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    356:12: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    356:55: INFO: Renamed 'tf.global_norm' to 'tf.linalg.global_norm'
    357:12: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    357:12: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    851:12: INFO: Added keywords to args of function 'tf.gradients'
    855:12: INFO: Added keywords to args of function 'tf.gradients'
    892:12: INFO: Added keywords to args of function 'tf.shape'
    895:36: INFO: Renamed 'tf.train.GradientDescentOptimizer' to 'tf.compat.v1.train.GradientDescentOptimizer'
    896:35: INFO: Renamed 'tf.train.GradientDescentOptimizer' to 'tf.compat.v1.train.GradientDescentOptimizer'
    902:25: INFO: Renamed 'tf.train.AdamOptimizer' to 'tf.compat.v1.train.AdamOptimizer'
    --------------------------------------------------------------------------------
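
The repeated "Added keywords to args of function 'tf.gradients'" lines are cosmetic (ys/xs become keyword arguments), but tf.gradients itself only works inside graphs in TF 2.x. As a hedged illustration, not the project's actual rewrite, the eager-mode counterpart of the gradients / global-norm pattern used throughout these trainers is GradientTape (loss and variable below are toy stand-ins)::

    import tensorflow as tf

    w = tf.Variable([1.0, 2.0])

    with tf.GradientTape() as tape:
        loss = tf.reduce_sum(w ** 2)      # toy loss standing in for the AAC loss

    grads = tape.gradient(loss, [w])      # eager replacement for tf.gradients(loss, var_list)

    # tf.global_norm -> tf.linalg.global_norm; clipping uses the same helper as before
    norm = tf.linalg.global_norm(grads)
    clipped, _ = tf.clip_by_global_norm(grads, clip_norm=40.0)
    print(float(norm))
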
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/mldg/aac_1.py'
     outputting to 'btgym2/research/mldg/aac_1.py'
    --------------------------------------------------------------------------------
    
    76:36: INFO: tf.summary.merge requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    76:36: INFO: Renamed 'tf.summary.merge' to 'tf.compat.v1.summary.merge'
    101:41: INFO: tf.train.polynomial_decay requires manual check. To use learning rate decay schedules with TensorFlow 2.0, switch to the schedules in `tf.keras.optimizers.schedules`.
    
    101:41: INFO: Renamed 'tf.train.polynomial_decay' to 'tf.compat.v1.train.polynomial_decay'
    155:25: INFO: Renamed 'tf.train.AdamOptimizer' to 'tf.compat.v1.train.AdamOptimizer'
    156:30: INFO: Renamed 'tf.train.GradientDescentOptimizer' to 'tf.compat.v1.train.GradientDescentOptimizer'
    160:12: INFO: Added keywords to args of function 'tf.gradients'
    164:12: INFO: Added keywords to args of function 'tf.gradients'
    192:52: INFO: Added keywords to args of function 'tf.shape'
    207:13: INFO: `name` passed to `name_scope`. Because you may be re-entering an existing scope, it is not safe to convert automatically,  the v2 name_scope does not support re-entering scopes by name.
    
    207:13: INFO: Renamed 'tf.name_scope' to 'tf.compat.v1.name_scope'
    209:16: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    209:16: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    209:59: INFO: Renamed 'tf.global_norm' to 'tf.linalg.global_norm'
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/mldg/__init__.py'
     outputting to 'btgym2/research/mldg/__init__.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/mldg/aac_1d.py'
     outputting to 'btgym2/research/mldg/aac_1d.py'
    --------------------------------------------------------------------------------
    
    100:25: INFO: Renamed 'tf.train.AdamOptimizer' to 'tf.compat.v1.train.AdamOptimizer'
    101:30: INFO: Renamed 'tf.train.GradientDescentOptimizer' to 'tf.compat.v1.train.GradientDescentOptimizer'
    105:12: INFO: Added keywords to args of function 'tf.gradients'
    109:12: INFO: Added keywords to args of function 'tf.gradients'
    137:52: INFO: Added keywords to args of function 'tf.shape'
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/mldg/policy.py'
     outputting to 'btgym2/research/mldg/policy.py'
    --------------------------------------------------------------------------------
    
    49:19: INFO: Renamed 'tf.get_default_session' to 'tf.compat.v1.get_default_session'
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/mldg/memory.py'
     outputting to 'btgym2/research/mldg/memory.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/mldg/aac_1s.py'
     outputting to 'btgym2/research/mldg/aac_1s.py'
    --------------------------------------------------------------------------------
    
    226:13: INFO: `name` passed to `name_scope`. Because you may be re-entering an existing scope, it is not safe to convert automatically,  the v2 name_scope does not support re-entering scopes by name.
    
    226:13: INFO: Renamed 'tf.name_scope' to 'tf.compat.v1.name_scope'
    230:45: INFO: tf.train.polynomial_decay requires manual check. To use learning rate decay schedules with TensorFlow 2.0, switch to the schedules in `tf.keras.optimizers.schedules`.
    
    230:45: INFO: Renamed 'tf.train.polynomial_decay' to 'tf.compat.v1.train.polynomial_decay'
    252:34: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    255:34: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    256:32: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    276:35: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    278:35: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    279:33: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    299:31: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    326:25: INFO: Renamed 'tf.train.AdamOptimizer' to 'tf.compat.v1.train.AdamOptimizer'
    327:30: INFO: Renamed 'tf.train.GradientDescentOptimizer' to 'tf.compat.v1.train.GradientDescentOptimizer'
    331:12: INFO: Added keywords to args of function 'tf.gradients'
    335:12: INFO: Added keywords to args of function 'tf.gradients'
    357:52: INFO: Added keywords to args of function 'tf.shape'
    504:25: INFO: Renamed 'tf.train.AdamOptimizer' to 'tf.compat.v1.train.AdamOptimizer'
    508:12: INFO: Added keywords to args of function 'tf.gradients'
    512:12: INFO: Added keywords to args of function 'tf.gradients'
    519:30: INFO: Renamed 'tf.train.GradientDescentOptimizer' to 'tf.compat.v1.train.GradientDescentOptimizer'
    542:52: INFO: Added keywords to args of function 'tf.shape'
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/strategy_gen_5/__init__.py'
     outputting to 'btgym2/research/strategy_gen_5/__init__.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/research/strategy_gen_5/base.py'
     outputting to 'btgym2/research/strategy_gen_5/base.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/strategy/utils.py'
     outputting to 'btgym2/strategy/utils.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/strategy/observers.py'
     outputting to 'btgym2/strategy/observers.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/strategy/__init__.py'
     outputting to 'btgym2/strategy/__init__.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/strategy/base.py'
     outputting to 'btgym2/strategy/base.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/algorithms/rollout.py'
     outputting to 'btgym2/algorithms/rollout.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/algorithms/utils.py'
     outputting to 'btgym2/algorithms/utils.py'
    --------------------------------------------------------------------------------
    
    22:25: INFO: Renamed 'tf.contrib.rnn.LSTMStateTuple' to 'tf.nn.rnn_cell.LSTMStateTuple'
    24:12: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    25:12: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    26:15: INFO: Renamed 'tf.contrib.rnn.LSTMStateTuple' to 'tf.nn.rnn_cell.LSTMStateTuple'
    29:12: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    52:14: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/algorithms/aac.py'
     outputting to 'btgym2/algorithms/aac.py'
    --------------------------------------------------------------------------------
    
    201:16: INFO: Renamed 'tf.set_random_seed' to 'tf.compat.v1.set_random_seed'
    408:31: INFO: Renamed 'tf.train.replica_device_setter' to 'tf.compat.v1.train.replica_device_setter'
    420:21: INFO: Renamed 'tf.variable_scope' to 'tf.compat.v1.variable_scope'
    432:36: INFO: Renamed 'tf.get_collection' to 'tf.compat.v1.get_collection'
    432:54: INFO: Renamed 'tf.GraphKeys' to 'tf.compat.v1.GraphKeys'
    443:46: INFO: tf.train.polynomial_decay requires manual check. To use learning rate decay schedules with TensorFlow 2.0, switch to the schedules in `tf.keras.optimizers.schedules`.
    
    443:46: INFO: Renamed 'tf.train.polynomial_decay' to 'tf.compat.v1.train.polynomial_decay'
    510:13: INFO: `name` passed to `name_scope`. Because you may be re-entering an existing scope, it is not safe to convert automatically,  the v2 name_scope does not support re-entering scopes by name.
    
    510:13: INFO: Renamed 'tf.name_scope' to 'tf.compat.v1.name_scope'
    512:34: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    515:34: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    516:32: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    537:35: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    539:35: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    540:33: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    561:31: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    562:31: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    577:31: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    589:31: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    616:25: INFO: Renamed 'tf.train.AdamOptimizer' to 'tf.compat.v1.train.AdamOptimizer'
    627:12: INFO: Added keywords to args of function 'tf.gradients'
    630:33: INFO: Renamed 'tf.global_norm' to 'tf.linalg.global_norm'
    652:52: INFO: Added keywords to args of function 'tf.shape'
    669:21: INFO: `name` passed to `name_scope`. Because you may be re-entering an existing scope, it is not safe to convert automatically,  the v2 name_scope does not support re-entering scopes by name.
    
    669:21: INFO: Renamed 'tf.name_scope' to 'tf.compat.v1.name_scope'
    671:24: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    671:24: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    674:24: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    674:24: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    675:24: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    675:24: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    680:44: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    680:44: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    680:81: INFO: Renamed 'tf.global_norm' to 'tf.linalg.global_norm'
    684:24: INFO: tf.summary.merge requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    684:24: INFO: Renamed 'tf.summary.merge' to 'tf.compat.v1.summary.merge'
    689:25: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    690:20: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    691:21: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    692:24: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    693:18: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    697:38: INFO: tf.summary.image requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    697:38: INFO: Renamed 'tf.summary.image' to 'tf.compat.v1.summary.image'
    703:26: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    707:38: INFO: tf.summary.merge requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    707:38: INFO: Renamed 'tf.summary.merge' to 'tf.compat.v1.summary.merge'
    708:17: INFO: tf.summary.image requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    708:17: INFO: Renamed 'tf.summary.image' to 'tf.compat.v1.summary.image'
    712:38: INFO: tf.summary.merge requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    712:38: INFO: Renamed 'tf.summary.merge' to 'tf.compat.v1.summary.merge'
    714:16: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    714:16: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    715:16: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    715:16: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    716:16: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    716:16: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    717:16: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    717:16: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    722:43: INFO: tf.summary.merge requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    722:43: INFO: Renamed 'tf.summary.merge' to 'tf.compat.v1.summary.merge'
    724:16: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    724:16: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    725:16: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    725:16: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    726:16: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    726:16: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    730:38: INFO: tf.summary.merge requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    730:38: INFO: Renamed 'tf.summary.merge' to 'tf.compat.v1.summary.merge'
    732:16: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    732:16: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    733:16: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    733:16: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    801:27: WARNING: tf.get_variable requires manual check. tf.get_variable returns ResourceVariables by default in 2.0, which have well-defined semantics and are stricter about shapes. You can disable this behavior by passing use_resource=False, or by calling tf.compat.v1.disable_resource_variables().
    801:27: INFO: Renamed 'tf.get_variable' to 'tf.compat.v1.get_variable'
    805:24: INFO: tf.constant_initializer requires manual check. Initializers no longer have the dtype argument in the constructor or partition_info argument in the __call__ method.
    The calls have been converted to compat.v1 for safety (even though they may already have been correct).
    805:24: INFO: Renamed 'tf.constant_initializer' to 'tf.compat.v1.constant_initializer'
    811:8: INFO: Renamed 'tf.add_to_collection' to 'tf.compat.v1.add_to_collection'
    811:29: INFO: Renamed 'tf.GraphKeys' to 'tf.compat.v1.GraphKeys'
    814:30: WARNING: tf.get_variable requires manual check. tf.get_variable returns ResourceVariables by default in 2.0, which have well-defined semantics and are stricter about shapes. You can disable this behavior by passing use_resource=False, or by calling tf.compat.v1.disable_resource_variables().
    814:30: INFO: Renamed 'tf.get_variable' to 'tf.compat.v1.get_variable'
    818:24: INFO: tf.constant_initializer requires manual check. Initializers no longer have the dtype argument in the constructor or partition_info argument in the __call__ method.
    The calls have been converted to compat.v1 for safety (even though they may already have been correct).
    818:24: INFO: Renamed 'tf.constant_initializer' to 'tf.compat.v1.constant_initializer'
    840:13: INFO: Renamed 'tf.variable_scope' to 'tf.compat.v1.variable_scope'
    1316:44: INFO: Renamed 'tf.Summary' to 'tf.compat.v1.Summary'
    --------------------------------------------------------------------------------
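
The two get_variable warnings in algorithms/aac.py concern behaviour rather than naming: under compat.v1 in TF 2.x, get_variable returns ResourceVariables by default, which are stricter about shapes. A sketch of the opt-outs the warning itself mentions, in case the legacy RefVariable semantics are needed (the variable name, shape and the disable_eager_execution call are placeholders for illustration only)::

    import tensorflow as tf

    tf.compat.v1.disable_eager_execution()   # get_variable needs graph mode in TF 2.x

    # Option 1: keep legacy semantics for a single variable
    v = tf.compat.v1.get_variable(
        'global_step_stub',                  # placeholder name, not the real one in aac.py
        shape=[],
        dtype=tf.int32,
        initializer=tf.compat.v1.constant_initializer(0),
        use_resource=False,
    )

    # Option 2 (also named by the warning): disable resource variables process-wide,
    # before any variables are created:
    # tf.compat.v1.disable_resource_variables()
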
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/algorithms/test.py'
     outputting to 'btgym2/algorithms/test.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/algorithms/__init__.py'
     outputting to 'btgym2/algorithms/__init__.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/algorithms/memory.py'
     outputting to 'btgym2/algorithms/memory.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/algorithms/envs.py'
     outputting to 'btgym2/algorithms/envs.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/algorithms/worker.py'
     outputting to 'btgym2/algorithms/worker.py'
    --------------------------------------------------------------------------------
    
    19:0: INFO: Renamed 'tf.logging.set_verbosity' to 'tf.compat.v1.logging.set_verbosity'
    19:25: INFO: Renamed 'tf.logging.INFO' to 'tf.compat.v1.logging.INFO'
    22:16: INFO: Renamed 'tf.train.Saver' to 'tf.compat.v1.train.Saver'
    38:8: WARNING: *.save requires manual check. (This warning is only applicable if the code saves a tf.Keras model) Keras model.save now saves to the Tensorflow SavedModel format by default, instead of HDF5. To continue saving to HDF5, add the argument save_format='h5' to the save() function.
    178:8: WARNING: *.save requires manual check. (This warning is only applicable if the code saves a tf.Keras model) Keras model.save now saves to the Tensorflow SavedModel format by default, instead of HDF5. To continue saving to HDF5, add the argument save_format='h5' to the save() function.
    191:12: INFO: Renamed 'tf.reset_default_graph' to 'tf.compat.v1.reset_default_graph'
    201:25: INFO: Renamed 'tf.train.Server' to 'tf.distribute.Server'
    205:27: INFO: Renamed 'tf.ConfigProto' to 'tf.compat.v1.ConfigProto'
    212:25: INFO: Renamed 'tf.train.Server' to 'tf.distribute.Server'
    216:27: INFO: Renamed 'tf.ConfigProto' to 'tf.compat.v1.ConfigProto'
    311:48: INFO: Renamed 'tf.global_variables' to 'tf.compat.v1.global_variables'
    312:46: INFO: Renamed 'tf.global_variables' to 'tf.compat.v1.global_variables'
    312:92: INFO: Renamed 'tf.local_variables' to 'tf.compat.v1.local_variables'
    313:26: INFO: Renamed 'tf.initializers.variables' to 'tf.compat.v1.initializers.variables'
    314:32: INFO: Renamed 'tf.initializers.variables' to 'tf.compat.v1.initializers.variables'
    315:30: INFO: Renamed 'tf.global_variables_initializer' to 'tf.compat.v1.global_variables_initializer'
    335:30: INFO: Renamed 'tf.ConfigProto' to 'tf.compat.v1.ConfigProto'
    337:31: INFO: Renamed 'tf.train.SessionManager' to 'tf.compat.v1.train.SessionManager'
    340:44: INFO: Renamed 'tf.report_uninitialized_variables' to 'tf.compat.v1.report_uninitialized_variables'
    367:42: INFO: tf.summary.FileWriter requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    367:42: INFO: Renamed 'tf.summary.FileWriter' to 'tf.compat.v1.summary.FileWriter'
    --------------------------------------------------------------------------------
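
Most of the "requires manual check" traffic in this run is the TF 1.x summary API, and the worker's FileWriter is the piece of writing logic those messages say actually needs rework; the individual scalar()/image() calls can largely stay. A hedged sketch of the TF 2.x counterpart (log directory and tag are placeholders)::

    import tensorflow as tf

    # TF 1.x: writer = tf.summary.FileWriter(logdir); writer.add_summary(summary, step)
    # TF 2.x: a summary writer plus an explicit step on every call
    writer = tf.summary.create_file_writer('/tmp/btgym_tb_stub')   # placeholder logdir

    step = 0
    with writer.as_default():
        tf.summary.scalar('total_loss_stub', 0.123, step=step)     # placeholder tag/value
    writer.flush()
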
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/algorithms/math_utils.py'
     outputting to 'btgym2/algorithms/math_utils.py'
    --------------------------------------------------------------------------------
    
    39:18: INFO: Added keywords to args of function 'tf.reduce_max'
    41:9: INFO: Added keywords to args of function 'tf.reduce_sum'
    43:11: INFO: Added keywords to args of function 'tf.reduce_sum'
    43:31: INFO: Renamed 'tf.log' to 'tf.math.log'
    47:20: INFO: Added keywords to args of function 'tf.reduce_max'
    48:20: INFO: Added keywords to args of function 'tf.reduce_max'
    51:9: INFO: Added keywords to args of function 'tf.reduce_sum'
    52:9: INFO: Added keywords to args of function 'tf.reduce_sum'
    54:11: INFO: Added keywords to args of function 'tf.reduce_sum'
    54:36: INFO: Renamed 'tf.log' to 'tf.math.log'
    54:54: INFO: Renamed 'tf.log' to 'tf.math.log'
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/algorithms/launcher/__init__.py'
     outputting to 'btgym2/algorithms/launcher/__init__.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/algorithms/launcher/base.py'
     outputting to 'btgym2/algorithms/launcher/base.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/algorithms/launcher/meta.py'
     outputting to 'btgym2/algorithms/launcher/meta.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/algorithms/runner/synchro.py'
     outputting to 'btgym2/algorithms/runner/synchro.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/algorithms/runner/__init__.py'
     outputting to 'btgym2/algorithms/runner/__init__.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/algorithms/runner/base.py'
     outputting to 'btgym2/algorithms/runner/base.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/algorithms/runner/threadrunner.py'
     outputting to 'btgym2/algorithms/runner/threadrunner.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/algorithms/nn/networks.py'
     outputting to 'btgym2/algorithms/nn/networks.py'
    --------------------------------------------------------------------------------
    
    39:9: INFO: Renamed 'tf.variable_scope' to 'tf.compat.v1.variable_scope'
    58:20: INFO: Changing keep_prob arg of tf.nn.dropout to rate
    
    126:9: INFO: Renamed 'tf.variable_scope' to 'tf.compat.v1.variable_scope'
    129:22: INFO: Renamed 'tf.nn.static_rnn' to 'tf.compat.v1.nn.static_rnn'
    134:22: INFO: Renamed 'tf.nn.dynamic_rnn' to 'tf.compat.v1.nn.dynamic_rnn'
    140:24: INFO: Renamed 'tf.nn.rnn_cell.DropoutWrapper' to 'tf.compat.v1.nn.rnn_cell.DropoutWrapper'
    146:35: INFO: Added keywords to args of function 'tf.shape'
    186:9: INFO: Renamed 'tf.variable_scope' to 'tf.compat.v1.variable_scope'
    250:18: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    251:23: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    261:8: INFO: Added keywords to args of function 'tf.reduce_mean'
    263:12: INFO: Added keywords to args of function 'tf.nn.max_pool'
    263:12: INFO: Renamed keyword argument for tf.nn.max_pool from value to input
    263:12: INFO: Renamed 'tf.nn.max_pool' to 'tf.nn.max_pool2d'
    288:24: INFO: Changing tf.contrib.layers xavier initializer to a tf.compat.v1.keras.initializers.VarianceScaling and converting arguments.
    
    299:16: INFO: Added keywords to args of function 'tf.reduce_mean'
    --------------------------------------------------------------------------------
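
Two of the notes for nn/networks.py go beyond renaming: the contrib Xavier initializer is rewritten as a VarianceScaling configuration, and tf.nn.max_pool becomes max_pool2d with an input keyword. A sketch of both conversions (shapes and pooling parameters are illustrative, not the network's real ones)::

    import tensorflow as tf

    # tf.contrib.layers.xavier_initializer() -> VarianceScaling with these arguments
    xavier = tf.compat.v1.keras.initializers.VarianceScaling(
        scale=1.0, mode='fan_avg', distribution='uniform')
    w = tf.Variable(xavier(shape=[3, 3, 16, 32]))

    # tf.nn.max_pool(value=x, ...) -> tf.nn.max_pool2d(input=x, ...)
    x = tf.random.normal([1, 8, 8, 16])
    pooled = tf.nn.max_pool2d(input=x, ksize=2, strides=2, padding='SAME')
    print(pooled.shape)  # (1, 4, 4, 16)
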
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/algorithms/nn/losses.py'
     outputting to 'btgym2/algorithms/nn/losses.py'
    --------------------------------------------------------------------------------
    
    28:9: INFO: `name` passed to `name_scope`. Because you may be re-entering an existing scope, it is not safe to convert automatically,  the v2 name_scope does not support re-entering scopes by name.
    
    28:9: INFO: Renamed 'tf.name_scope' to 'tf.compat.v1.name_scope'
    29:26: INFO: Renamed 'tf.nn.softmax_cross_entropy_with_logits_v2' to 'tf.nn.softmax_cross_entropy_with_logits'
    33:18: INFO: Added keywords to args of function 'tf.reduce_mean'
    34:24: INFO: tf.losses.mean_squared_error requires manual check. tf.losses have been replaced with object oriented versions in TF 2.0 and after. The loss function calls have been converted to compat.v1 for backward compatibility. Please update these calls to the TF 2.0 versions.
    34:24: INFO: Renamed 'tf.losses.mean_squared_error' to 'tf.compat.v1.losses.mean_squared_error'
    35:18: INFO: Added keywords to args of function 'tf.reduce_mean'
    39:18: INFO: Added keywords to args of function 'tf.reduce_mean'
    40:24: INFO: Added keywords to args of function 'tf.reduce_mean'
    43:12: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    43:12: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    44:12: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    44:12: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    48:16: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    48:16: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    49:16: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    49:16: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    82:9: INFO: `name` passed to `name_scope`. Because you may be re-entering an existing scope, it is not safe to convert automatically,  the v2 name_scope does not support re-entering scopes by name.
    
    82:9: INFO: Renamed 'tf.name_scope' to 'tf.compat.v1.name_scope'
    83:24: INFO: Renamed 'tf.nn.softmax_cross_entropy_with_logits_v2' to 'tf.nn.softmax_cross_entropy_with_logits'
    88:14: INFO: Renamed 'tf.nn.softmax_cross_entropy_with_logits_v2' to 'tf.nn.softmax_cross_entropy_with_logits'
    98:25: INFO: Added keywords to args of function 'tf.reduce_mean'
    99:18: INFO: tf.losses.mean_squared_error requires manual check. tf.losses have been replaced with object oriented versions in TF 2.0 and after. The loss function calls have been converted to compat.v1 for backward compatibility. Please update these calls to the TF 2.0 versions.
    99:18: INFO: Renamed 'tf.losses.mean_squared_error' to 'tf.compat.v1.losses.mean_squared_error'
    100:18: INFO: Added keywords to args of function 'tf.reduce_mean'
    105:24: INFO: Added keywords to args of function 'tf.reduce_mean'
    106:18: INFO: Added keywords to args of function 'tf.reduce_mean'
    107:26: INFO: Added keywords to args of function 'tf.reduce_mean'
    110:12: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    110:12: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    111:12: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    111:12: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    115:16: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    115:16: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    116:16: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    116:16: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    117:16: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    117:16: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    118:16: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    118:16: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    139:9: INFO: `name` passed to `name_scope`. Because you may be re-entering an existing scope, it is not safe to convert automatically,  the v2 name_scope does not support re-entering scopes by name.
    
    139:9: INFO: Renamed 'tf.name_scope' to 'tf.compat.v1.name_scope'
    140:15: INFO: tf.losses.mean_squared_error requires manual check. tf.losses have been replaced with object oriented versions in TF 2.0 and after. The loss function calls have been converted to compat.v1 for backward compatibility. Please update these calls to the TF 2.0 versions.
    140:15: INFO: Renamed 'tf.losses.mean_squared_error' to 'tf.compat.v1.losses.mean_squared_error'
    143:25: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    143:25: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    175:9: INFO: `name` passed to `name_scope`. Because you may be re-entering an existing scope, it is not safe to convert automatically,  the v2 name_scope does not support re-entering scopes by name.
    
    175:9: INFO: Renamed 'tf.name_scope' to 'tf.compat.v1.name_scope'
    177:60: INFO: Added keywords to args of function 'tf.shape'
    179:22: INFO: Added keywords to args of function 'tf.reduce_sum'
    181:21: INFO: Added keywords to args of function 'tf.shape'
    182:15: INFO: Added keywords to args of function 'tf.reduce_sum'
    185:25: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    185:25: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    216:9: INFO: `name` passed to `name_scope`. Because you may be re-entering an existing scope, it is not safe to convert automatically,  the v2 name_scope does not support re-entering scopes by name.
    
    216:9: INFO: Renamed 'tf.name_scope' to 'tf.compat.v1.name_scope'
    217:15: INFO: Renamed 'tf.nn.softmax_cross_entropy_with_logits_v2' to 'tf.nn.softmax_cross_entropy_with_logits'
    222:25: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    222:25: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    244:9: INFO: `name` passed to `name_scope`. Because you may be re-entering an existing scope, it is not safe to convert automatically,  the v2 name_scope does not support re-entering scopes by name.
    
    244:9: INFO: Renamed 'tf.name_scope' to 'tf.compat.v1.name_scope'
    245:15: INFO: tf.losses.mean_squared_error requires manual check. tf.losses have been replaced with object oriented versions in TF 2.0 and after. The loss function calls have been converted to compat.v1 for backward compatibility. Please update these calls to the TF 2.0 versions.
    245:15: INFO: Renamed 'tf.losses.mean_squared_error' to 'tf.compat.v1.losses.mean_squared_error'
    248:25: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    248:25: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    277:9: INFO: `name` passed to `name_scope`. Because you may be re-entering an existing scope, it is not safe to convert automatically,  the v2 name_scope does not support re-entering scopes by name.
    
    277:9: INFO: Renamed 'tf.name_scope' to 'tf.compat.v1.name_scope'
    278:17: INFO: tf.losses.mean_squared_error requires manual check. tf.losses have been replaced with object oriented versions in TF 2.0 and after. The loss function calls have been converted to compat.v1 for backward compatibility. Please update these calls to the TF 2.0 versions.
    278:17: INFO: Renamed 'tf.losses.mean_squared_error' to 'tf.compat.v1.losses.mean_squared_error'
    279:19: INFO: Added keywords to args of function 'tf.reduce_mean'
    283:16: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    283:16: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    284:16: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    284:16: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/algorithms/nn/__init__.py'
     outputting to 'btgym2/algorithms/nn/__init__.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/algorithms/nn/ae.py'
     outputting to 'btgym2/algorithms/nn/ae.py'
    --------------------------------------------------------------------------------
    
    34:9: INFO: Renamed 'tf.variable_scope' to 'tf.compat.v1.variable_scope'
    86:9: INFO: Renamed 'tf.variable_scope' to 'tf.compat.v1.variable_scope'
    94:16: INFO: Renamed 'tf.image.resize_images' to 'tf.image.resize'
    159:9: INFO: Renamed 'tf.variable_scope' to 'tf.compat.v1.variable_scope'
    226:9: INFO: Renamed 'tf.variable_scope' to 'tf.compat.v1.variable_scope'
    316:9: INFO: Renamed 'tf.variable_scope' to 'tf.compat.v1.variable_scope'
    361:14: INFO: Renamed 'tf.random_normal' to 'tf.random.normal'
    362:19: INFO: Added keywords to args of function 'tf.shape'
    408:19: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    413:24: INFO: Added keywords to args of function 'tf.reduce_mean'
    416:25: INFO: Added keywords to args of function 'tf.gradients'
    419:35: INFO: Added keywords to args of function 'tf.reduce_mean'
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/algorithms/nn/layers.py'
     outputting to 'btgym2/algorithms/nn/layers.py'
    --------------------------------------------------------------------------------
    
    34:23: INFO: Added keywords to args of function 'tf.multinomial'
    34:23: INFO: Renamed 'tf.multinomial' to 'tf.random.categorical'
    43:9: INFO: Renamed 'tf.variable_scope' to 'tf.compat.v1.variable_scope'
    44:12: WARNING: tf.get_variable requires manual check. tf.get_variable returns ResourceVariables by default in 2.0, which have well-defined semantics and are stricter about shapes. You can disable this behavior by passing use_resource=False, or by calling tf.compat.v1.disable_resource_variables().
    44:12: INFO: Renamed 'tf.get_variable' to 'tf.compat.v1.get_variable'
    45:12: WARNING: tf.get_variable requires manual check. tf.get_variable returns ResourceVariables by default in 2.0, which have well-defined semantics and are stricter about shapes. You can disable this behavior by passing use_resource=False, or by calling tf.compat.v1.disable_resource_variables().
    45:12: INFO: Renamed 'tf.get_variable' to 'tf.compat.v1.get_variable'
    45:54: INFO: tf.constant_initializer requires manual check. Initializers no longer have the dtype argument in the constructor or partition_info argument in the __call__ method.
    The calls have been converted to compat.v1 for safety (even though they may already have been correct).
    45:54: INFO: Renamed 'tf.constant_initializer' to 'tf.compat.v1.constant_initializer'
    59:9: INFO: Renamed 'tf.variable_scope' to 'tf.compat.v1.variable_scope'
    64:18: INFO: tf.random_uniform_initializer requires manual check. Initializers no longer have the dtype argument in the constructor or partition_info argument in the __call__ method.
    The calls have been converted to compat.v1 for safety (even though they may already have been correct).
    64:18: INFO: Renamed 'tf.random_uniform_initializer' to 'tf.compat.v1.random_uniform_initializer'
    66:21: INFO: tf.constant_initializer requires manual check. Initializers no longer have the dtype argument in the constructor or partition_info argument in the __call__ method.
    The calls have been converted to compat.v1 for safety (even though they may already have been correct).
    66:21: INFO: Renamed 'tf.constant_initializer' to 'tf.compat.v1.constant_initializer'
    68:12: INFO: Renamed 'tf.random_normal' to 'tf.random.normal'
    69:12: INFO: Renamed 'tf.random_normal' to 'tf.random.normal'
    74:15: WARNING: tf.get_variable requires manual check. tf.get_variable returns ResourceVariables by default in 2.0, which have well-defined semantics and are stricter about shapes. You can disable this behavior by passing use_resource=False, or by calling tf.compat.v1.disable_resource_variables().
    74:15: INFO: Renamed 'tf.get_variable' to 'tf.compat.v1.get_variable'
    75:18: WARNING: tf.get_variable requires manual check. tf.get_variable returns ResourceVariables by default in 2.0, which have well-defined semantics and are stricter about shapes. You can disable this behavior by passing use_resource=False, or by calling tf.compat.v1.disable_resource_variables().
    75:18: INFO: Renamed 'tf.get_variable' to 'tf.compat.v1.get_variable'
    80:19: WARNING: tf.get_variable requires manual check. tf.get_variable returns ResourceVariables by default in 2.0, which have well-defined semantics and are stricter about shapes. You can disable this behavior by passing use_resource=False, or by calling tf.compat.v1.disable_resource_variables().
    80:19: INFO: Renamed 'tf.get_variable' to 'tf.compat.v1.get_variable'
    81:22: WARNING: tf.get_variable requires manual check. tf.get_variable returns ResourceVariables by default in 2.0, which have well-defined semantics and are stricter about shapes. You can disable this behavior by passing use_resource=False, or by calling tf.compat.v1.disable_resource_variables().
    81:22: INFO: Renamed 'tf.get_variable' to 'tf.compat.v1.get_variable'
    93:9: INFO: Renamed 'tf.variable_scope' to 'tf.compat.v1.variable_scope'
    97:12: WARNING: tf.get_variable requires manual check. tf.get_variable returns ResourceVariables by default in 2.0, which have well-defined semantics and are stricter about shapes. You can disable this behavior by passing use_resource=False, or by calling tf.compat.v1.disable_resource_variables().
    97:12: INFO: Renamed 'tf.get_variable' to 'tf.compat.v1.get_variable'
    97:66: INFO: Changing tf.contrib.layers xavier initializer to a tf.compat.v1.keras.initializers.VarianceScaling and converting arguments.
    
    99:12: WARNING: tf.get_variable requires manual check. tf.get_variable returns ResourceVariables by default in 2.0, which have well-defined semantics and are stricter about shapes. You can disable this behavior by passing use_resource=False, or by calling tf.compat.v1.disable_resource_variables().
    99:12: INFO: Renamed 'tf.get_variable' to 'tf.compat.v1.get_variable'
    99:69: INFO: tf.constant_initializer requires manual check. Initializers no longer have the dtype argument in the constructor or partition_info argument in the __call__ method.
    The calls have been converted to compat.v1 for safety (even though they may already have been correct).
    99:69: INFO: Renamed 'tf.constant_initializer' to 'tf.compat.v1.constant_initializer'
    101:15: INFO: Added keywords to args of function 'tf.nn.conv2d'
    101:15: INFO: Renamed keyword argument for tf.nn.conv2d from filter to filters
    110:9: INFO: Renamed 'tf.variable_scope' to 'tf.compat.v1.variable_scope'
    113:21: INFO: Added keywords to args of function 'tf.shape'
    129:12: WARNING: tf.get_variable requires manual check. tf.get_variable returns ResourceVariables by default in 2.0, which have well-defined semantics and are stricter about shapes. You can disable this behavior by passing use_resource=False, or by calling tf.compat.v1.disable_resource_variables().
    129:12: INFO: Renamed 'tf.get_variable' to 'tf.compat.v1.get_variable'
    129:68: INFO: Changing tf.contrib.layers xavier initializer to a tf.compat.v1.keras.initializers.VarianceScaling and converting arguments.
    
    131:12: WARNING: tf.get_variable requires manual check. tf.get_variable returns ResourceVariables by default in 2.0, which have well-defined semantics and are stricter about shapes. You can disable this behavior by passing use_resource=False, or by calling tf.compat.v1.disable_resource_variables().
    131:12: INFO: Renamed 'tf.get_variable' to 'tf.compat.v1.get_variable'
    131:75: INFO: tf.constant_initializer requires manual check. Initializers no longer have the dtype argument in the constructor or partition_info argument in the __call__ method.
    The calls have been converted to compat.v1 for safety (even though they may already have been correct).
    131:75: INFO: Renamed 'tf.constant_initializer' to 'tf.compat.v1.constant_initializer'
    144:9: INFO: Renamed 'tf.variable_scope' to 'tf.compat.v1.variable_scope'
    153:12: WARNING: tf.get_variable requires manual check. tf.get_variable returns ResourceVariables by default in 2.0, which have well-defined semantics and are stricter about shapes. You can disable this behavior by passing use_resource=False, or by calling tf.compat.v1.disable_resource_variables().
    153:12: INFO: Renamed 'tf.get_variable' to 'tf.compat.v1.get_variable'
    153:66: INFO: Changing tf.contrib.layers xavier initializer to a tf.compat.v1.keras.initializers.VarianceScaling and converting arguments.
    
    155:12: WARNING: tf.get_variable requires manual check. tf.get_variable returns ResourceVariables by default in 2.0, which have well-defined semantics and are stricter about shapes. You can disable this behavior by passing use_resource=False, or by calling tf.compat.v1.disable_resource_variables().
    155:12: INFO: Renamed 'tf.get_variable' to 'tf.compat.v1.get_variable'
    155:66: INFO: tf.constant_initializer requires manual check. Initializers no longer have the dtype argument in the constructor or partition_info argument in the __call__ method.
    The calls have been converted to compat.v1 for safety (even though they may already have been correct).
    155:66: INFO: Renamed 'tf.constant_initializer' to 'tf.compat.v1.constant_initializer'
    157:15: INFO: Added keywords to args of function 'tf.nn.conv1d'
    157:15: INFO: Renamed keyword argument for tf.nn.conv1d from value to input
    165:9: INFO: Renamed 'tf.variable_scope' to 'tf.compat.v1.variable_scope'
    173:12: WARNING: tf.get_variable requires manual check. tf.get_variable returns ResourceVariables by default in 2.0, which have well-defined semantics and are stricter about shapes. You can disable this behavior by passing use_resource=False, or by calling tf.compat.v1.disable_resource_variables().
    173:12: INFO: Renamed 'tf.get_variable' to 'tf.compat.v1.get_variable'
    174:28: INFO: Changing tf.contrib.layers xavier initializer to a tf.compat.v1.keras.initializers.VarianceScaling and converting arguments.
    
    175:12: WARNING: tf.get_variable requires manual check. tf.get_variable returns ResourceVariables by default in 2.0, which have well-defined semantics and are stricter about shapes. You can disable this behavior by passing use_resource=False, or by calling tf.compat.v1.disable_resource_variables().
    175:12: INFO: Renamed 'tf.get_variable' to 'tf.compat.v1.get_variable'
    176:40: INFO: tf.constant_initializer requires manual check. Initializers no longer have the dtype argument in the constructor or partition_info argument in the __call__ method.
    The calls have been converted to compat.v1 for safety (even though they may already have been correct).
    176:40: INFO: Renamed 'tf.constant_initializer' to 'tf.compat.v1.constant_initializer'
    177:15: INFO: Added keywords to args of function 'tf.nn.depthwise_conv2d'
    177:15: INFO: Renamed keyword argument for tf.nn.depthwise_conv2d from rate to dilations
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/algorithms/policy/stacked_lstm.py'
     outputting to 'btgym2/algorithms/policy/stacked_lstm.py'
    --------------------------------------------------------------------------------
    
    30:32: WARNING: tf.contrib.rnn.LayerNormBasicLSTMCell requires manual check. (Manual edit required) `tf.contrib.rnn.LayerNormBasicLSTMCell` has been migrated to `tfa.rnn.LayerNormLSTMCell` in TensorFlow Addons. The API spec may have changed during the migration. Please see https://github.com/tensorflow/addons for more info.
    30:32: WARNING: Using member tf.contrib.rnn.LayerNormBasicLSTMCell in deprecated module tf.contrib.rnn. (Manual edit required) tf.contrib.rnn.* has been deprecated, and widely used cells/functions will be moved to tensorflow/addons repository. Please check it there and file Github issues if necessary.
    30:32: ERROR: Using member tf.contrib.rnn.LayerNormBasicLSTMCell in deprecated module tf.contrib. tf.contrib.rnn.LayerNormBasicLSTMCell cannot be converted automatically. tf.contrib will not be distributed with TensorFlow 2.0, please consider an alternative in non-contrib TensorFlow, a community-maintained repository such as tensorflow/addons, or fork the required code.
    80:40: INFO: Renamed 'tf.AUTO_REUSE' to 'tf.compat.v1.AUTO_REUSE'
    96:28: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    101:33: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    103:29: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    108:34: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    111:29: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    112:30: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    114:30: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    115:31: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    125:31: INFO: Renamed 'tf.placeholder_with_default' to 'tf.compat.v1.placeholder_with_default'
    164:26: INFO: Renamed 'tf.layers.flatten' to 'tf.compat.v1.layers.flatten'
    183:31: INFO: Renamed 'tf.layers.flatten' to 'tf.compat.v1.layers.flatten'
    203:12: INFO: Renamed 'tf.layers.flatten' to 'tf.compat.v1.layers.flatten'
    210:26: INFO: Added keywords to args of function 'tf.shape'
    311:35: INFO: Added keywords to args of function 'tf.transpose'
    393:26: INFO: Renamed 'tf.layers.flatten' to 'tf.compat.v1.layers.flatten'
    412:31: INFO: Renamed 'tf.layers.flatten' to 'tf.compat.v1.layers.flatten'
    429:26: INFO: Added keywords to args of function 'tf.shape'
    605:29: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    617:30: INFO: Renamed 'tf.layers.flatten' to 'tf.compat.v1.layers.flatten'
    636:35: INFO: Renamed 'tf.layers.flatten' to 'tf.compat.v1.layers.flatten'
    658:26: INFO: Renamed 'tf.get_collection' to 'tf.compat.v1.get_collection'
    658:44: INFO: Renamed 'tf.GraphKeys' to 'tf.compat.v1.GraphKeys'
    660:26: INFO: Renamed 'tf.get_collection' to 'tf.compat.v1.get_collection'
    660:44: INFO: Renamed 'tf.GraphKeys' to 'tf.compat.v1.GraphKeys'
    660:75: INFO: Renamed 'tf.get_variable_scope' to 'tf.compat.v1.get_variable_scope'
    661:26: INFO: Renamed 'tf.get_collection' to 'tf.compat.v1.get_collection'
    661:44: INFO: Renamed 'tf.GraphKeys' to 'tf.compat.v1.GraphKeys'
    661:75: INFO: Renamed 'tf.get_variable_scope' to 'tf.compat.v1.get_variable_scope'
    664:24: INFO: Renamed 'tf.get_collection' to 'tf.compat.v1.get_collection'
    664:42: INFO: Renamed 'tf.GraphKeys' to 'tf.compat.v1.GraphKeys'
    664:76: INFO: Renamed 'tf.get_variable_scope' to 'tf.compat.v1.get_variable_scope'
    724:19: INFO: Renamed 'tf.get_default_session' to 'tf.compat.v1.get_default_session'
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/algorithms/policy/__init__.py'
     outputting to 'btgym2/algorithms/policy/__init__.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/algorithms/policy/base.py'
     outputting to 'btgym2/algorithms/policy/base.py'
    --------------------------------------------------------------------------------
    
    81:28: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    86:33: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    88:29: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    93:34: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    96:29: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    97:30: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    99:30: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    100:31: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    107:31: INFO: Renamed 'tf.placeholder_with_default' to 'tf.compat.v1.placeholder_with_default'
    117:26: INFO: Added keywords to args of function 'tf.shape'
    170:26: INFO: Added keywords to args of function 'tf.shape'
    242:29: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    255:26: INFO: Renamed 'tf.get_collection' to 'tf.compat.v1.get_collection'
    255:44: INFO: Renamed 'tf.GraphKeys' to 'tf.compat.v1.GraphKeys'
    257:26: INFO: Renamed 'tf.get_collection' to 'tf.compat.v1.get_collection'
    257:44: INFO: Renamed 'tf.GraphKeys' to 'tf.compat.v1.GraphKeys'
    257:75: INFO: Renamed 'tf.get_variable_scope' to 'tf.compat.v1.get_variable_scope'
    258:26: INFO: Renamed 'tf.get_collection' to 'tf.compat.v1.get_collection'
    258:44: INFO: Renamed 'tf.GraphKeys' to 'tf.compat.v1.GraphKeys'
    258:75: INFO: Renamed 'tf.get_variable_scope' to 'tf.compat.v1.get_variable_scope'
    261:24: INFO: Renamed 'tf.get_collection' to 'tf.compat.v1.get_collection'
    261:42: INFO: Renamed 'tf.GraphKeys' to 'tf.compat.v1.GraphKeys'
    261:76: INFO: Renamed 'tf.get_variable_scope' to 'tf.compat.v1.get_variable_scope'
    276:15: INFO: Renamed 'tf.get_default_session' to 'tf.compat.v1.get_default_session'
    296:19: INFO: Renamed 'tf.get_default_session' to 'tf.compat.v1.get_default_session'
    356:15: INFO: Renamed 'tf.get_default_session' to 'tf.compat.v1.get_default_session'
    383:15: INFO: Renamed 'tf.get_default_session' to 'tf.compat.v1.get_default_session'
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/algorithms/policy/meta.py'
     outputting to 'btgym2/algorithms/policy/meta.py'
    --------------------------------------------------------------------------------
    
    22:13: INFO: Renamed 'tf.variable_scope' to 'tf.compat.v1.variable_scope'
    27:33: INFO: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
    29:30: INFO: Added keywords to args of function 'tf.reduce_mean'
    46:36: INFO: Renamed 'tf.scatter_nd_add' to 'tf.compat.v1.scatter_nd_add'
    52:33: INFO: Renamed 'tf.scatter_nd_update' to 'tf.compat.v1.scatter_nd_update'
    54:28: INFO: Renamed 'tf.assign' to 'tf.compat.v1.assign'
    60:19: INFO: Renamed 'tf.layers.dense' to 'tf.compat.v1.layers.dense'
    66:34: INFO: Renamed 'tf.layers.dense' to 'tf.compat.v1.layers.dense'
    72:32: INFO: Renamed 'tf.distributions.Bernoulli' to 'tf.compat.v1.distributions.Bernoulli'
    73:22: INFO: Added keywords to args of function 'tf.reduce_max'
    77:28: INFO: Renamed 'tf.get_collection' to 'tf.compat.v1.get_collection'
    77:46: INFO: Renamed 'tf.GraphKeys' to 'tf.compat.v1.GraphKeys'
    77:80: INFO: Renamed 'tf.get_variable_scope' to 'tf.compat.v1.get_variable_scope'
    88:24: INFO: Added keywords to args of function 'tf.reduce_mean'
    91:25: INFO: Added keywords to args of function 'tf.gradients'
    94:16: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    94:16: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    95:16: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    95:16: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    97:16: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    97:16: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    98:16: INFO: tf.summary.histogram requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    98:16: INFO: Renamed 'tf.summary.histogram' to 'tf.compat.v1.summary.histogram'
    99:16: INFO: tf.summary.scalar requires manual check. The TF 1.x summary API cannot be automatically migrated to TF 2.0, so symbols have been converted to tf.compat.v1.summary.* and must be migrated manually. Typical usage will only require changes to the summary writing logic, not to individual calls like scalar(). For examples of the new summary API, see the Effective TF 2.0 migration document or check the TF 2.0 TensorBoard tutorials.
    99:16: INFO: Renamed 'tf.summary.scalar' to 'tf.compat.v1.summary.scalar'
    99:48: INFO: Renamed 'tf.global_norm' to 'tf.linalg.global_norm'
    103:15: INFO: Renamed 'tf.get_default_session' to 'tf.compat.v1.get_default_session'
    108:15: INFO: Renamed 'tf.get_default_session' to 'tf.compat.v1.get_default_session'
    125:15: INFO: Renamed 'tf.get_default_session' to 'tf.compat.v1.get_default_session'
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/rendering/__init__.py'
     outputting to 'btgym2/rendering/__init__.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/rendering/renderer.py'
     outputting to 'btgym2/rendering/renderer.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/rendering/plotter.py'
     outputting to 'btgym2/rendering/plotter.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/envs/__init__.py'
     outputting to 'btgym2/envs/__init__.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/envs/multidiscrete.py'
     outputting to 'btgym2/envs/multidiscrete.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/envs/base.py'
     outputting to 'btgym2/envs/base.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/envs/portfolio.py'
     outputting to 'btgym2/envs/portfolio.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/datafeed/test_data.py'
     outputting to 'btgym2/datafeed/test_data.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/datafeed/test_casual_data.py'
     outputting to 'btgym2/datafeed/test_casual_data.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/datafeed/multi.py'
     outputting to 'btgym2/datafeed/multi.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/datafeed/__init__.py'
     outputting to 'btgym2/datafeed/__init__.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/datafeed/base.py'
     outputting to 'btgym2/datafeed/base.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/datafeed/derivative.py'
     outputting to 'btgym2/datafeed/derivative.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/datafeed/casual.py'
     outputting to 'btgym2/datafeed/casual.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Processing file 'btgym/datafeed/stateful.py'
     outputting to 'btgym2/datafeed/stateful.py'
    --------------------------------------------------------------------------------
    
    
    --------------------------------------------------------------------------------
    
    
    
    algorithm refactoring compatibility 
    opened by mcrowson 14
  • Erroneous static_RNN policy behavior explanation.

    Erroneous static_RNN policy behavior explanation.

The time_flat arg of the base aac.py class had a misleading explanation [now corrected]:

    https://github.com/Kismuz/btgym/blob/master/btgym/algorithms/aac.py#L175

    It is strongly recommended to use time_flat=False.
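
For clarity, a minimal sketch of how that recommendation might be applied when assembling a trainer configuration. Only time_flat=False is taken from this issue; the kwargs-style dict layout mirrors the trainer configs used in the examples directory and is an assumption, not the definitive API:

# Hypothetical trainer-config fragment; only `time_flat=False` comes from this issue,
# the surrounding layout follows the kwargs-style configs in the examples directory.
trainer_config = dict(
    kwargs=dict(
        time_flat=False,  # strongly recommended, see note above
    ),
)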

    error algorithm resolved 
    opened by Kismuz 0
  • Tutorial: Integration with TF-Agents RL Framework

    Tutorial: Integration with TF-Agents RL Framework

    BTgym has two main sections: the Gym framework and the RL algorithm framework. The RL part is tailored to the unique gym requirements of BTgym, but as new research emerges in the field, there is a benefit in exploring new algorithms that aren't implemented by this project.

    The following tutorial is my own attempt at testing the integration between the Gym part of BTgym and an external RL framework. It is purely a proof of concept for this integration.

    I took the most basic tutorial from the TF-Agents project, the DQN tutorial, and tried to run it with BTgym.

    A few notes: the DQN network has a simple implementation; it expects a plain array for the action space and a plain array for the observation space.

    To resolve the action space issue, I submitted a PR to TF-Agents that was rejected, essentially for being overkill relative to the network's specification. So you will need to manually apply those changes from here

    To resolve the observation space issue, you can manually collapse the dictionary or just work with 'external', as this is purely a proof-of-concept tutorial. I changed this line in py_environment.py to get only the 'external' key from the dictionary
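
    As an illustration of that second workaround, below is a minimal, hypothetical sketch of a Gym observation wrapper that keeps only the 'external' entry of BTgym's dict observation, so that a network expecting a plain array (like the one in the DQN tutorial) can consume it. The 'external' key name comes from this discussion; the wrapper class itself is not part of BTgym or TF-Agents:

import gym
import numpy as np

class ExternalOnlyObservation(gym.ObservationWrapper):
    # Hypothetical helper: exposes only the 'external' part of a dict observation.

    def __init__(self, env):
        super(ExternalOnlyObservation, self).__init__(env)
        # Assumes the wrapped env's observation space is a gym.spaces.Dict
        # holding an 'external' Box, as discussed above.
        self.observation_space = env.observation_space.spaces['external']

    def observation(self, observation):
        # Drop every mode except 'external' and return a plain float32 array.
        return np.asarray(observation['external'], dtype=np.float32)

    Wrapping the environment, e.g. env = ExternalOnlyObservation(MyEnvironment), then yields flat Box observations instead of a dictionary.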

    algorithm information contribution 
    opened by JaCoderX 4
Owner
Andrew
Applied mathematics and machine learning research and software engineer with a focus on deep reinforcement learning and the quantitative finance domain.