An educational resource to help anyone learn deep reinforcement learning.

Overview

Status: Maintenance (expect bug fixes and minor updates)

Welcome to Spinning Up in Deep RL!

This is an educational resource produced by OpenAI that makes it easier to learn about deep reinforcement learning (deep RL).

For the unfamiliar: reinforcement learning (RL) is a machine learning approach for teaching agents how to solve tasks by trial and error. Deep RL refers to the combination of RL with deep learning.
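
To make that concrete, here is a minimal sketch of the trial-and-error loop, with a toy environment and a random agent invented purely for illustration (this is not Spinning Up code):

```python
import random

class CoinFlipEnv:
    """Toy environment: the agent tries to guess a hidden coin flip."""

    def reset(self):
        self.secret = random.randint(0, 1)  # hidden environment state
        return 0  # dummy observation

    def step(self, action):
        reward = 1.0 if action == self.secret else 0.0  # reward signal
        return self.reset(), reward, False  # obs, reward, done

# The RL interaction loop: act, observe the reward, (eventually) improve.
env = CoinFlipEnv()
obs = env.reset()
total_reward = 0.0
for _ in range(1000):
    action = random.randint(0, 1)          # a very naive "policy"
    obs, reward, done = env.step(action)   # environment responds
    total_reward += reward                 # the learning signal
print(total_reward / 1000)  # random guessing averages about 0.5
```

A real RL algorithm replaces the random action with a learned policy that is updated over time to increase the reward it collects.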

This module contains a variety of helpful resources, including:

  • a short introduction to RL terminology, kinds of algorithms, and basic theory,
  • an essay about how to grow into an RL research role,
  • a curated list of important papers organized by topic,
  • a well-documented code repo of short, standalone implementations of key algorithms,
  • and a few exercises to serve as warm-ups.

Get started at spinningup.openai.com!

Citing Spinning Up

If you reference or use Spinning Up in your research, please cite:

@article{SpinningUp2018,
    author = {Achiam, Joshua},
    title = {{Spinning Up in Deep Reinforcement Learning}},
    year = {2018}
}
Comments
  • Running spinningup in Linux Subsystem on Windows (Success)


    For people on Windows 10 who do not have a Linux machine but want to get things working.

    1. Enable WSL in Windows 10 by following this.
    2. Install the Xming X window server for Windows from here, and make sure it is running.
    3. Once WSL is working, open cmd and type "bash"; this switches the terminal to WSL. Then run the following to enable GUI support for WSL (copied from this Stack Overflow answer):
        sudo apt-get install x11-apps
        export DISPLAY=localhost:0.0
        nano ~/.bashrc  # add "export DISPLAY=localhost:0.0" at the end; Ctrl+X to save and exit
        sudo apt-get install gnome-calculator  # pulls in GTK dependencies
    
    4. Download Miniconda for Linux from here. It will be an ".sh" file.
    5. From the terminal, go to the folder you downloaded the file to and run "bash <name_of_downloaded_file>"; this installs conda.
    6. Follow the Spinning Up tutorial for the rest of the installation.
    opened by ibrahiminfinite 33
  • OMP: Error #15: Initializing libiomp5.dylib


    Followed Mujoco install, followed by full Gym installation.

    Used Miniconda, py 3.6

    On fresh OSX Mojave (right out of the box!)

    Ran the following code:

    import gym
    import tensorflow as tf
    from spinup import ddpg

    env_name = 'Pendulum-v0'
    env_fn = lambda: gym.make(env_name)

    ac_kwargs = {
        'hidden_sizes': [64, 64],
        'activation': tf.nn.relu,
    }

    logger_kwargs = {
        'output_dir': 'logs',
        'exp_name': 'pendulum_test',
    }

    addl_kwargs = {
        'seed': 42,
    }

    ddpg(env_fn, ac_kwargs=ac_kwargs, logger_kwargs=logger_kwargs, **addl_kwargs)
    

    Received following error on first build:

    INFO:tensorflow:Assets added to graph.
    INFO:tensorflow:No assets to write.
    INFO:tensorflow:SavedModel written to: logs/simple_save/saved_model.pb

    OMP: Error #15: Initializing libiomp5.dylib, but found libiomp5.dylib already initialized.

    OMP: Hint This means that multiple copies of the OpenMP runtime have been linked into the program. That is dangerous, since it can degrade performance or cause incorrect results. The best thing to do is to ensure that only a single OpenMP runtime is linked into the process, e.g. by avoiding static linking of the OpenMP runtime in any library. As an unsafe, unsupported, undocumented workaround you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to allow the program to continue to execute, but that may cause crashes or silently produce incorrect results. For more information, please see http://www.intel.com/software/products/support/.

    [MBP:05587] *** Process received signal ***
    [MBP:05587] Signal: Abort trap: 6 (6)
    [MBP:05587] Signal code: (0)
    [MBP:05587] [ 0] 0 libsystem_platform.dylib 0x00007fff6ad79b3d _sigtramp + 29
    [MBP:05587] [ 1] 0 libiomp5.dylib 0x0000000110b7b018 __kmp_openmp_version + 88572
    [MBP:05587] [ 2] 0 libsystem_c.dylib 0x00007fff6ac381c9 abort + 127
    [MBP:05587] [ 3] 0 libiomp5.dylib 0x0000000110b24df3 __kmp_abort_process + 35
    [MBP:05587] *** End of error message ***
    Abort trap: 6

    The workaround is to include:

    import os
    os.environ['KMP_DUPLICATE_LIB_OK']='True'
    

    However, is there a more permanent fix? Why might there be multiple instances of OpenMP?
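
One note on the question above: since the duplicate runtimes are pulled in at import time, the workaround only works if the variable is set before the numerical stack is imported. A sketch of that ordering; as a guess at a more permanent fix, conda users sometimes swap out the MKL-linked packages with `conda install nomkl` (check whether that applies to your setup):

```python
import os

# Set this BEFORE importing numpy / tensorflow / spinup; once both copies
# of libiomp5 are loaded, flipping the variable no longer helps.
os.environ.setdefault("KMP_DUPLICATE_LIB_OK", "True")

# ...now import the numerical stack and run the experiment as usual.
```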

    opened by ZachariahRosenberg 19
  • Installation issue


    I get a weird error when I test my installation using

    python -m spinup.run ppo --hid [32,32] --env Walker2d-v2 --exp_name installtest

    Error :

    ================================================================================
    ExperimentGrid [installtest] runs over parameters:

     env_name                                 [env]

        Walker2d-v2

     ac_kwargs:hidden_sizes                   [ac-hid]

        [32, 32]

     Variants, counting seeds:               1
     Variants, not counting seeds:           1

    ================================================================================

    Preparing to run the following experiments...

    installtest

    ================================================================================

    Launch delayed to give you a few seconds to review your experiments.

    To customize or disable this behavior, change WAIT_BEFORE_LAUNCH in
    spinup/user_config.py.

    ================================================================================

    Running experiment:

    installtest

    with kwargs:

    {
        "ac_kwargs": {
            "hidden_sizes": [32, 32]
        },
        "env_name": "Walker2d-v2",
        "seed": 0
    }

    Traceback (most recent call last):
      File "/Users/haresh/Documents/spinningup/spinup/utils/run_entrypoint.py", line 10, in <module>
        thunk = pickle.loads(zlib.decompress(base64.b64decode(args.encoded_thunk)))
      File "/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/pickle.py", line 1388, in loads
        return Unpickler(file).load()
      File "/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/pickle.py", line 864, in load
        dispatch[key](self)
      File "/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/pickle.py", line 892, in load_proto
        raise ValueError, "unsupported pickle protocol: %d" % proto
    ValueError: unsupported pickle protocol: 4

    ================================================================================

    There appears to have been an error in your experiment.

    Check the traceback above to see what actually went wrong. The traceback below, included for completeness (but probably not useful for diagnosing the error), shows the stack leading up to the experiment launch.

    ================================================================================

    Traceback (most recent call last):
      File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/Users/haresh/Documents/spinningup/spinup/run.py", line 230, in <module>
        parse_and_execute_grid_search(cmd, args)
      File "/Users/haresh/Documents/spinningup/spinup/run.py", line 162, in parse_and_execute_grid_search
        eg.run(algo, **run_kwargs)
      File "/Users/haresh/Documents/spinningup/spinup/utils/run_utils.py", line 546, in run
        data_dir=data_dir, datestamp=datestamp, **var)
      File "/Users/haresh/Documents/spinningup/spinup/utils/run_utils.py", line 171, in call_experiment
        subprocess.check_call(cmd, env=os.environ)
      File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/subprocess.py", line 291, in check_call
        raise CalledProcessError(retcode, cmd)
    subprocess.CalledProcessError: Command '['python', '/Users/haresh/Documents/spinningup/spinup/utils/run_entrypoint.py', 'eJyNUs9P1EAUnm6XpYCgEaORE4mX5eAuIfG2cnCNlyYcUOPJTIbOdDt2OlM7MygmJCYILMnEC08v/rO+7iLgjU46fe97/d7v791fZzGZPeFJpozntcxKJQa3ZAhrNJdK0dzrzEmj4RL6BQn3acVKQW0p5iaEEV2lB14qJzV1R7VALCRjw8W7VjmFfeinZHbidH38hkfTTtlrtnmHE0UuUOPRU3JBzsl5lHd4zLs/EsQWRqS1vCQumsYnUUSm3RyRnyhx8pZAfy8kQh9SzSoBKSmW/gXpYJDlafSJTMlJhD/unULotlljKlthNHxvRWOHBWuELYavTeYroZ0d2lpqLfXE1zMRP95JZYeN13QmDeojrGukWHXA2S6kf8YEipUQT44qOIMth3WGFxnDnomvtWhk63YwUgYhuztwhdclrZW3N9i1r9BrK8k1OCjWQ1LVkuamKVu3xYOwfMOF9Pd4jXSSqD0Po16cxEguv7BmYiEsal/RrPYQFmYUOMcEi/UzaHNL42PU9+EY+mFxoswBpoBKsREez+sdzKtsgztj0FhsXIINCRc588pZ2AtdLjOHpNCrDPftlvzPve4Vrg8uk/WNoIdMeWHhI/QxMvZr5QNTpWh2+PPDHQhLLKNX6aPbe4XkXGhq5bc5Jd1MN4UNXSsExyGHVWUmE9HcUPph2XhXe0e5bCC8usN0OXNsKLV1OConrLstU7uNM8b5zdcqrNyygfdpFB5dFczUxOBK1KZ9IcTtfQkiJJ89U3PyszssA3i3P/gLgUhOrw==']' returned non-zero exit status 1.
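
For anyone hitting this: the inner traceback runs under Homebrew's Python 2.7 (`/usr/local/Cellar/python@2/...`) even though the launcher is Python 3.6, because the subprocess is started with the bare command `python`. Pickle protocol 4 exists only on Python >= 3.4, hence the failure. A sketch of the mismatch, plus one possible fix (using `sys.executable` is my suggestion here, not necessarily what the repo does):

```python
import pickle
import sys

# The launcher serializes the experiment thunk with pickle protocol 4,
# which Python 2 (protocols 0-2 only) cannot read back.
payload = pickle.dumps({"exp_name": "installtest"}, protocol=4)
assert pickle.loads(payload) == {"exp_name": "installtest"}  # fine on Python 3

# Possible fix (hypothetical): spawn the child with the same interpreter
# as the parent, instead of whatever "python" happens to be on PATH.
cmd = [sys.executable, "spinup/utils/run_entrypoint.py"]
```

Making sure `python` on your PATH resolves to the Python 3 environment you installed Spinning Up into achieves the same thing.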

    opened by HareshKarnan 13
  • Plotting error after first installation


    As a heads up: some users might run into the following error when attempting to plot their agent's performance while checking their installation for the first time:

    ImportError: Python is not installed as a framework. The Mac OS X backend will not be able to function correctly if Python is not installed as a framework. See the Python documentation for more information on installing Python as a framework on Mac OS X. Please either reinstall Python as a framework, or try one of the other backends. If you are using (Ana)Conda please install python.app and replace the use of 'python' with 'pythonw'. See 'Working with Matplotlib on OSX' in the Matplotlib FAQ for more information.

    In order to overcome this, they can follow the steps outlined here

    I have no clue if this only happens in a virtual environment (I was personally using a Conda environment) and didn't know where else to put it. Hope this helps!
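
For reference, the usual workaround (assuming the standard cause: a non-framework Python selecting the macOS backend) is to pick a different matplotlib backend before `pyplot` is imported, either in code as below or by putting `backend: TkAgg` in `~/.matplotlib/matplotlibrc`:

```python
import matplotlib

# Choose a backend before pyplot is imported. "Agg" renders to files only
# and sidesteps the macOS framework check; use "TkAgg" if you need windows.
matplotlib.use("Agg")

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 4])
fig.savefig("plot.png")  # headless rendering works
```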

    opened by cyrilzakka 10
  • Eager Execution Support


    Having the codebase in a dynamic-graph-style library would be great. PyTorch does this well, but since Google has released eager execution, having the codebase in eager execution mode would be really nice, given that the codebase is a learning tool. Coming from a PyTorch background, I had a bit of trouble understanding TensorFlow's session-based control flow for the vanilla policy gradient, despite it being written really well. :')

    I would be happy to slowly convert some of the codebase to support eager execution. Do you think this would be useful ?

    opened by karanchahal 9
  • Cannot run the first example on Mac OS.


    I am installing Spinning Up on macOS, following this link. After that, to check whether Spinning Up was installed successfully, I tried running PPO in the LunarLander-v2 environment with python -m spinup.run ppo --hid "[32,32]" --env LunarLander-v2 --exp_name installtest --gamma 0.999. Then an error pops up (see the attached screenshots).

    I have no idea about this error. Can anyone give me help?

    opened by zhjmcjk 9
  • Valid Gym environments to use!


    Hi,

    I've tried Spinning Up by running many experiments with the different algorithms in different Gym environments. It works well in most environments, such as Atari, Box2D, Classic Control, and MuJoCo; however, it doesn't work with the new Gym "Robotics" environments.

    For example when I run the following command on terminal: python -m spinup.run ppo --env FetchReach-v1 --exp_name FetchReach

    It shows:

    ================================================================================
    ExperimentGrid [FetchReachExp] runs over parameters:
    
     env_name                                 [env] 
    
    	FetchReach-v1
    
     Variants, counting seeds:               1
     Variants, not counting seeds:           1
    
    ================================================================================
    
    Preparing to run the following experiments...
    
    FetchReachExp
    
    ================================================================================
    
    Launch delayed to give you a few seconds to review your experiments.
    
    To customize or disable this behavior, change WAIT_BEFORE_LAUNCH in
    spinup/user_config.py.
    
    ================================================================================
    Launching in...: ██████████████████████████████████████████████████| 00

    Running experiment:
    
    FetchReachExp
    
    with kwargs:
    
    {
        "env_name":	"FetchReach-v1",
        "seed":	0
    }
    
    
    Logging data to /home/sketcher/MachineLearning/DRL/OpenAI/spinningup/data/FetchReachExp/FetchReachExp_s0/progress.txt
    Saving config:
    
    {
        "ac_kwargs":	{},
        "actor_critic":	"mlp_actor_critic",
        "clip_ratio":	0.2,
        "env_fn":	"<function call_experiment.<locals>.thunk_plus.<locals>.<lambda> at 0x7f245efad488>",
        "epochs":	100,
        "exp_name":	"FetchReachExp",
        "gamma":	0.99,
        "lam":	0.97,
        "logger":	{
            "<spinup.utils.logx.EpochLogger object at 0x7f245efbd9b0>":	{
                "epoch_dict":	{},
                "exp_name":	"FetchReachExp",
                "first_row":	true,
                "log_current_row":	{},
                "log_headers":	[],
                "output_dir":	"/home/sketcher/MachineLearning/DRL/OpenAI/spinningup/data/FetchReachExp/FetchReachExp_s0",
                "output_file":	{
                    "<_io.TextIOWrapper name='/home/sketcher/MachineLearning/DRL/OpenAI/spinningup/data/FetchReachExp/FetchReachExp_s0/progress.txt' mode='w' encoding='UTF-8'>":	{
                        "mode":	"w"
                    }
                }
            }
        },
        "logger_kwargs":	{
            "exp_name":	"FetchReachExp",
            "output_dir":	"/home/sketcher/MachineLearning/DRL/OpenAI/spinningup/data/FetchReachExp/FetchReachExp_s0"
        },
        "max_ep_len":	1000,
        "pi_lr":	0.0003,
        "save_freq":	10,
        "seed":	0,
        "steps_per_epoch":	4000,
        "target_kl":	0.01,
        "train_pi_iters":	80,
        "train_v_iters":	80,
        "vf_lr":	0.001
    }
    Traceback (most recent call last):
      File "/home/sketcher/MachineLearning/DRL/OpenAI/spinningup/spinup/utils/run_entrypoint.py", line 11, in <module>
        thunk()
      File "/home/sketcher/MachineLearning/DRL/OpenAI/spinningup/spinup/utils/run_utils.py", line 162, in thunk_plus
        thunk(**kwargs)
      File "/home/sketcher/MachineLearning/DRL/OpenAI/spinningup/spinup/algos/ppo/ppo.py", line 183, in ppo
        x_ph, a_ph = core.placeholders_from_spaces(env.observation_space, env.action_space)
      File "/home/sketcher/MachineLearning/DRL/OpenAI/spinningup/spinup/algos/ppo/core.py", line 27, in placeholders_from_spaces
        return [placeholder_from_space(space) for space in args]
      File "/home/sketcher/MachineLearning/DRL/OpenAI/spinningup/spinup/algos/ppo/core.py", line 27, in <listcomp>
        return [placeholder_from_space(space) for space in args]
      File "/home/sketcher/MachineLearning/DRL/OpenAI/spinningup/spinup/algos/ppo/core.py", line 24, in placeholder_from_space
        raise NotImplementedError
    NotImplementedError
    
    
    
    ================================================================================
    
    
    There appears to have been an error in your experiment.
    
    Check the traceback above to see what actually went wrong. The 
    traceback below, included for completeness (but probably not useful
    for diagnosing the error), shows the stack leading up to the 
    experiment launch.
    
    ================================================================================
    
    
    
    Traceback (most recent call last):
      File "/home/sketcher/anaconda3/envs/OpAI-env/lib/python3.7/runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "/home/sketcher/anaconda3/envs/OpAI-env/lib/python3.7/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/home/sketcher/MachineLearning/DRL/OpenAI/spinningup/spinup/run.py", line 230, in <module>
        parse_and_execute_grid_search(cmd, args)
      File "/home/sketcher/MachineLearning/DRL/OpenAI/spinningup/spinup/run.py", line 162, in parse_and_execute_grid_search
        eg.run(algo, **run_kwargs)
      File "/home/sketcher/MachineLearning/DRL/OpenAI/spinningup/spinup/utils/run_utils.py", line 546, in run
        data_dir=data_dir, datestamp=datestamp, **var)
      File "/home/sketcher/MachineLearning/DRL/OpenAI/spinningup/spinup/utils/run_utils.py", line 171, in call_experiment
        subprocess.check_call(cmd, env=os.environ)
      File "/home/sketcher/anaconda3/envs/OpAI-env/lib/python3.7/subprocess.py", line 347, in check_call
        raise CalledProcessError(retcode, cmd)
    subprocess.CalledProcessError: Command '['/home/sketcher/anaconda3/envs/OpAI-env/bin/python', '/home/sketcher/MachineLearning/DRL/OpenAI/spinningup/spinup/utils/run_entrypoint.py', 'eJydUk1v1DAQdTbbbShqQRSB6JXL9sCmHLgtldBCJZSySAsHLshyY+dD69ghsUt7qIQEbbeSxaUDFy7wTxnvVv24IWLFmXmT8czzmy/dHy4k88c9TKW2vC7TqRSDaza4NZqVUtLMqtSUWsE59Avi7tCKTQVtp2IRQhjRVbpnS2lKRc1hLRBz0Uhz8d47xzCBfkLmK0zWRzs8mHWmvWaLdziR5Aw9HjwiZ+SUnAZZh4e8+y1CbGlIfOQ5McEs/BoEZNbNEPmOFifvCPTHLhJqnypWCUhIcet6kZVZ8JvMyJ8Afxwfg+v6rrGVTTeOC12JGAmYtBBN/IalRanErmCNKlUev5zsxm9roV68jtu6VB6z9dzEjzWlbOPGKjq3BvUhMh1KVu1xtg3JrxGB4rYL88MKTmDTIHP3LGV4i+KgFk1ZCWUGQ6kRarcHprBqSmtp2yvs8izX89wyBQaKdRdVdUkz3Uz9scVdt3KVC8nP0RrpRIFf94JeGIWYPP3MmrwFt6xsRdPagluap8ApNlisn4DvLQmP0J/AEfTdci71HraATrHhHiz4DhYsfXGjNQaLjXNoXcRFxqw0LYxdl5epwSTXqzS3fm5u5l7eFQ4UjldrG0H3mbSihY/Qx8p4X6s7XouJQCWe7D9FsVohOErqVqXOc9HQCza+zxVtTW0N5WUD7sN/acmZYfFVyVcH9U2PtluoKiq2GK3r7WEUrE0Cd/+CJJO5xjGotX/BhX4/B+GiT5bJRfrjfxgAsGYy+AvUx03/']' returned non-zero exit status 1.
    

    Does Spinning Up support these environments (Robotics), or is it a problem on my side?
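
The NotImplementedError is consistent with the Robotics tasks using goal-based Dict observation spaces, which placeholder_from_space in spinup/algos/ppo/core.py does not handle (it only covers Box and Discrete). A self-contained sketch of that dispatch, using hypothetical stand-ins for Gym's space classes:

```python
# Hypothetical stand-ins for Gym's space classes, for illustration only.
class Box:
    def __init__(self, shape):
        self.shape = shape

class Discrete:
    def __init__(self, n):
        self.n = n

class DictSpace:
    """Robotics envs (FetchReach-v1, ...) observe a dict of sub-spaces."""
    def __init__(self, spaces):
        self.spaces = spaces

def placeholder_shape(space):
    # Mirrors the Box/Discrete-only dispatch implied by the traceback above.
    if isinstance(space, Box):
        return space.shape
    if isinstance(space, Discrete):
        return ()  # a discrete action is a scalar index
    raise NotImplementedError  # Dict spaces fall through, as in the log

fetch_obs = DictSpace({"observation": Box((10,)),
                       "achieved_goal": Box((3,)),
                       "desired_goal": Box((3,))})
# placeholder_shape(fetch_obs) raises NotImplementedError
```

In the Gym versions of that era, flattening the dict observation into a single Box (e.g. with gym.wrappers.FlattenDictWrapper, if your Gym release provides it) was a common workaround; the exact wrapper name varies across releases.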

    opened by RamiSketcher 7
  • failed to check the installation with mujoco envs


    I'm using Ubuntu 14.04 + Python 3.6.7 (Anaconda3-5.3.0-Linux-x86_64).

    I have successfully installed mujoco, mujoco_py, gym and spinningup.

    mujoco                    150
    mujoco-py                 1.50.1.68
    gym                       0.10.9 
    spinup                    0.1
    

    And the following code could work:

    env = gym.make('Humanoid-v2') 
    env.reset() 
    env.render() 
    

    But when I checked the spinningup installation by:

    python -m spinup.run ppo --hid [32,32] --env Walker2d-v2 --exp_name installtest
    

    I got the following errors:

    Import error. Trying to rebuild mujoco_py.
    running build_ext
    building 'mujoco_py.cymj' extension
    gcc -pthread -B /home/pxlong/anaconda3/envs/spinningup/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -I/home/pxlong/anaconda3/envs/spinningup/lib/python3.6/site-packages/mujoco_py-1.50.1.68-py3.6.egg/mujoco_py -I/home/pxlong/.mujoco/mjpro150/include -I/home/pxlong/anaconda3/envs/spinningup/lib/python3.6/site-packages/numpy/core/include -I/home/pxlong/anaconda3/envs/spinningup/include/python3.6m -c /home/pxlong/anaconda3/envs/spinningup/lib/python3.6/site-packages/mujoco_py-1.50.1.68-py3.6.egg/mujoco_py/cymj.c -o /home/pxlong/anaconda3/envs/spinningup/lib/python3.6/site-packages/mujoco_py-1.50.1.68-py3.6.egg/mujoco_py/generated/_pyxbld_1.50.1.68_36_linuxcpuextensionbuilder/temp.linux-x86_64-3.6/home/pxlong/anaconda3/envs/spinningup/lib/python3.6/site-packages/mujoco_py-1.50.1.68-py3.6.egg/mujoco_py/cymj.o -fopenmp -w
    gcc -pthread -B /home/pxlong/anaconda3/envs/spinningup/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -I/home/pxlong/anaconda3/envs/spinningup/lib/python3.6/site-packages/mujoco_py-1.50.1.68-py3.6.egg/mujoco_py -I/home/pxlong/.mujoco/mjpro150/include -I/home/pxlong/anaconda3/envs/spinningup/lib/python3.6/site-packages/numpy/core/include -I/home/pxlong/anaconda3/envs/spinningup/include/python3.6m -c /home/pxlong/anaconda3/envs/spinningup/lib/python3.6/site-packages/mujoco_py-1.50.1.68-py3.6.egg/mujoco_py/gl/osmesashim.c -o /home/pxlong/anaconda3/envs/spinningup/lib/python3.6/site-packages/mujoco_py-1.50.1.68-py3.6.egg/mujoco_py/generated/_pyxbld_1.50.1.68_36_linuxcpuextensionbuilder/temp.linux-x86_64-3.6/home/pxlong/anaconda3/envs/spinningup/lib/python3.6/site-packages/mujoco_py-1.50.1.68-py3.6.egg/mujoco_py/gl/osmesashim.o -fopenmp -w
    gcc -pthread -shared -B /home/pxlong/anaconda3/envs/spinningup/compiler_compat -L/home/pxlong/anaconda3/envs/spinningup/lib -Wl,-rpath=/home/pxlong/anaconda3/envs/spinningup/lib -Wl,--no-as-needed -Wl,--sysroot=/ /home/pxlong/anaconda3/envs/spinningup/lib/python3.6/site-packages/mujoco_py-1.50.1.68-py3.6.egg/mujoco_py/generated/_pyxbld_1.50.1.68_36_linuxcpuextensionbuilder/temp.linux-x86_64-3.6/home/pxlong/anaconda3/envs/spinningup/lib/python3.6/site-packages/mujoco_py-1.50.1.68-py3.6.egg/mujoco_py/cymj.o /home/pxlong/anaconda3/envs/spinningup/lib/python3.6/site-packages/mujoco_py-1.50.1.68-py3.6.egg/mujoco_py/generated/_pyxbld_1.50.1.68_36_linuxcpuextensionbuilder/temp.linux-x86_64-3.6/home/pxlong/anaconda3/envs/spinningup/lib/python3.6/site-packages/mujoco_py-1.50.1.68-py3.6.egg/mujoco_py/gl/osmesashim.o -L/home/pxlong/.mujoco/mjpro150/bin -Wl,-R/home/pxlong/.mujoco/mjpro150/bin -lmujoco150 -lglewosmesa -lOSMesa -lGL -o /home/pxlong/anaconda3/envs/spinningup/lib/python3.6/site-packages/mujoco_py-1.50.1.68-py3.6.egg/mujoco_py/generated/_pyxbld_1.50.1.68_36_linuxcpuextensionbuilder/lib.linux-x86_64-3.6/mujoco_py/cymj.cpython-36m-x86_64-linux-gnu.so -fopenmp
    Traceback (most recent call last):
      File "/home/pxlong/Dropbox/git/gym/gym/envs/mujoco/mujoco_env.py", line 11, in <module>
        import mujoco_py
      File "/home/pxlong/anaconda3/envs/spinningup/lib/python3.6/site-packages/mujoco_py-1.50.1.68-py3.6.egg/mujoco_py/__init__.py", line 3, in <module>
        from mujoco_py.builder import cymj, ignore_mujoco_warnings, functions, MujocoException
      File "/home/pxlong/anaconda3/envs/spinningup/lib/python3.6/site-packages/mujoco_py-1.50.1.68-py3.6.egg/mujoco_py/builder.py", line 503, in <module>
        cymj = load_cython_ext(mjpro_path)
      File "/home/pxlong/anaconda3/envs/spinningup/lib/python3.6/site-packages/mujoco_py-1.50.1.68-py3.6.egg/mujoco_py/builder.py", line 106, in load_cython_ext
        mod = load_dynamic_ext('cymj', cext_so_path)
      File "/home/pxlong/anaconda3/envs/spinningup/lib/python3.6/site-packages/mujoco_py-1.50.1.68-py3.6.egg/mujoco_py/builder.py", line 124, in load_dynamic_ext
        return loader.load_module()
    ImportError: dlopen: cannot load any more object with static TLS

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/home/pxlong/Dropbox/git/spinningup/spinup/utils/run_entrypoint.py", line 11, in <module>
        thunk()
      File "/home/pxlong/Dropbox/git/spinningup/spinup/utils/run_utils.py", line 162, in thunk_plus
        thunk(**kwargs)
      File "/home/pxlong/Dropbox/git/spinningup/spinup/algos/ppo/ppo.py", line 175, in ppo
        env = env_fn()
      File "/home/pxlong/Dropbox/git/spinningup/spinup/utils/run_utils.py", line 155, in <lambda>
        kwargs['env_fn'] = lambda : gym.make(env_name)
      File "/home/pxlong/Dropbox/git/gym/gym/envs/registration.py", line 167, in make
        return registry.make(id)
      File "/home/pxlong/Dropbox/git/gym/gym/envs/registration.py", line 119, in make
        env = spec.make()
      File "/home/pxlong/Dropbox/git/gym/gym/envs/registration.py", line 85, in make
        cls = load(self._entry_point)
      File "/home/pxlong/Dropbox/git/gym/gym/envs/registration.py", line 14, in load
        result = entry_point.load(False)
      File "/home/pxlong/anaconda3/envs/spinningup/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2343, in load
        return self.resolve()
      File "/home/pxlong/anaconda3/envs/spinningup/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2349, in resolve
        module = __import__(self.module_name, fromlist=['__name__'], level=0)
      File "/home/pxlong/Dropbox/git/gym/gym/envs/mujoco/__init__.py", line 1, in <module>
        from gym.envs.mujoco.mujoco_env import MujocoEnv
      File "/home/pxlong/Dropbox/git/gym/gym/envs/mujoco/mujoco_env.py", line 13, in <module>
        raise error.DependencyNotInstalled("{}. (HINT: you need to install mujoco_py, and also perform the setup instructions here: https://github.com/openai/mujoco-py/.)".format(e))
    gym.error.DependencyNotInstalled: dlopen: cannot load any more object with static TLS. (HINT: you need to install mujoco_py, and also perform the setup instructions here: https://github.com/openai/mujoco-py/.)

    ================================================================================

    There appears to have been an error in your experiment.

    Check the traceback above to see what actually went wrong. The traceback below, included for completeness (but probably not useful for diagnosing the error), shows the stack leading up to the experiment launch.

    ================================================================================

    Traceback (most recent call last):
      File "/home/pxlong/anaconda3/envs/spinningup/lib/python3.6/runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "/home/pxlong/anaconda3/envs/spinningup/lib/python3.6/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/home/pxlong/Dropbox/git/spinningup/spinup/run.py", line 230, in <module>
        parse_and_execute_grid_search(cmd, args)
      File "/home/pxlong/Dropbox/git/spinningup/spinup/run.py", line 162, in parse_and_execute_grid_search
        eg.run(algo, **run_kwargs)
      File "/home/pxlong/Dropbox/git/spinningup/spinup/utils/run_utils.py", line 546, in run
        data_dir=data_dir, datestamp=datestamp, **var)
      File "/home/pxlong/Dropbox/git/spinningup/spinup/utils/run_utils.py", line 171, in call_experiment
        subprocess.check_call(cmd, env=os.environ)
      File "/home/pxlong/anaconda3/envs/spinningup/lib/python3.6/subprocess.py", line 291, in check_call
        raise CalledProcessError(retcode, cmd)
    subprocess.CalledProcessError: Command '['python', '/home/pxlong/Dropbox/git/spinningup/spinup/utils/run_entrypoint.py', 'eJyNUs9P1EAUnm6XpYCgEaORE4mX5eCWkHgDLmu8NOGAJp7MpNuZtmOnM7Uzg2BCYoLAkky88PTiP+vrLgLe6KTT977X7/3+3v81Dcns8S8yqR1rRFZJProng1+juZCS5k5lVmgF1zAsiX9M67Ti1FR8bkIY0VU6cUJaoag9aThiPhprxj90yjkcwjAhsxMm6+N3LJj2qkG7zXqMSHKFGgtekitySS6DvMdC1v8RIbawSzrLHrHBNDwLAjLt54j8RImR9wSGBz7i6oiqtOaQkHLpX5AeBlmeBp/JlJwF+OPBOfh+lzWmsuX34lLXPG6OpVZF/LbVzUQfx4WwsWmEUkIVrpmJ+HFWSBO3TtGZNGpOsLBdmdYTlu5D8mdMoFzxYXFSwwVsWSzUv8lSbBo/bngraq7saFdqhMz+yJZOVbSRztxht778oCslV2ChXPdR3Qia67bq3JZP/PIdF5Lf4zXSi4LuPA0GYRQiufqatoUBv6hcTbPGgV+YUeASEyzXL6DLLQlPUT+EUxj6xULqCaaASrnhn8/rHc2r7IJbrdFYblyD8RHjeeqkNXDg+0xkFkl+UGvmujX5n3vbK9wf3CbjWk6PUum4gU8wxMjYr5WPqax4u8NeH+2AX0ozepM+un1UCsa4okZ8m1OSzWSTG983nDOcsl+Vuih4e0cZ+mXtbOMsZaIFP37IeFlq01goY3FWlht7X6ZmG4eMA5wvll+5ZwPnksA/u6k4lYXGnWh094IPu/sauI++uFTOya8esA3g7OHoL/bmT1k=']' returned non-zero exit status 1.
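
The underlying `dlopen: cannot load any more object with static TLS` is a glibc limitation on older distros (Ubuntu 14.04 ships a glibc with very few static-TLS slots), not a Spinning Up bug. A commonly reported workaround, hedged because the exact library and path differ per machine, is to preload the offending shared library so it claims a slot at process startup:

```shell
# Reported workaround (assumption: GLEW/OSMesa are the libraries exhausting
# the static-TLS slots; adjust the path for your system):
export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libGLEW.so
# then re-run, e.g.:
#   python -m spinup.run ppo --hid "[32,32]" --env Walker2d-v2 --exp_name installtest
```

Importing mujoco_py before other heavy libraries, or moving to a newer distro/glibc, are other routes people report.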

    opened by pxlong 7
  • Fails to run SAC from terminal.


    Hi everyone! I work in a PyCharm environment on Ubuntu 18.04. When I try to execute a variation on this command:

    python -m spinup.run sac --hid "[32,32]" --env LunarLander-v2 --exp_name installtest --gamma 0.999

    By "variation" I mean I've tried different combinations of --env, with/without --gamma, and with/without --hid, and it always produces the same error call stack.

    Here is the error call stack:

    Saving config:
    
    {
        "ac_kwargs":        {
            "hidden_sizes": [
                32,
                32
            ]
        },
        "actor_critic":     "mlp_actor_critic",
        "alpha":    0.2,
        "batch_size":       100,
        "env_fn":   "<function call_experiment.<locals>.thunk_plus.<locals>.<lambda> at 0x7fc51adfe7b8>",
        "epochs":   100,
        "exp_name": "installtest",
        "gamma":    0.999,
        "logger":   {
            "<spinup.utils.logx.EpochLogger object at 0x7fc51ae0d8d0>":     {
                "epoch_dict":       {},
                "exp_name": "installtest",
                "first_row":        true,
                "log_current_row":  {},
                "log_headers":      [],
                "output_dir":       "/home/asheryartsev/Dev/spinningup/data/installtest/installtest_s0",
                "output_file":      {
                    "<_io.TextIOWrapper name='/home/asheryartsev/Dev/spinningup/data/installtest/installtest_s0/progress.txt' mode='w' encoding='UTF-8'>":  {
                        "mode":     "w"
                    }
                }
            }
        },
        "logger_kwargs":    {
            "exp_name":     "installtest",
            "output_dir":   "/home/asheryartsev/Dev/spinningup/data/installtest/installtest_s0"
        },
        "lr":       0.001,
        "max_ep_len":       1000,
        "polyak":   0.995,
        "replay_size":      1000000,
        "save_freq":        1,
        "seed":     0,
        "start_steps":      10000,
        "steps_per_epoch":  5000
    }
    WARNING:tensorflow:From /home/asheryartsev/Dev/spinningup/spinup/algos/sac/sac.py:135: The name tf.set_random_seed is deprecated. Please use tf.compat.v1.set_random_seed instead.
    
    Traceback (most recent call last):
      File "/home/asheryartsev/Dev/spinningup/spinup/utils/run_entrypoint.py", line 11, in <module>
        thunk()
      File "/home/asheryartsev/Dev/spinningup/spinup/utils/run_utils.py", line 162, in thunk_plus
        thunk(**kwargs)
      File "/home/asheryartsev/Dev/spinningup/spinup/algos/sac/sac.py", line 140, in sac
        act_dim = env.action_space.shape[0]
    IndexError: tuple index out of range
    
    
    
    ================================================================================
    
    
    There appears to have been an error in your experiment.
    
    Check the traceback above to see what actually went wrong. The 
    traceback below, included for completeness (but probably not useful
    for diagnosing the error), shows the stack leading up to the 
    experiment launch.
    
    ================================================================================
    
    
    
    Traceback (most recent call last):
      File "/home/asheryartsev/anaconda3/envs/spinup/lib/python3.6/runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "/home/asheryartsev/anaconda3/envs/spinup/lib/python3.6/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/home/asheryartsev/Dev/spinningup/spinup/run.py", line 230, in <module>
        parse_and_execute_grid_search(cmd, args)
      File "/home/asheryartsev/Dev/spinningup/spinup/run.py", line 162, in parse_and_execute_grid_search
        eg.run(algo, **run_kwargs)
      File "/home/asheryartsev/Dev/spinningup/spinup/utils/run_utils.py", line 546, in run
        data_dir=data_dir, datestamp=datestamp, **var)
      File "/home/asheryartsev/Dev/spinningup/spinup/utils/run_utils.py", line 171, in call_experiment
        subprocess.check_call(cmd, env=os.environ)
      File "/home/asheryartsev/anaconda3/envs/spinup/lib/python3.6/subprocess.py", line 311, in check_call
        raise CalledProcessError(retcode, cmd)
    subprocess.CalledProcessError: Command '['/home/asheryartsev/anaconda3/envs/spinup/bin/python', '/home/asheryartsev/Dev/spinningup/spinup/utils/run_entrypoint.py', 'eJyNUs9v0zAUdpquyzYYiKEhdprEpROinZA4MYZQERwy7TC4IsuL3cTUcUJsF4o0CWls7SSLyx4Iif+JO38FR668tGMbJ4jl5L3v5Xu/PzQ/fw3J9PG3ElU4XspkoETnkgx+mfalUrTvdGJloeEU2hnx12jOBoKagZiZEEb0Kt13UlmpqR2VAjEf9QouXtbKEexBOybTE8YrvWc8mDQGrWqTNzhR5AQ1HtwmJ2RMxkG/wUPe/BghNrdFassjYoNJeBgEZNLsI/IJJU5eEGjv+kjoIdUsFxCTbOFPkAYGWZwEr8mEHAb44+4R+GadNaay4R92syIXXWYyUY1YZY0Ydp/iNaXUWurUlVMRP85KZbqV03QqdcoRlrWlWL7P2TbE33oEsiUfpqMcjmHDYpn+QcKwZeJdKSqZC207W6pAyGx3bOb0gJbKmQvs3Jdv1YX0NVjIVnyUl5L2i2pQu82u+8ULLsRfesukEQX1uRG0wihE8uAtq1IDfl67nCalAz83pcAYE8xWjqHOLQ4PUN+DA2j7+VQV+5gCKtmaX53V25lVWQe3RYHGbO0UjI+46DOnrIFd3+QysUjyrbzgrl6Sv7nnvcLtwV0yrhJ0yJQTBl5Bu448l7I8Z/D88c9f33+sju9iA5d3nGbVDtNcVPeG98EvsISelYShrmSSc6Gpke9nbuL1eF0Y3zRCcJy7v6qKNBXVBaXtFwtnS2cplxX4J/8eOGeWdaU2FqdnhbGXZWo2cew40tmi+aVLNnAuDvzNsx4wlRamY1hSX/Bh/T4F4aM3jqkZ+c5/7Ac4u9f5DVTuVy0=']' returned non-zero exit status 1.
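    For anyone hitting the same traceback: the IndexError at env.action_space.shape[0] typically means the chosen environment has a discrete action space, whose shape is the empty tuple, while Spinning Up's SAC only supports environments with continuous (Box) action spaces. A minimal sketch of why the indexing fails (plain tuples standing in for gym spaces; the example names are hypothetical):

```python
# Box-style action spaces report a shape like (act_dim,), so shape[0] works;
# Discrete-style spaces report shape (), so shape[0] raises IndexError.
box_shape = (6,)       # stand-in for e.g. a continuous-control Box action space
discrete_shape = ()    # stand-in for e.g. CartPole-v0's Discrete(2) action space

assert box_shape[0] == 6
try:
    act_dim = discrete_shape[0]
except IndexError as e:
    print(e)  # tuple index out of range
```

    Switching to a continuous-action environment (or to an algorithm that supports discrete actions, like Spinning Up's VPG/PPO) avoids the error.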
    
    opened by AsherYartsevTech 6
  • Are there some problems in the ppo code?

    Are there some problems in the ppo code?

    Line 259 in ppo.py: buf.store(o, a, r, v_t, logp_t). At the first step of every epoch we set r=0, so this call always records 0 as the first entry of rew_buf. In fact, we should record the reward observed after executing pi in the first state.

    The operation above causes problems in the function finish_path: at line 62 and line 66, the stale first record in rew_buf shifts the computation and the last reward at the end of every epoch is left out. To solve this, I changed the code slightly as below:

    line 262: o_, r, d, _ = env.step(a[0])
    line 263: buf.store(o, a, r, v_t, logp_t)
    line 264: ep_ret += r
    line 265: ep_len += 1
    line 266: o = o_

    After changing these lines, I also changed:

    line 58: rews = self.rew_buf[path_slice]
    line 62: deltas = rews + self.gamma * vals[1:] - vals[:-1]
    line 66: self.ret_buf[path_slice] = core.discount_cumsum(rews, self.gamma)

    I don't know whether I am wrong. I am sincerely hoping that you can answer my question. Thank you!
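    The proposed ordering can be illustrated with a self-contained sketch (DummyEnv and the stand-in policy outputs below are hypothetical, not the actual ppo.py code): the observation, action, value, and log-prob are captured at decision time, but the stored reward is the one returned by env.step for that action.

```python
# Sketch: pair (o, a, v_t, logp_t) from decision time with the reward
# observed AFTER env.step, so rew_buf[t] is the reward earned by a_t.
class DummyEnv:
    def __init__(self):
        self.t = 0
    def reset(self):
        self.t = 0
        return 0.0
    def step(self, a):
        self.t += 1
        return float(self.t), 1.0, self.t >= 3, {}  # obs, reward, done, info

buf = []                              # stand-in for PPOBuffer.store
env = DummyEnv()
o, d = env.reset(), False
while not d:
    a, v_t, logp_t = 0, 0.0, 0.0     # stand-in policy outputs
    o_, r, d, _ = env.step(a)
    buf.append((o, a, r, v_t, logp_t))  # reward recorded after the step
    o = o_

print([rec[2] for rec in buf])  # -> [1.0, 1.0, 1.0]
```

    With this ordering no stored reward is ever the placeholder 0, which is the discrepancy the comment above is describing.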

    opened by chaobiubiu 6
  • multi-cpu problem in experiment grid

    multi-cpu problem in experiment grid

    @jachiam Hi! It's me again! Two days ago I posted an issue about using multiple CPUs with ExperimentGrid, where the log seemed wrong only when run in PyCharm but looked fine in a terminal. I did some more experiments today and found that, although the log looks correct when run from a terminal, there may still be a bug: a multi-CPU run always takes more time than a single-CPU run when solving the same number of jobs.

    Here is the code I tried; when you have time, could you run it and see if you can reproduce the bug? My machine has 4 cores, and using 2 cores to run 4 jobs should be faster than using 1 core to run 4 jobs. The result I get (run from a terminal) is: single-CPU about 15 seconds, multi-CPU about 20 seconds. With the same number of jobs, multi-CPU takes more time than single-CPU, which looks like a bug to me. I believe what might be happening is that each variant gets run num_cpu times.

    from spinup.utils.run_utils import ExperimentGrid
    from spinup import ppo
    
    import time
    if __name__ == '__main__':
        total_num_runs = 4
        import argparse
    
        ## try multi-cpu case
        parser = argparse.ArgumentParser()
        parser.add_argument('--cpu', type=int, default=2)
        parser.add_argument('--num_runs', type=int, default=total_num_runs)
        args = parser.parse_args()
    
        ## reset timing
        starttime = time.time()
        eg = ExperimentGrid(name='ppo-bench')
        eg.add('env_name', 'CartPole-v0', '', True)
        eg.add('seed', [10 * i for i in range(args.num_runs)])
        eg.add('epochs', 1)
        eg.add('steps_per_epoch', 200)
        eg.run(ppo, num_cpu=args.cpu)
        multi_cpu_time = time.time() - starttime
    
        ## try single-cpu case
        args.cpu = 1
    
        ## reset timing
        starttime = time.time()
        eg = ExperimentGrid(name='ppo-bench')
        eg.add('env_name', 'CartPole-v0', '', True)
        eg.add('seed', [10 * i for i in range(args.num_runs)])
        eg.add('epochs', 1)
        eg.add('steps_per_epoch', 200)
        eg.run(ppo, num_cpu=args.cpu)
        single_cpu_time = time.time() - starttime
    
        print('single-cpu', single_cpu_time, 'multi-cpu', multi_cpu_time)
    

    Many thanks!

    opened by watchernyu 6
  • Do I need tensorflow 1.8?

    Do I need tensorflow 1.8?

    I am trying to install as specified in the docs:

    git clone https://github.com/openai/spinningup.git
    cd spinningup
    pip install -e .

    and I am getting this error:

    ERROR: Could not find a version that satisfies the requirement tensorflow<2.0,>=1.8.0 (from spinup) (from versions: 2.2.0, 2.2.1, 2.2.2, 2.2.3, 2.3.0, 2.3.1, 2.3.2, 2.3.3, 2.3.4, 2.4.0, 2.4.1, 2.4.2, 2.4.3, 2.4.4, 2.5.0, 2.5.1, 2.5.2, 2.5.3, 2.6.0rc0, 2.6.0rc1, 2.6.0rc2, 2.6.0, 2.6.1, 2.6.2, 2.6.3, 2.6.4, 2.6.5, 2.7.0rc0, 2.7.0rc1, 2.7.0, 2.7.1, 2.7.2, 2.7.3, 2.7.4, 2.8.0rc0, 2.8.0rc1, 2.8.0, 2.8.1, 2.8.2, 2.8.3, 2.8.4, 2.9.0rc0, 2.9.0rc1, 2.9.0rc2, 2.9.0, 2.9.1, 2.9.2, 2.9.3, 2.10.0rc0, 2.10.0rc1, 2.10.0rc2, 2.10.0rc3, 2.10.0, 2.10.1, 2.11.0rc0, 2.11.0rc1, 2.11.0rc2, 2.11.0)
    ERROR: No matching distribution found for tensorflow<2.0,>=1.8.0

    I tried installing TensorFlow, but the only version I could install is 2.2... Does that mean I need to find a TensorFlow 1.8 installation?

    Thank you
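    A hedged note on why this happens: TensorFlow 1.x wheels were only ever published for Python up to 3.7, so on newer Pythons pip only sees 2.x versions. One workaround (the env name spinup is arbitrary; TF 1.15 is the last 1.x release):

```shell
# TF 1.x wheels exist only for Python <= 3.7, so pin the interpreter first.
conda create -n spinup python=3.6
conda activate spinup
pip install tensorflow==1.15
cd spinningup
pip install -e .
```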

    opened by LinirZamir 0
  • Running spinningup in WSL2 with WSLg on Windows (Success)

    Running spinningup in WSL2 with WSLg on Windows (Success)

    Don't know if this has been detailed before, but I have gotten spinningup to work on WSL2 on Windows without needing to install an X server for X11 forwarding. If you didn't know, WSL2 has WSLg, which lets you run GUI apps directly in WSL.

    First, make sure you have WSL2 enabled on your system; you can check with wsl -l -v in cmd/PowerShell. To upgrade to WSL2, issue wsl --set-version <distro name> 2, replacing <distro name> with the name of the distribution you want to upgrade.

    Next, install the vGPU driver for Ubuntu by following this link here. At the time of this post, the CUDA version was 11.8; this will be important later on.

    Install miniconda by downloading the .sh from here. Personally, I use pyenv, which you can check out here. If you run into dependency issues creating your virtualenv, you might need to find a version with python=3.6 support.

    Follow spinningup for the rest of the installation.

    Here is where we may need to fix things. If you encounter no errors, the issues may have since been patched. However, when trying to watch the trained policy you might encounter this error:

    ImportError:
        Error occurred while running `from pyglet.gl import *`
        HINT: make sure you have OpenGL install. On Ubuntu, you can run 'apt-get install python-opengl'.
        If you're running on a server, you may need a virtual frame buffer; something like this should work:
        'xvfb-run -s "-screen 0 1400x900x24" python <your_script.py>'
    

    To fix this issue, install an OpenGL library. The python-opengl package may not be available in your Ubuntu WSL distro, but you can install freeglut instead: sudo apt install freeglut3-dev

    You might also encounter another error:

    libGL error: MESA-LOADER: failed to open swrast: /usr/lib/dri/swrast_dri.so: cannot open shared object file: No such file or directory (search paths /usr/lib/x86_64-linux-gnu/dri:\$${ORIGIN}/dri:/usr/lib/dri, suffix _dri)
    libGL error: failed to load driver: swrast
    

    Most likely this is because the anaconda/miniconda install ships too old a libstdc++ (even if your system has a newer one). We can check by exporting export LIBGL_DEBUG=verbose and running the command again. If we see:

    libGL: MESA-LOADER: failed to open /usr/lib/x86_64-linux-gnu/dri/swrast_dri.so: /home/dev/.pyenv/versions/miniconda3-latest/envs/su/bin/../lib/libstdc++.so.6: version `GLIBCXX_3.4.30' not found (required by /lib/x86_64-linux-gnu/libLLVM-13.so.1)
    

    Then we can fix it by symlinking the system's libstdc++ into the conda environment:

    ln -s -f /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.30 $CONDA_PREFIX/lib/libstdc++.so.6
    

    If everything works you should see the training video when running the command again.

    opened by datvidwang 0
  • Documentation: Question about expected return

    Documentation: Question about expected return

    Hi,

    going through the RL Intro I stumbled on something that is not yet clear to me. On https://spinningup.openai.com/en/latest/spinningup/rl_intro.html#the-rl-problem the expected return is written as

    J(\pi) = \int_{\tau} P(\tau|\pi) R(\tau) = \underE{\tau\sim \pi}{R(\tau)}

    where I would have expected

    J(\pi) = \int_{\tau} P(\tau|\pi) R(\tau) = \underE{\tau\sim P}{R(\tau)}

    My understanding is that \tau is a random variable distributed according to P, and only the actions are drawn from \pi, as is later clearly differentiated on https://spinningup.openai.com/en/latest/spinningup/rl_intro.html#bellman-equations

    Please, can someone explain me why it says \tau\sim \pi?

    Thank you very much in advance! a
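    A possible resolution, hedged as one reading of the docs rather than an official answer: \tau\sim\pi is shorthand for sampling \tau from the trajectory distribution that \pi induces, since P(\tau|\pi) already bundles the fixed environment dynamics with the policy:

```latex
% Trajectory distribution induced by running policy \pi in the environment:
% the dynamics P(s_{t+1}|s_t, a_t) and start-state distribution \rho_0 are
% fixed, so conditioning on \pi pins down the whole distribution over \tau.
P(\tau|\pi) = \rho_0(s_0) \prod_{t=0}^{T-1} P(s_{t+1} \mid s_t, a_t)\, \pi(a_t \mid s_t)
% Hence "\tau \sim \pi" abbreviates "\tau \sim P(\cdot \mid \pi)", giving
J(\pi) = \int_{\tau} P(\tau|\pi) R(\tau) = \mathbb{E}_{\tau \sim \pi}[R(\tau)]
```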

    opened by aflgit 1
  • rllib project got moved to another directory; references need to be changed

    rllib project got moved to another directory; references need to be changed

    opened by XRFXLP 0
  • Fixes wrong reference extra PG proof 1

    Fixes wrong reference extra PG proof 1

    Actually, the changed link references https://spinningup.openai.com/en/latest/spinningup/rl_intro3.html#id16 and not https://spinningup.openai.com/en/latest/spinningup/extra_pg_proof1.html

    opened by datajanko 0
Releases(0.2)
  • 0.2(Jan 30, 2020)

    Major changes:

    Spinning Up now has PyTorch implementations of VPG, PPO, DDPG, TD3, and SAC, in addition to the old Tensorflow versions.

    Examples and exercises have been updated to include PyTorch versions as well.

    The reward shift bug in the Tensorflow versions of VPG, TRPO, and PPO has been fixed.

    DDPG, TD3, and SAC Tensorflow versions were modified so that they now update every N steps instead of at the end of each trajectory. The PyTorch versions of these algorithms have the same behavior.

    Spinning Up's SAC has been updated to reflect the more-modern version of SAC that does not use a V-function. The tutorial page on SAC has been updated to describe the new version of SAC.

    The benchmark page has been updated with reruns for all algorithms on all environments, using the latest version of the code.
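    For readers comparing the two SAC versions: the updated algorithm bootstraps its Q-targets directly from the entropy-regularized policy instead of a separate V network. The target below is the standard modern SAC formulation (stated here for orientation, not copied from the repo):

```latex
% Modern SAC target without a V-function: bootstrap from the twin target
% Q-networks minus the policy's entropy term, with a' resampled from \pi.
y(r, s', d) = r + \gamma (1 - d) \Big( \min_{j=1,2} Q_{\phi_{\text{targ},j}}(s', \tilde{a}')
              - \alpha \log \pi_\theta(\tilde{a}' \mid s') \Big),
\qquad \tilde{a}' \sim \pi_\theta(\cdot \mid s')
```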
