A customisable 3D platform for agent-based AI research

Overview

DeepMind Lab

DeepMind Lab is a 3D learning environment based on id Software's Quake III Arena via ioquake3 and other open source software.

DeepMind Lab provides a suite of challenging 3D navigation and puzzle-solving tasks for learning agents. Its primary purpose is to act as a testbed for research in artificial intelligence, especially deep reinforcement learning.

About

Disclaimer: This is not an official Google product.

If you use DeepMind Lab in your research, we suggest you cite the DeepMind Lab paper.

You can reach us at [email protected].

Getting started on Linux

$ git clone https://github.com/deepmind/lab
$ cd lab

For a live example of a random agent, run

lab$ bazel run :python_random_agent --define graphics=sdl -- \
               --length=10000 --width=640 --height=480

Here is some more detailed build documentation, including how to install dependencies if you don't have them.

To enable compiler optimizations, pass the flag --compilation_mode=opt, or -c opt for short, to each bazel build, bazel test and bazel run command. The flag is omitted from the examples here for brevity, but it should be used for real training and evaluation where performance matters.

Play as a human

To test the game using human input controls, run

lab$ bazel run :game -- --level_script=tests/empty_room_test --level_setting=logToStdErr=true
# or:
lab$ bazel run :game -- -l tests/empty_room_test -s logToStdErr=true

Leave the logToStdErr setting off to disable most log output.

The values of the observations that the environment exposes can be printed at every step by adding the flag --observation OBSERVATION_NAME once for each observation of interest.

lab$ bazel run :game -- --level_script=lt_chasm --observation VEL.TRANS --observation VEL.ROT

Train an agent

DeepMind Lab ships with an example random agent in python/random_agent.py which can be used as a starting point for implementing a learning agent. To let this agent interact with DeepMind Lab for training, run

lab$ bazel run :python_random_agent

The Python API is used for agent-environment interactions. We also provide bindings to DeepMind's "dm_env" general API for reinforcement learning, as well as a way to build a self-contained PIP package; see the separate documentation for details.
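As a sketch of what that agent-environment interaction looks like, the snippet below drives one episode with uniformly random actions. This is an illustration, not the shipped random agent: the `random_action` and `run_random_episode` helpers are ours, and the level name, observation name and config values are examples (exact observation names can differ between versions).

```python
import random


def random_action(action_spec, rng=random):
    """One integer per action dimension, drawn uniformly from its [min, max] range.

    Each entry of action_spec is a dict with 'name', 'min' and 'max' keys.
    """
    return [rng.randint(a['min'], a['max']) for a in action_spec]


def run_random_episode(level='lt_chasm', num_frames_per_step=4):
    """Run one episode with random actions and return the episode reward."""
    # deepmind_lab is only importable inside a `bazel run` or via the PIP package.
    import deepmind_lab
    import numpy as np

    # Config values are passed as strings; observation names are illustrative.
    env = deepmind_lab.Lab(level, ['RGB_INTERLEAVED'],
                           config={'width': '96', 'height': '72'})
    env.reset(seed=1)

    total_reward = 0.0
    while env.is_running():
        action = np.array(random_action(env.action_spec()), dtype=np.intc)
        # num_steps repeats the chosen action for several frames (action repeat).
        total_reward += env.step(action, num_steps=num_frames_per_step)
    return total_reward
```

Actions are a flat integer vector, one value per entry of `env.action_spec()`, and `env.step` returns the reward accumulated over the repeated frames.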

DeepMind Lab ships with different levels implementing different tasks. These tasks can be configured using Lua scripts, as described in the Lua API.


Upstream sources

DeepMind Lab is built from the ioquake3 game engine, and it uses the tools q3map2 and bspc for map creation. Bug fixes and cleanups that originate with those projects are best fixed upstream and then merged into DeepMind Lab.

  • bspc is taken from github.com/TTimo/bspc, revision d9a372db3fb6163bc49ead41c76c801a3d14cf80. There are virtually no local modifications, although we integrate this code with the main ioq3 code and do not use their copy in the deps directory. We expect this code to be stable.

  • q3map2 is taken from github.com/TTimo/GtkRadiant, revision d3d00345c542c8d7cc74e2e8a577bdf76f79c701. A few minor local modifications add synchronization. We also expect this code to be stable.

  • ioquake3 is taken from github.com/ioquake/ioq3, revision 29db64070aa0bae49953bddbedbed5e317af48ba. The code contains extensive modifications and additions. We aim to merge upstream changes occasionally.

We are very grateful to the maintainers of these repositories for all their hard work on maintaining high-quality code bases.

External dependencies, prerequisites and porting notes

DeepMind Lab currently ships as source code only. It depends on a few external software libraries, which we ship in several different ways:

  • The zlib, glib, libxml2, jpeg and png libraries are referenced as external Bazel sources, and Bazel BUILD files are provided. The dependent code itself should be fairly portable, but the BUILD rules we ship are specific to Linux on x86. To build on a different platform you will most likely have to edit those BUILD files.

  • Message digest algorithms are included in this package (in //third_party/md), taken from the reference implementations of their respective RFCs. A "generic reinforcement learning API" is included in //third_party/rl_api, which has also been created by the DeepMind Lab authors. This code is portable.

  • EGL headers are included in this package (in //third_party/GL/{EGL,KHR}), taken from the Khronos OpenGL/OpenGL ES XML API Registry at www.khronos.org/registry/EGL. The headers have been modified slightly to remove the dependency of EGL on X.

  • Several additional libraries are required but are not shipped in any form; they must be present on your system:

    • SDL 2
    • gettext (required by glib)
    • OpenGL: A hardware driver and library are needed for hardware-accelerated human play. The headless library that machine learning agents will want to use can use either hardware-accelerated rendering via EGL or GLX or software rendering via OSMesa, depending on the --define headless=... build setting.
    • Python 2.7 (other versions might work, too) with NumPy and PIL (a few tests require NumPy version 1.8 or later), or Python 3 (at least 3.5) with NumPy and Pillow.

The build rules use a few compiler settings that are specific to GCC. If your compiler does not recognize some of the flags (typically specific warning suppressions), you may have to edit them. Any resulting warnings should be noisy but harmless.

Issues
  • Unable to build: Python include path missing

    Hi

    I'm on Ubuntu 16.04 and following the guidelines from https://github.com/deepmind/lab/blob/master/docs/build.md. I'm unable to run any of the examples.

    Here are the major library versions:

    • Bazel version: 0.4.3
    • Lua: 5.1.5
    • Python: 2.7
    • OpenGL version: 4.5.0
    • GCC: 5.4.0

    Error while trying to run a random agent

    lab$ bazel run :game -- --level_script tests/demo_map --verbose_failures
    WARNING: Output base '/home/ml/hsatij/.cache/bazel/_bazel_hsatij/2a466b12af1ad7b6dcfd974c8d06585b' is on NFS. This may lead to surprising failures and undetermined behavior.
    INFO: Found 1 target...
    ERROR: /home/ml/hsatij/.cache/bazel/_bazel_hsatij/2a466b12af1ad7b6dcfd974c8d06585b/external/jpeg_archive/BUILD:74:1: Executing genrule @jpeg_archive//:configure failed: linux-sandbox failed: error executing command /home/ml/hsatij/.cache/bazel/_bazel_hsatij/2a466b12af1ad7b6dcfd974c8d06585b/execroot/lab/_bin/linux-sandbox ... (remaining 5 argument(s) skipped).
    src/main/tools/linux-sandbox-pid1.cc:398: "remount(NULL, /home/ml/hsatij/.cache/bazel/_bazel_hsatij/2a466b12af1ad7b6dcfd974c8d06585b/bazel-sandbox/905e0eef-0788-42e6-8852-7b444149d38c-13/tmp/home/ndg/projects, NULL, 2101281, NULL)": No such file or directory
    Target //:game failed to build
    Use --verbose_failures to see the command lines of failed build steps.
    INFO: Elapsed time: 2.569s, Critical Path: 1.20s
    ERROR: Build failed. Not running target.
    

    On trying to build the Python interface to DeepMind Lab with OpenGL

    lab$ bazel build :deepmind_lab.so --define headless=glx --verbose_failures
    WARNING: Output base '/home/ml/hsatij/.cache/bazel/_bazel_hsatij/2a466b12af1ad7b6dcfd974c8d06585b' is on NFS. This may lead to surprising failures and undetermined behavior.
    INFO: Found 1 target...
    ERROR: /home/ml/hsatij/code/libs/lab/BUILD:972:1: C++ compilation of rule '//:dmlablib' failed: linux-sandbox failed: error executing command 
      (cd /home/ml/hsatij/.cache/bazel/_bazel_hsatij/2a466b12af1ad7b6dcfd974c8d06585b/bazel-sandbox/ff30fcd2-9759-4ca4-8fa3-b82956431988-1/execroot/lab && \
      exec env - \
      /home/ml/hsatij/.cache/bazel/_bazel_hsatij/2a466b12af1ad7b6dcfd974c8d06585b/execroot/lab/_bin/linux-sandbox @/home/ml/hsatij/.cache/bazel/_bazel_hsatij/2a466b12af1ad7b6dcfd974c8d06585b/bazel-sandbox/ff30fcd2-9759-4ca4-8fa3-b82956431988-1/linux-sandbox.params -- /usr/bin/gcc -U_FORTIFY_SOURCE '-D_FORTIFY_SOURCE=1' -fstack-protector -Wall -Wl,-z,-relro,-z,now -B/usr/bin -B/usr/bin -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-canonical-system-headers -fno-omit-frame-pointer '-std=c++0x' -MD -MF bazel-out/local-fastbuild/bin/_objs/dmlablib/public/dmlab_so_loader.pic.d '-frandom-seed=bazel-out/local-fastbuild/bin/_objs/dmlablib/public/dmlab_so_loader.pic.o' -fPIC -iquote . -iquote bazel-out/local-fastbuild/genfiles -iquote external/bazel_tools -iquote bazel-out/local-fastbuild/genfiles/external/bazel_tools -isystem external/bazel_tools/tools/cpp/gcc3 '-DDMLAB_SO_LOCATION="libdmlab.so"' -fno-canonical-system-headers -Wno-builtin-macro-redefined '-D__DATE__="redacted"' '-D__TIMESTAMP__="redacted"' '-D__TIME__="redacted"' -c public/dmlab_so_loader.cc -o bazel-out/local-fastbuild/bin/_objs/dmlablib/public/dmlab_so_loader.pic.o).
    src/main/tools/linux-sandbox-pid1.cc:398: "remount(NULL, /home/ml/hsatij/.cache/bazel/_bazel_hsatij/2a466b12af1ad7b6dcfd974c8d06585b/bazel-sandbox/ff30fcd2-9759-4ca4-8fa3-b82956431988-1/tmp/home/ndg/projects, NULL, 2101281, NULL)": No such file or directory
    Target //:deepmind_lab.so failed to build
    INFO: Elapsed time: 2.294s, Critical Path: 1.11s
    

    I'm new to Bazel and unfortunately the error logs are too cryptic for me. Any help resolving this will be appreciated!

    solved 
    opened by hercky 30
  • How to use lab as a python module

    Hi,

    I built lab on Ubuntu 14.04. The python module tests all pass. I'm a little confused about importing deepmind_lab and running experiments with it in python. I apologize if this is due to my lack of general python/bazel knowledge. My understanding is that the process of experimentation with deepmind_lab is:

    1. create a python file experiment.py. import deepmind_lab and use it in the experiment.
    2. add a py_binary entry in the BUILD file for bazel named "experiment"
    3. perform bazel run :experiment

    Is this correct? And, is there any way to instead run experiments directly with python experiment.py?

    fixed-in-next-version 
    opened by johnholl 22
  • Lab + python multiprocessing

    I have implemented A3C with multiprocessing (+ PyTorch) as opposed to using threads; however, bazel run seems to break silently, without any visible trace. This is what I do:

    $ bazel run :struct_runner --define headless=false
    [...]
    $ echo $?  # this is the error code of the previous process
    8
    

    struct_runner.py initialises a lab environment, then creates a bunch of processes in which more envs are created. In particular, the silent crash happens when I create a process p and do p.start() - it also appears to be non-deterministic with respect to the number of processes I manage to spawn before bazel kills them and quits.

    I know that @miyosuda has implemented A3C using threads here, however multiprocessing is supported very well by PyTorch and it would be a shame to have to deal with thread management.

    opened by edran 15
  • Random agent compiling error

    Hey, on Ubuntu 18.04 with Bazel 0.20.0 and TensorFlow 1.2 (GPU), I am trying to run the random agent and I am getting the following error. Can you please help me solve it? Thanks

    bazel run :python_random_agent --define graphics=sdl -- \
        --length=10000 --width=640 --height=480
    

    Starting local Bazel server and connecting to it...
    INFO: Invocation ID: 69a115a9-45b9-4642-87c8-8fc2170651ac
    INFO: SHA256 (https://github.com/abseil/abseil-cpp/archive/master.zip) = d3bb4e5578f06ddf3e0e21def6aabf3b4ae81d68b909f06cfcb727734584cab3
    INFO: Analysed target //:python_random_agent (53 packages loaded, 3598 targets configured).
    INFO: Found 1 target...
    ERROR: /home/neuwong/lab/BUILD:790:1: C++ compilation of rule '//:game_lib_sdl' failed (Exit 1) gcc failed: error executing command /usr/bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -MD -MF bazel-out/k8-fastbuild/bin/_objs/game_lib_sdl/sdl_input.pic.d ... (remaining 103 argument(s) skipped)
    Use --sandbox_debug to see verbose messages from the sandbox
    engine/code/sdl/sdl_input.c:26:11: fatal error: SDL.h: No such file or directory
     #include <SDL.h>
              ^~~~~~~
    compilation terminated.
    Target //:python_random_agent failed to build
    Use --verbose_failures to see the command lines of failed build steps.
    INFO: Elapsed time: 33.266s, Critical Path: 7.03s
    INFO: 69 processes: 69 linux-sandbox.
    FAILED: Build did NOT complete successfully

    opened by behroozmrd47 13
  • Pip and Python3

    Informed by #32, #92 and #52, I tried to configure Bazel in a conda environment with Python 3. bazel build //:deepmind_lab.so appears to run successfully; however, bazel run //:python_random_agent is not able to import deepmind_lab:

    INFO: Running command line: bazel-out/k8-py3-fastbuild/bin/python_random_agent
    ImportError: numpy.core.multiarray failed to import
    Traceback (most recent call last):
      File "/home/florin/.cache/bazel/_bazel_florin/329274937dbc60c7c6c49b959689f873/execroot/org_deepmind_lab/bazel-out/k8-py3-fastbuild/bin/python_random_agent.runfiles/org_deepmind_lab/python/random_agent.py", line 26, in <module>
        import deepmind_lab
    ImportError: numpy.core.multiarray failed to import
    ERROR: Non-zero return code '1' from command: Process exited with status 1
    

    You can find the changes below. @tkoeppe Does this seem right? Also, are there any log files I can provide you with in order to help with the debugging?

    diff --git a/BUILD b/BUILD
    index 9e274c5..3972e07 100644
    --- a/BUILD
    +++ b/BUILD
    @@ -984,6 +984,7 @@ py_binary(
         data = [":deepmind_lab.so"],
         main = "python/random_agent.py",
         visibility = ["//python/tests:__subpackages__"],
    +    default_python_version = "PY3"
     )
     
     LOAD_TEST_SCRIPTS = [
    diff --git a/WORKSPACE b/WORKSPACE
    index fa8da47..967c712 100644
    --- a/WORKSPACE
    +++ b/WORKSPACE
    @@ -91,5 +91,5 @@ new_local_repository(
     new_local_repository(
         name = "python_system",
         build_file = "python.BUILD",
    -    path = "/usr",
    +    path = "/home/florin/Tools/miniconda3/envs/torch30",
     )
    diff --git a/python.BUILD b/python.BUILD
    index f0b3f9a..b6a6fa8 100644
    --- a/python.BUILD
    +++ b/python.BUILD
    @@ -5,7 +5,11 @@
     
     cc_library(
         name = "python",
    -    hdrs = glob(["include/python2.7/*.h"]),
    -    includes = ["include/python2.7"],
    +    hdrs = glob(["include/python3.6m/*.h",
    +                 "lib/python3.6/site-packages/numpy/core/include/**/*.h"
    +    ]),
    +    includes = ["include/python3.6m",
    +                "lib/python3.6/site-packages/numpy/core/include"
    +    ],
         visibility = ["//visibility:public"],
     )
    
    opened by floringogianu 13
  • Pip package not working with python 3

    Hi,

    I am having some problems when trying to build/install DeepMind Lab as a pip package with Python 3. Currently I am able to generate the package .whl file (DeepMind_Lab-1.0-py3-none-any.whl) and install it in a conda environment, but I get the following error when importing the deepmind_lab module:

    deepmind_lab.so: undefined symbol: PyCObject_Type
    

    Any suggestions about how to fix this?

    Thanks,

    opened by camigord 13
  • I meet the error when bazel build -c opt //:deepmind_lab.so

    bazel build -c opt //:deepmind_lab.so
    ERROR: /home/ubuntu/zz/projects/lab/WORKSPACE:146:1: //external:tree_archive: no such attribute 'repo_mapping' in 'http_archive' rule
    ERROR: Skipping '//:deepmind_lab.so': error loading package 'external': Package 'external' contains errors
    WARNING: Target pattern parsing failed.
    ERROR: error loading package 'external': Package 'external' contains errors

    opened by XqWang3 12
  • Sorry i don't know what i'm doing wrong

    [email protected]:~$ cd lab
    [email protected]:~/lab$ sudo bazel run :game -- --level_script=tests/empty_room_test --level_setting=logToStdErr=true --sandbox_debug
    Extracting Bazel installation...
    Starting local Bazel server and connecting to it...
    INFO: SHA256 (https://github.com/abseil/abseil-cpp/archive/master.zip) = f63fa171a79bfd38f995b899989e74144255fcea57ad74079792385841db64dd
    DEBUG: Rule 'com_google_absl' indicated that a canonical reproducible form can be obtained by modifying arguments sha256 = "f63fa171a79bfd38f995b899989e74144255fcea57ad74079792385841db64dd"
    INFO: Analysed target //:game (51 packages loaded, 3679 targets configured).
    INFO: Found 1 target...
    ERROR: /home/user/.cache/bazel/_bazel_root/851aceb1df3dfff730804dc58954a97b/external/glib_archive/BUILD.bazel:1:1: Executing genrule @glib_archive//:gen_configure failed (Exit 1) bash failed: error executing command /bin/bash -c ... (remaining 1 argument(s) skipped)
    Use --sandbox_debug to see verbose messages from the sandbox
    configure: error: *** You must have either have gettext support in your C library, or use the
    *** GNU gettext library. (http://www.gnu.org/software/gettext/gettext.html)
    Target //:game failed to build
    Use --verbose_failures to see the command lines of failed build steps.
    INFO: Elapsed time: 254.198s, Critical Path: 46.81s
    INFO: 154 processes: 154 processwrapper-sandbox.
    FAILED: Build did NOT complete successfully

    opened by ZombiePm 10
  • Compilation Error Occurred.

    I ran the command on ubuntu/trusty64. The following error occurred.

    $ bazel run :random_agent --define headless=false -- --length=10000 --width=640 --height=480
    INFO: Found 1 target...
    ERROR: /home/vagrant/lab/BUILD:980:1: C++ compilation of rule '//:deepmind_lab.so' failed: linux-sandbox failed: error executing command /home/vagrant/.cache/bazel/_bazel_vagrant/c1d0c0e8255f05e00abeeb68d70c5ac4/execroot/lab/_bin/linux-sandbox ... (remaining 48 argument(s) skipped).
    python/dmlab_module.c:29:31: fatal error: numpy/arrayobject.h: No such file or directory
     #include "numpy/arrayobject.h"
                                   ^
    compilation terminated.
    WARNING: Cannot delete sandbox directory after action execution: /home/vagrant/.cache/bazel/_bazel_vagrant/c1d0c0e8255f05e00abeeb68d70c5ac4/bazel-sandbox/867732be-eacb-4fb1-b1ab-3e24f38afc93-13 (java.io.IOException: /home/vagrant/.cache/bazel/_bazel_vagrant/c1d0c0e8255f05e00abeeb68d70c5ac4/bazel-sandbox/867732be-eacb-4fb1-b1ab-3e24f38afc93-13/execroot/lab (Device or resource busy)).
    Target //:random_agent failed to build
    Use --verbose_failures to see the command lines of failed build steps.
    INFO: Elapsed time: 48.362s, Critical Path: 4.48s
    ERROR: Build failed. Not running target.
    
    

    I've already installed numpy on python2.7. Please help.

    opened by yama2akira 9
  • dm-lab in Keras

    Heya,

    I'm currently re-implementing the vector-based navigation architecture of DeepMind's Nature paper for TensorFlow 2. However, there seems to be an issue with the Python API's environment when it is used with the Keras model API. In particular, I do something like:

    ...
    observations = ['RGB', 'DEBUG.POS.ROT', 'DEBUG.POS.TRANS']
    env = deepmind_lab.Lab(level, observations,
                           config=lab_config, renderer='hardware')
    env.reset()
    ...
    def generate_batch(env, batch_size=10):
        while True:
            ...
            obs = env.observations()
            ...
            x, y = ...  # some computation
            env.reset()
            ...
            yield x, y

    model.fit_generator(
        generate_batch(env, batch_size=10),
        steps_per_epoch=1000,
        verbose=1,
        use_multiprocessing=False)
    
    

    Depending on whether I use env.reset() or not, it sometimes throws an error (related to: https://github.com/deepmind/lab/issues/134). Even when it does not explicitly complain, it always stops within the first batch. Since TensorFlow 2 seems to convert the generate_batch function into some eager-execution/autograph construct, I am quite confused about whether the env object gets copied or whether something else nasty is going on under the hood. I hope someone has more clue than I do :)

    The agent I am using performs random actions (so it should not be the same state sequence every time, especially because the random seed is not statically set).

    opened by uahic 9
  • Failed to find function dmlab_connect in library!  RuntimeError: Failed to connect RL API

    Hi, I used Git to clone episodic-curiosity and DeepMind Lab locally as required and tested DMLab with the bazel run :python_random_agent command; it ran without any problem. But when I apply the patch to DeepMind Lab:

    git checkout 7b851dcbf6171fa184bf8a25bf2c87fe6d3f5380
    git checkout -b modified_dmlab
    git apply ../third_party/dmlab/dmlab_min_goal_distance.patch

    the error Failed to find function dmlab_connect in library! RuntimeError: Failed to connect RL API occurs when I run bazel run :python_random_agent again. I have checked that the runfiles directory is wrong, but I don't know how to set it; deepmind_lab.set_runfiles_path(path) is also mentioned in the Python API. My environment: Ubuntu 16.04, Anaconda3, Python 3.6. I hope you can help me because it means a lot to me.

    opened by o00000o 0
  • Reset() environment to a specific state?

    Since I am doing some research with DMlab, I was wondering if there are possible solutions to manually reset the environment to a specific state rather than the initial state?

    opened by Jthon-lab 2
  • RuntimeError: Failed to connect RL API

    I followed the steps in https://github.com/deepmind/lab/blob/master/docs/users/build.md. This command runs without problems:

    bazel run :python_random_agent --define graphics=sdl -- --length=10000 --width=640 --height=480

    bazel build -c opt //:deepmind_lab.so and bazel test -c opt //python/tests:python_module_test also work with no problem. But RuntimeError: Failed to connect RL API appears when I run bazel run -c opt //:python_random_agent.

    opened by o00000o 0
  • ImportError: deepmind_lab.so: undefined symbol: PyString_Type error occurred when I was running DeepMind Lab

    My environment is Ubuntu 16.04 with Python 3.6 and the latest version of Bazel.

    opened by o00000o 5
  • RuntimeError: Failed to connect RL API

    Hello, I am learning the project. When I run

    python episodic-curiosity/scripts/launcher_script.py --workdir=/tmp/ec_workdir --method=ppo_plus_ec --scenario=sparseplusdoors

    it shows:

    RuntimeError: Failed to connect RL API
      In call to configurable 'DMLabWrapper' ()
    Failed to find function dmlab_connect in library!

    How can I fix it?

    opened by Jingjinganhao 4
  • Make random maze as Capture The Flag mode

    Hello,

    Currently, I am trying to implement the paper 'Human-level performance in first-person multiplayer games with population-based deep reinforcement learning'.

    I can make a simple 1 vs 1 CTF game using GtkRadiant and Lab.

    However, it seems the paper uses a maze generation method to create varied environments. It is a little hard for me to reproduce because of the lack of information.

    Would you provide a little hint about how to build a CTF game using maze generation, if possible?

    Thank you

    opened by kimbring2 2
  • How to convert the raw depth values to real values?

    I find that the depth values are integers up to 255. How do I convert them to the same scale as the agent coordinates?

    opened by student-petercai 2
  • how to activate rendering

    I built deepmind_lab.so, libdmlab_headless_hw.so and libdmlab_headless_sw.so with --define=osmesa_or_glx.

    Do I choose sdl if I want actual video rendering, just for debugging reasons?

    opened by ava6969 1
Releases
  • release-2020-12-07 (Dec 7, 2020)

    New Levels:

    1. Psychlab.

      1. contributed/psychlab/memory_suite_01/explore_goal_locations_extrapolate
      2. contributed/psychlab/memory_suite_01/explore_goal_locations_holdout_extrapolate
      3. contributed/psychlab/memory_suite_01/explore_goal_locations_holdout_interpolate
      4. contributed/psychlab/memory_suite_01/explore_goal_locations_holdout_large
      5. contributed/psychlab/memory_suite_01/explore_goal_locations_holdout_small
      6. contributed/psychlab/memory_suite_01/explore_goal_locations_interpolate
      7. contributed/psychlab/memory_suite_01/explore_goal_locations_train_large
      8. contributed/psychlab/memory_suite_01/explore_goal_locations_train_small
    2. Language binding tasks.

      1. contributed/fast_mapping/fast_mapping
      2. contributed/fast_mapping/slow_mapping

    New Features:

    1. A property system has been added that allows dynamic querying and modifying of environment state. Level scripts can register and consume custom properties.
    2. A new Python module, dmenv_module, is provided that exposes the DeepMind dm_env API.

    Minor Improvements:

    1. Quake console commands can now be issued via a write-only property.
    2. New numeric "accumulate" operations for TensorView and the Lua Tensor types: sum, product, sum-of-squares, and dot product of two tensors.

    EnvCApi Changes:

    1. "Properties" have been added to the EnvCApi. Properties may be queried, set, and enumerated.
    2. The new API version is 1.4 (up from 1.3).
    3. The EnvCApi function fps is now deprecated; environments should instead use the new property system to communicate this information.

    Bug Fixes:

    1. Fix observation 'VEL.ROT' to allow non-zero values when combined with pixel observations. Previously, the presence of pixel observations caused the angular velocity information to be lost due to a logic error.
  • release-2019-10-07 (Oct 7, 2019)

    New Levels:

    1. Psychlab.

      1. contributed/psychlab/cued_temporal_production
      2. contributed/psychlab/memory_suite_01/arbitrary_visuomotor_mapping_train
      3. contributed/psychlab/memory_suite_01/arbitrary_visuomotor_mapping_holdout_interpolate
      4. contributed/psychlab/memory_suite_01/arbitrary_visuomotor_mapping_holdout_extrapolate
      5. contributed/psychlab/memory_suite_01/change_detection_train
      6. contributed/psychlab/memory_suite_01/change_detection_holdout_interpolate
      7. contributed/psychlab/memory_suite_01/change_detection_holdout_extrapolate
      8. contributed/psychlab/memory_suite_01/continuous_recognition_train
      9. contributed/psychlab/memory_suite_01/continuous_recognition_holdout_interpolate
      10. contributed/psychlab/memory_suite_01/continuous_recognition_holdout_extrapolate
      11. contributed/psychlab/memory_suite_01/what_then_where_train
      12. contributed/psychlab/memory_suite_01/what_then_where_holdout_interpolate
      13. contributed/psychlab/ready_set_go
      14. contributed/psychlab/temporal_bisection
      15. contributed/psychlab/temporal_discrimination
      16. contributed/psychlab/visuospatial_suite/memory_guided_saccade
      17. contributed/psychlab/visuospatial_suite/odd_one_out
      18. contributed/psychlab/visuospatial_suite/pathfinder
      19. contributed/psychlab/visuospatial_suite/pursuit
      20. contributed/psychlab/visuospatial_suite/visual_match
      21. contributed/psychlab/visuospatial_suite/visually_guided_antisaccade
      22. contributed/psychlab/visuospatial_suite/visually_guided_prosaccade

    Minor Improvements:

    1. The game demo executable can now print observations at each step.

    EnvCApi Changes:

    1. The meaning of major and minor versions and the resulting notions of stability are clarified. The new API version is 1.3 (up from 1.2).
    2. The EnvCApi act function is now deprecated in favour of two finer-grained functions: a call to act should be replaced by a call to act_discrete to set discrete actions, followed by an optional call to act_continuous to set continuous actions. (DeepMind Lab does not use continuous actions.)
    3. New support for "text actions", which can be set with the new act_text API function. (DeepMind Lab does not use text actions.)

    Bug Fixes:

    1. Observation 'DEBUG.CAMERA_INTERLEAVED.TOP_DOWN' is now correct for levels dmlab30/explore_object_rewards_{few,many}.

      An error is now raised if there is not enough space to place every possible room (regardless of whether the random generation actually produces a room of excessive size) and if a non-zero number of rooms was requested.

      The affected levels have been updated and will generate layouts similar to before, but the whole maze is offset by 100 units, and object placements will change.

    2. Fix top-down camera for language levels.

    3. Correct typo in bot Leonis, skill level 1, based on OpenArena's bot code gargoyle_c.c.

    4. Tensor scalar operations using arrays now work similarly to the way they do with single values.

  • release-2019-02-04 (Feb 4, 2019)

    New Levels:

    1. Psychlab.

      1. contributed/psychlab/harlow

    Minor Improvements:

    1. Improve documentation of how to configure non-hermetic dependencies (Lua, Python, NumPy).
    2. Add 'allowHoldOutLevels' setting to allow running of levels that should not be trained on, but held out for evaluation.
    3. Add logging library 'common.log', which provides the ability to control which log messages are emitted via the setting 'logLevel'.
    4. Update the ioq3 upstream code to the latest state.
    5. Lua 5.1 is now downloaded and built from source, and is thus no longer a required local dependency.
    6. A minimal version of the "realpath" utility is now bundled with the code, and thus "realpath" is no longer a required local dependency.

    Bug Fixes:

    1. Prevent missing sounds from causing clients to disconnect.
    2. Fix a bug in the call of the theme callback 'placeFloorModels', which had caused an "'index' is missing" error during compilation of text levels with texture sets that use floor models, such as MINESWEEPER, GO, and PACMAN.
    3. Fix bug where levels 'keys_doors_medium', 'keys_doors_random' and 'rooms_keys_doors_puzzle' would not accept the common 'logLevel' setting.
    4. Expose a 'demofiles' command line flag for the Python random agent, without which the agent was not able to record or play back demos.
    5. Fix a memory deallocation order error introduced by an earlier commit.
  • release-2018-06-20 (Jun 20, 2018)

    New Levels:

    1. Psychlab.

      1. contributed/psychlab/glass_pattern_detection
      2. contributed/psychlab/landoltC_identification
      3. contributed/psychlab/motion_discrimination{,_easy}
      4. contributed/psychlab/multiple_object_tracking{,_easy}
      5. contributed/psychlab/odd_one_out

    Bug Fixes:

    1. Setting the Python level cache to None now means the same as not setting it at all.
    2. Change Python module initialization in Python-3 mode to make PIP packages work in Python 3.
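
    The level cache mentioned above is an object passed to the environment constructor to avoid recompiling levels. A minimal local-disk sketch, assuming the fetch(key, pk3_path)/write(key, pk3_path) interface from the Python API docs (fetch copies a previously built .pk3 into place and reports a hit; write stores a newly built one):

    ```python
    import os
    import shutil

    class LocalLevelCache(object):
        """Sketch of a local-disk level cache for compiled .pk3 levels."""

        def __init__(self, cache_dir):
            self._cache_dir = cache_dir
            os.makedirs(cache_dir, exist_ok=True)

        def fetch(self, key, pk3_path):
            cached = os.path.join(self._cache_dir, key)
            if os.path.isfile(cached):
                shutil.copyfile(cached, pk3_path)  # cache hit
                return True
            return False

        def write(self, key, pk3_path):
            cached = os.path.join(self._cache_dir, key)
            if not os.path.isfile(cached):
                shutil.copyfile(pk3_path, cached)

    # The cache would then be passed to the constructor, e.g.:
    #   env = deepmind_lab.Lab(level, observations,
    #                          level_cache=LocalLevelCache('/tmp/level_cache'))
    ```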

    Minor Improvements:

    1. Add support for absl::variant to lua::Push and lua::Read.
    2. The demo :game has a new flag --start_index to start at an episode index other than 0.
    3. Add a console command dm_pickup to pick up an item identified by its id.
    4. More Python demos and tests now work with Python 3.
    5. Add a shader for rendering decals with transparency.
  • release-2018-05-15(May 15, 2018)

    New Levels:

    1. DMLab-30.

      1. contributed/dmlab30/psychlab_arbitrary_visuomotor_mapping
      2. contributed/dmlab30/psychlab_continuous_recognition
    2. Psychlab.

      1. contributed/psychlab/arbitrary_visuomotor_mapping
      2. contributed/psychlab/continuous_recognition

    New Features:

    1. Support for level caching for improved performance in the Python module.
    2. Add the ability to spawn pickups dynamically at arbitrary locations.
    3. Add implementations to read datasets including Cifar10 and Stimuli.
    4. Add the ability to specify custom actions via 'customDiscreteActionSpec' and 'customDiscreteAction' callbacks.

    Bug Fixes:

    1. Fix off-by-one errors in playerId and otherPlayerId in 'game_rewards.lua'.
    2. Require the playerId passed to game:addScore to be one-indexed instead of zero-indexed, and allow game:addScore to be used without a playerId.
    3. game:renderCustomView now renders the view with top-left as the origin. The previous behaviour can be achieved by calling reverse(1) on the returned tensor.
    4. Fix a bug in image.scale whereby the offset into the data was erroneously ignored.
    5. Fix a typo in a require statement in visual_search_factory.lua.
    6. Fix a few erroneous dependencies on Lua dictionary iteration order.
    7. game:addScore now works even on the final frame of an episode.
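
    The origin change in item 3 amounts to a vertical flip of the returned image tensor: the Lua tensor's reverse(1) reverses the row dimension. In NumPy terms (a sketch of the equivalent operation, not the Lua tensor API itself), restoring the old bottom-left-origin layout from a top-left-origin array looks like:

    ```python
    import numpy as np

    # A tiny 2x2 RGB image with top-left origin: row 0 is the top of the image.
    top_left = np.arange(2 * 2 * 3).reshape(2, 2, 3)

    # Reversing the row axis (the NumPy analogue of the Lua tensor's
    # reverse(1)) yields the previous bottom-left-origin layout.
    bottom_left = top_left[::-1]

    assert (bottom_left[0] == top_left[1]).all()  # old first row was the bottom
    ```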

    Minor Improvements:

    1. Moved .map files into assets/maps/src and .bsp files into assets/maps/built. Added further pre-built maps, which removes the need for the expensive :map_assets build step.
    2. Allow game to be rendered with top-left as origin instead of bottom-left.
    3. Add 'mixerSeed' setting to change behaviour of all random number generators.
    4. Support for BGR_INTERLEAVED and BGRD_INTERLEAVED observation formats.
    5. Add a Lua API to load PNGs from file contents.
    6. Add 'eyePos' to playerInfo() for a more accurate eye position of the player; use it in place of player position + height.
    7. Add support for absl::string_view to lua::Push and lua::Read.
    8. Allow player model to be overridden via 'playerModel' callback.
    9. Add ability to specify custom actions via 'customDiscreteActionSpec' and 'customDiscreteAction' callbacks.
    10. Add game:console command to issue Quake 3 console commands directly.
    11. Add clamp to tensor operations.
    12. Add new callback api:newClientInfo, allowing each client to intercept when players are loading.
    13. Skymaze level generation is now restricted to produce only 100000 distinct levels. This allows for caching to avoid expensive recompilations.
    14. Add cvars 'cg_drawScriptRectanglesAlways' and 'cg_drawScriptTextAlways' to enable script rendering when reducedUI or minimalUI is enabled.
    15. All pickup types can now choose their movement type separately, and in particular, all pickup types can be made static. Two separate table entries are now specified for an item, 'typeTag' and 'moveType'.

    Deprecated Features:

    1. Observation format names RGB_INTERLEAVED and RGBD_INTERLEAVED replace RGB_INTERLACED and RGBD_INTERLACED, respectively. The old format names are deprecated and will be removed in a future release.
    2. The pickup item's tag member is now called moveType. The old name is deprecated and will be removed in a future release.
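
    The renamed formats describe the channel layout of interleaved observations. An RGB_INTERLEAVED frame can be viewed in BGR channel order (e.g. for OpenCV-style consumers) with a simple channel reversal; a NumPy sketch:

    ```python
    import numpy as np

    # An RGB_INTERLEAVED observation has shape (height, width, 3) with
    # channels in R, G, B order.
    rgb = np.zeros((4, 4, 3), dtype=np.uint8)
    rgb[..., 0] = 255  # pure red

    # Reversing the channel axis gives the BGR layout.
    bgr = rgb[..., ::-1]

    assert bgr[0, 0, 2] == 255  # red is now the last channel
    assert bgr[0, 0, 0] == 0
    ```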
  • release-2018-02-07(Feb 7, 2018)

    New Levels:

    1. DMLab-30.

      1. contributed/dmlab30/rooms_collect_good_objects_{test,train}
      2. contributed/dmlab30/rooms_exploit_deferred_effects_{test,train}
      3. contributed/dmlab30/rooms_select_nonmatching_object
      4. contributed/dmlab30/rooms_watermaze
      5. contributed/dmlab30/rooms_keys_doors_puzzle
      6. contributed/dmlab30/language_select_described_object
      7. contributed/dmlab30/language_select_located_object
      8. contributed/dmlab30/language_execute_random_task
      9. contributed/dmlab30/language_answer_quantitative_question
      10. contributed/dmlab30/lasertag_one_opponent_small
      11. contributed/dmlab30/lasertag_three_opponents_small
      12. contributed/dmlab30/lasertag_one_opponent_large
      13. contributed/dmlab30/lasertag_three_opponents_large
      14. contributed/dmlab30/natlab_fixed_large_map
      15. contributed/dmlab30/natlab_varying_map_regrowth
      16. contributed/dmlab30/natlab_varying_map_randomized
      17. contributed/dmlab30/skymaze_irreversible_path_hard
      18. contributed/dmlab30/skymaze_irreversible_path_varied
      19. contributed/dmlab30/psychlab_sequential_comparison
      20. contributed/dmlab30/psychlab_visual_search
      21. contributed/dmlab30/explore_object_locations_small
      22. contributed/dmlab30/explore_object_locations_large
      23. contributed/dmlab30/explore_obstructed_goals_small
      24. contributed/dmlab30/explore_obstructed_goals_large
      25. contributed/dmlab30/explore_goal_locations_small
      26. contributed/dmlab30/explore_goal_locations_large
      27. contributed/dmlab30/explore_object_rewards_few
      28. contributed/dmlab30/explore_object_rewards_many

    New Features:

    1. Basic support for demo recording and playback.

    Minor Improvements:

    1. Add a mechanism to build DeepMind Lab as a PIP package.
    2. Extend basic testing to all levels under game_scripts/levels.
    3. Add settings minimalUI and reducedUI to avoid rendering parts of the HUD.
    4. Add teleported flag to game:playerInfo() to tell whether a player has teleported that frame.
    5. Add Lua functions countEntities and countVariations to the maze generation API to count the number of occurrences of a specific entity or variation, respectively.

    Bug Fixes:

    1. Fix out-of-bounds access in Lua 'image' library.
    2. Fix off-by-one error in renderergl1 grid mesh rendering.
  • release-2018-01-26(Jan 26, 2018)

    New Levels:

    1. Psychlab, a platform for implementing classical experimental paradigms from cognitive psychology.

      1. contributed/psychlab/sequential_comparison
      2. contributed/psychlab/visual_search

    New Features:

    1. Extend functionality of the built-in tensor Lua library.
    2. Add built-in image Lua library for loading and scaling PNGs.
    3. Add error handling to the env_c_api (version 1.1).
    4. Add ability to create events from Lua scripts.
    5. Add ability to retrieve game entity from Lua scripts.
    6. Add ability to create pickup models during level load.
    7. Add ability to update textures from script after the level has loaded.
    8. Add Lua customisable themes. Note: This change renames helpers in maze_generation to be in lowerCamelCase (e.g. MazeGeneration -> mazeGeneration).
    9. The directory game_scripts has moved out of the assets directory, and level scripts now live separately from the library code in the levels subdirectory.

    Minor Improvements:

    1. Remove unnecessary dependency of map assets on Lua scripts, preventing time-consuming rebuilding of maps when scripts are modified.
    2. Add ability to disable bobbing of reward and goal pickups.
    3. The setting controls (with values internal, external) has been renamed to nativeApp (with values true, false, respectively). When set to true, programs linked against game_lib_sdl will use the native SDL input devices.
    4. Change LuaSnippetEmitter methods to use table call conventions.
    5. Add config variable for monochromatic lightmaps ('r_monolightmaps'). Enabled by default.
    6. Add config variable to limit texture size ('r_textureMaxSize').
    7. api:modifyTexture must now return whether the texture was modified.
    8. Add ability to adjust rewards.
    9. Add ability to raycast between different points on the map.
    10. Add ability to test whether a view vector is within an angle range within an oriented view frame.

    Bug Fixes:

    1. Increase current score storage from short to long.
    2. Fix ramp jump velocity in level lt_space_bounce_hard.
    3. Fix Lua function 'addScore' from module 'dmlab.system.game' to allow negative scores to be added to a player.
    4. Remove some undefined behaviour in the engine.
    5. Reduce inaccuracies related to angle conversion and normalization.
    6. Behavior of team spawn points now matches that of player spawn points. The 'randomAngleRange' spawnVar must be set to 0 to match the previous behavior.
  • release-2016-12-06(Dec 8, 2016)
