
Overview

Godot RL Agents

Godot RL Agents is a fully open-source package that gives video game creators, AI researchers and hobbyists the opportunity to learn complex behaviors for their non-player characters or agents. This repository provides:

  • An interface between games created in Godot and Machine Learning algorithms running in Python (a minimal connection sketch follows just below this list)
  • Access to 21 state-of-the-art Machine Learning algorithms, provided by the Ray RLlib framework
  • Support for memory-based agents, with LSTM or attention-based interfaces
  • Support for 2D and 3D games
  • A suite of AI sensors to augment your agent's capacity to observe the game world
  • Godot and Godot RL Agents are completely free and open source under the very permissive MIT license. No strings attached, no royalties, nothing.
(Trailer video: godot_rl_agents_trailer_v01_20211008.mp4)
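To make the Godot–Python interface concrete, here is a minimal connection sketch. The module path and constructor arguments for GodotEnv are taken from the tracebacks quoted in the comments further down this page; the reset/step loop follows the usual Gym convention, so exact method signatures and the default port may differ between releases. Treat it as illustrative, not canonical.

    # Minimal sketch, assuming the GodotEnv class in godot_rl.core.godot_env
    # (module path and constructor arguments taken from tracebacks quoted below).
    from godot_rl.core.godot_env import GodotEnv

    # With no exported build specified, the environment waits for a game launched
    # from the Godot editor to connect on the given TCP port.
    # The port value here is a placeholder, not a documented default.
    env = GodotEnv(port=11008, seed=0)

    obs = env.reset()
    for _ in range(1000):
        action = env.action_space.sample()          # random policy, just to exercise the loop
        obs, reward, done, info = env.step(action)  # classic Gym-style 4-tuple; may differ per release
        if done:
            obs = env.reset()

    env.close()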

Contents

  1. Motivation
  2. Citing Godot RL Agents
  3. Installation
  4. Examples
  5. Documentation
  6. Roadmap
  7. FAQ
  8. Licence
  9. Acknowledgments
  10. References

Motivation

Over the next decade, advances in AI algorithms, notably in the fields of Machine Learning and Deep Reinforcement Learning, are primed to revolutionize the video game industry. Customizable enemies, worlds and storytelling will lead to diverse gameplay experiences and new genres of games. Currently the field is dominated by large organizations and pay-to-use engines that have the budget to create such AI-enhanced agents. The objective of the Godot RL Agents package is to lower the barrier to entry so that game developers can take their idea from creation to publication end-to-end with a free and open-source package.

Citing Godot RL Agents

@misc{beeching2021godotrlagents,
  author = {Edward Beeching},
  title = {Godot RL agents},
  year = {2021},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/edbeeching/godot_rl_agents}},
}

Installation

Please follow the installation instructions to install Godot RL Agents.

Examples

We provide several reference implementations and instructions for implementing your own environment; please refer to the Examples documentation.
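Depending on the version installed, examples are typically launched with the gdrl entry point (see the commands quoted in the comments below). As a rough illustration of what the Python training side can look like, here is a hedged sketch built around the Stable Baselines 3 wrapper that appears in one of the tracebacks further down this page; the wrapper name and its zero-argument constructor come from that traceback, while the policy choice and hyperparameters are illustrative assumptions.

    # Hedged sketch: StableBaselinesGodotEnv and its module path come from a
    # traceback quoted below; the rest is standard Stable Baselines 3 usage.
    from stable_baselines3 import PPO
    from godot_rl.wrappers.stable_baselines_wrapper import StableBaselinesGodotEnv

    # With no arguments (as in the traceback), the environment waits for a game
    # launched from the Godot editor to connect before training starts.
    env = StableBaselinesGodotEnv()

    # "MultiInputPolicy" assumes a dict observation space (e.g. sensors plus raycasts);
    # use "MlpPolicy" if your observations are a single flat vector.
    model = PPO("MultiInputPolicy", env, verbose=1)
    model.learn(total_timesteps=100_000)
    model.save("ppo_godot_example")  # hypothetical output name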

Creating custom environments

Once you have studied the example environments, you can follow the instructions in Custom environments in order to make your own.
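Once you have exported your custom environment, the Python side mainly needs to know where the binary lives. The sketch below mirrors the RLlib-style configuration dump quoted in one of the issue reports further down this page; the env_path value is a hypothetical custom export, and the keys shown are a trimmed subset of that dump rather than a guaranteed schema.

    # Hedged sketch: structure mirrors the PPO config dump quoted in an issue below.
    # "envs/builds/MyGame/my_game.x86_64" is a hypothetical path to your own export.
    config = {
        "algorithm": "PPO",
        "stop": {"episode_reward_mean": 5000, "training_iteration": 1000},
        "config": {
            "env": "godot",
            "env_config": {
                "env_path": "envs/builds/MyGame/my_game.x86_64",  # your exported build
                "show_window": False,
                "seed": 0,
            },
            "framework": "torch",
            "num_workers": 4,
            "num_envs_per_worker": 16,
            "train_batch_size": 1024,
            "lr": 0.0003,
        },
    }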

Roadmap

A number of features will soon be available in versions 0.2.0 and 0.3.0. Refer to the Roadmap for more information.

FAQ

  1. Why have we developed Godot RL Agents? The objectives of the framework are to:
  • Provide a free and open source tool for Deep RL research and game development.
  • Enable game creators to imbue their non-player characters with unique behaviors.
  • Allow for automated gameplay testing through interaction with an RL agent.
  2. How can I contribute to Godot RL Agents? Please try it out, find bugs and either raise an issue or, if you fix them yourself, submit a pull request.
  3. When will you be providing Mac support? I would like to provide this ASAP, but I do not own a Mac so I cannot perform any manual testing of the codebase.
  4. Can you help with my game project? If the game examples do not provide enough information, reach out to us on GitHub and we may be able to provide some advice.
  5. How similar is this tool to Unity ML-Agents? We are inspired by the Unity ML-Agents Toolkit and make no effort to hide it.

Licence

Godot RL Agents is MIT licensed. See the LICENSE file for details.

"Cartoon Plane" (https://skfb.ly/UOLT) by antonmoek is licensed under Creative Commons Attribution (http://creativecommons.org/licenses/by/4.0/).

Acknowledgments

We thank the authors of the Godot Engine for providing such a powerful and flexible game engine for AI agent development. We thank the developers of Ray and Stable Baselines for creating easy-to-use and powerful RL training frameworks. We thank the creators of the Unity ML-Agents Toolkit, which inspired us to create this work.

References

Comments
  • How do I use rllib for the examples provided?


    So, I found out that sample-factory is not supported on Windows, and rllib is the only backend that installed successfully on my PC. How can I use rllib to run the provided examples and make my own RL environments with it?

    opened by ryash072007 13
  • Unable to install RL agents.


    It says package not found:

    (base) PS C:\Users\Jetpackjules\Downloads\godot_rl_agents-0.2.2> conda env create
    Collecting package metadata (repodata.json): done
    Solving environment: failed

    ResolvePackageNotFound:

    • libffi=3.3
    • libunistring=0.9.10
    • libopus=1.3.1
    • libtasn1=4.16.0
    • openh264=2.1.1
    • x264=1!157.20191217
    • libidn2=2.3.2
    • libvpx=1.7.0
    • _openmp_mutex=4.5
    • lame=3.100
    • ncurses=6.3
    • gmp=6.2.1
    • freetype=2.11.0
    • gnutls=3.6.15
    • readline=8.1.2
    • nettle=3.7.3
    • libgcc-ng=9.3.0
    • libgomp=9.3.0
    • libstdcxx-ng=9.3.0
    • ld_impl_linux-64=2.35.1
    opened by Jetpackjules11 7
  • Installation Help


    I am a complete novice with GitHub and conda and I am having trouble installing (likely user error). Looking for specific help or general guidance on where to go for help. I am on Windows. It seems that solving the environment fails; maybe it has to do with the linux-64 lines or the prefix at the bottom of the .yml file pointing to an unknown directory. Thanks in advance for any advice.

    Installed the full Anaconda so I could use the Navigator, opened a PowerShell prompt, cd'd to the directory with the godot_rl_agents folder and environment.yml, and ran "conda env create". The output was "Collecting package metadata (repodata.json): done Solving environment: failed

    ResolvePackageNotFound:

    • ld_impl_linux-64=2.35.1"
    opened by Quantemplation 4
  • Solving environment: failed  ResolvePackageNotFound when creating environment in Windows


    Hello Ed!

    I've tried following the install instructions for Windows but I get the following error:

    (base) PS F:\Repos\godot_rl_agents> conda env create
    Collecting package metadata (repodata.json): done
    Solving environment: failed
    
    ResolvePackageNotFound:
      - zstd==1.4.9=haebb681_0
      - openssl==1.1.1m=h7f8727e_0
      - cudatoolkit==11.3.1=h2bc3f7f_2
      - _openmp_mutex==4.5=1_gnu
      - jpeg==9d=h7f8727e_0
      - freetype==2.11.0=h70c0345_0
      - libstdcxx-ng==9.3.0=hd4cf53a_17
      - ca-certificates==2022.2.1=h06a4308_0
      - lz4-c==1.9.3=h295c915_1
      - nettle==3.7.3=hbbd107a_1
      - mkl_fft==1.3.1=py38hd3c417c_0
      - lame==3.100=h7b6447c_0
      - bzip2==1.0.8=h7b6447c_0
      - gnutls==3.6.15=he1e5248_0
      - ld_impl_linux-64==2.35.1=h7274673_9
      - libgomp==9.3.0=h5101ec6_17
      - openh264==2.1.1=h4ff587b_0
      - pytorch==1.11.0=py3.8_cuda11.3_cudnn8.2.0_0
      - certifi==2021.10.8=py38h06a4308_2
      - x264==1!157.20191217=h7b6447c_0
      - libwebp-base==1.2.2=h7f8727e_0
      - ncurses==6.3=h7f8727e_2
      - pillow==9.0.1=py38h22f2fdc_0
      - cryptography==36.0.0=py38h9ce1e76_0
      - mkl-service==2.4.0=py38h7f8727e_0
      - lcms2==2.12=h3be6417_0
      - libuv==1.40.0=h7b6447c_0
      - gmp==6.2.1=h2531618_2
      - tk==8.6.11=h1ccaba5_0
      - python==3.8.12=h12debd9_0
      - libvpx==1.7.0=h439df22_0
      - numpy==1.21.2=py38h20f2e39_0
      - mkl_random==1.2.2=py38h51133e4_0
      - libunistring==0.9.10=h27cfd23_0
      - pip==21.2.4=py38h06a4308_0
      - mkl==2021.4.0=h06a4308_640
      - xz==5.2.5=h7b6447c_0
      - intel-openmp==2021.4.0=h06a4308_3561
      - ffmpeg==4.2.2=h20bf706_0
      - libtasn1==4.16.0=h27cfd23_0
      - numpy-base==1.21.2=py38h79a1101_0
      - brotlipy==0.7.0=py38h27cfd23_1003
      - libopus==1.3.1=h7b6447c_0
      - libtiff==4.2.0=h85742a9_0
      - libwebp==1.2.2=h55f646e_0
      - libffi==3.3=he6710b0_2
      - libgcc-ng==9.3.0=h5101ec6_17
      - libidn2==2.3.2=h7f8727e_0
      - setuptools==58.0.4=py38h06a4308_0
      - pysocks==1.7.1=py38h06a4308_0
      - zlib==1.2.11=h7f8727e_4
      - sqlite==3.38.0=hc218d9a_0
      - giflib==5.2.1=h7b6447c_0
      - readline==8.1.2=h7f8727e_1
      - libpng==1.6.37=hbc83047_0
      - cffi==1.15.0=py38hd667e15_1
    

    It seems like conda is unable to find those packages on Windows. I think it's due to the build numbers (ex zstd==1.4.9=haebb681_0) referencing a build for a different platform. I've created a new environment specification where I've removed them with conda env export -n gdrl_conda -f .\environment.yml --no-builds and was able to create the environment with the original command conda env create.

    opened by PhilippeMarcotte 4
  • People who want to use SF in windows, read this:


    For people who want to use SF on Windows because of its features, I recommend WSL. I'll update this issue with my progress and possible problems you may face trying to get WSL set up and/or get SF running in it.

    opened by ryash072007 3
  • Training stuck in "PENDING" status and editor not connecting


    I followed the installation instructions provided and everything goes well, but I couldn't train or use the pretrained models from any of the example envs. First of all, when I use the following command:

    gdrl --env_path envs/builds/JumperHard/jumper_hard.x86_64 --config_path envs/configs/ppo_config_jumper_hard.yaml

    It says

    usage: gdrl [-h] [--env_path ENV_PATH] [-f CONFIG_FILE] [-c RESTORE] [-e]
    gdrl: error: unrecognized arguments: --config_path envs/configs/ppo_config_jumper_hard.yaml

    So I just changed the argument --config_path to -f and now it works, but...

    == Status ==
    Memory usage on this node: 6.1/15.5 GiB
    Using FIFO scheduling algorithm.
    Resources requested: 0/4 CPUs, 0/0 GPUs, 0.0/7.38 GiB heap, 0.0/3.69 GiB objects
    Result logdir: /home/hibiscus-tea/ray_results/PPO/jumper_hard
    Number of trials: 1/1 (1 PENDING)
    +-----------------------+----------+-------+
    | Trial name            | status   | loc   |
    |-----------------------+----------+-------|
    | PPO_godot_0479d_00000 | PENDING  |       |
    +-----------------------+----------+-------+

    It stays like that forever. Neither running jumper_hard.x86_64 nor running the game from the editor changes anything. If I use the pretrained model command it stays the same. I tried the same process on Windows 10 and I get the same results. I think I am missing something. The editor outputs this:

    getting command line arguments Waiting for one second to allow server to start trying to connect to server 03

    If I change the const DEFAULT_PORT to 6007 (the default godot port) it outputs this:

    getting command line arguments Waiting for one second to allow server to start trying to connect to server 02 performing handshake server disconnected, closing

    I hope you help me with this issue. This project looks amazing and I am looking forward to the multi-agents update. :)

    opened by AleryBerry 3
  • TypeError: '>=' not supported between instances of 'list' and 'int'


    Traceback (most recent call last):
      File "C:\Users\ryash\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 197, in _run_module_as_main
        return _run_code(code, main_globals, None,
      File "C:\Users\ryash\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 87, in _run_code
        exec(code, run_globals)
      File "C:\Users\ryash\Documents\Godot RL\try1\RL1\Scripts\gdrl.exe\__main__.py", line 7, in <module>
      File "C:\Users\ryash\Documents\Godot RL\try1\RL1\lib\site-packages\godot_rl\main.py", line 108, in main
        training_function(args, extras)
      File "C:\Users\ryash\Documents\Godot RL\try1\RL1\lib\site-packages\godot_rl\wrappers\stable_baselines_wrapper.py", line 78, in stable_baselines_training
        env = StableBaselinesGodotEnv()
      File "C:\Users\ryash\Documents\Godot RL\try1\RL1\lib\site-packages\godot_rl\wrappers\stable_baselines_wrapper.py", line 12, in __init__
        self.env = GodotEnv(port=port, seed=seed)
      File "C:\Users\ryash\Documents\Godot RL\try1\RL1\lib\site-packages\godot_rl\core\godot_env.py", line 44, in __init__
        self._get_env_info()
      File "C:\Users\ryash\Documents\Godot RL\try1\RL1\lib\site-packages\godot_rl\core\godot_env.py", line 235, in _get_env_info
        observation_spaces[k] = spaces.Discrete(v["size"])
      File "C:\Users\ryash\Documents\Godot RL\try1\RL1\lib\site-packages\gym\spaces\discrete.py", line 15, in __init__
        assert n >= 0
    TypeError: '>=' not supported between instances of 'list' and 'int'

    opened by ryash072007 2
  • Installation Problems


    Hi there,

    I am currently looking into your project and it looks super interesting.

    Unfortunately I have trouble installing the environment on Windows. The first errors occur when running the instruction conda env create from the installation guide (see the attached screenshot: Screenshot 2022-10-23 112009).

    Could it be that you are using Linux-only packages? _openmp_mutex=4.5 seems to be one of them. Is there a way to get this project running on Windows? That would be cool, because I am considering using it for my master's thesis.

    Cheers!

    opened by visuallization 2
  • Reward always displayed as nan


    Hello,

    I am having another issue, the rewards are always displayed as nan in the console, like this:

    == Status ==
    Current time: 2022-06-21 15:40:17 (running for 00:04:32.32)
    Memory usage on this node: 14.3/31.3 GiB
    Using FIFO scheduling algorithm.
    Resources requested: 2.0/16 CPUs, 1.0/1 GPUs, 0.0/13.01 GiB heap, 0.0/6.5 GiB objects (0.0/1.0 accelerator_type:G)
    Result logdir: /home/ls11det/ray_results/PPO/editor
    Number of trials: 1/1 (1 RUNNING)
    +-----------------------+----------+-----------------------+--------+------------------+------+----------+----------------------+----------------------+--------------------+
    | Trial name            | status   | loc                   |   iter |   total time (s) |   ts |   reward |   episode_reward_max |   episode_reward_min |   episode_len_mean |
    |-----------------------+----------+-----------------------+--------+------------------+------+----------+----------------------+----------------------+--------------------|
    | PPO_godot_0dbb4_00000 | RUNNING  | 129.217.38.190:865027 |      3 |          208.046 | 3072 |      nan |                  nan |                  nan |                nan |
    +-----------------------+----------+-----------------------+--------+------------------+------+----------+----------------------+----------------------+--------------------+
    

    I even tried just giving back a number as reward to see if any of my code was causing the issue, but it is still displayed as nan:

    func get_reward():
    	# What behavior do you want to reward, kills? penalties for death, key waypoints
    	return 0.5
    

    I also added a print in the sync.gd script where it collects and sends the reward, and it picks up the 0.5 correctly. Is there anything I am missing here?

    opened by themars2011 2
  • BallChase example: Does best_fruit_distance need a reset after collection?


    I am not sure if I understand the examples correctly. In the BallChase example best_fruit_distance is initialized and reset in the reset() method. But shouldn't it also be reset after every fruit collection? Only the distance reduction to the first fruit gets rewarded at the moment.

    bug 
    opened by mischkadb 2
  • Errors with default config: KeyError "observation_space"


    Hi, I just installed godot_rl_agents as described in the installation instructions. I have been trying to train an agent for one of the default envs, but I get the following error:

    (pid=38965) KeyError: 'observation_space'
    (pid=38965) SCRIPT ERROR: handle_message: Invalid get index 'type' (on base: 'Nil').
    (pid=38965)    At: res://addons/godot_rl_agents/sync.gdc:172.
    Traceback (most recent call last):
      File "/home/ashutosh/HDD/anaconda3/envs/godot_rl/bin/gdrl", line 33, in <module>
        sys.exit(load_entry_point('godot-rl-agents', 'console_scripts', 'gdrl')())
      File "/home/ashutosh/HDD/MachineLearning/godot_rl_agents/godot_rl_agents/core/main.py", line 91, in main
        results = tune.run(
      File "/home/ashutosh/HDD/anaconda3/envs/godot_rl/lib/python3.8/site-packages/ray/tune/tune.py", line 555, in run
        raise TuneError("Trials did not complete", incomplete_trials)
    

    I also manually tried printing json_dict and here are the contents:

    {'algorithm': 'PPO', 'stop': {'episode_reward_mean': 5000, 'training_iteration': 1000, 'timesteps_total': 200000000}, 'config': {'env': 'godot', 'env_config': {'framerate': None, 'action_repeat': None, 'show_window': False, 'seed': 0, 'env_path': 'envs/builds/BallChase/ball_chase.x86_64'}, 'framework': 'torch', 'lambda': 0.95, 'gamma': 0.95, 'vf_clip_param': 100.0, 'clip_param': 0.2, 'entropy_coeff': 0.001, 'entropy_coeff_schedule': None, 'train_batch_size': 1024, 'sgd_minibatch_size': 128, 'num_sgd_iter': 16, 'num_workers': 4, 'lr': 0.0003, 'num_envs_per_worker': 16, 'batch_mode': 'truncate_episodes', 'rollout_fragment_length': 32, 'num_gpus': 1, 'model': {'fcnet_hiddens': [256, 256], 'num_framestacks': 4}, 'no_done_at_end': True, 'soft_horizon': True}}
    

    Here's the full log : https://www.toptal.com/developers/hastebin/epovenonow.yaml

    Do I absolutely need to keep the Godot editor open? I'm currently using the ball_chase.x86_64 build from the repo.

    Lastly, opening an environment in Godot launches 16 agents together. Is there a way to fix this?

    opened by ashutoshbsathe 2
  • Unable to open any example in the godot editor


    I just get a message that says "the following file does not specify the version of godot with which it was created. If you proceed with opening it, it will be configured for godot's file format", and when I force open it the project immediately closes (this means I can't run "gdrl.interactive").

    I also noticed that ryash072007 managed to get sb3 working to some extent, and would greatly appreciate any advice on how to accomplish that.

    (I am using Anaconda Powershell prompt and Godot 3.5.1)

    opened by Jetpackjules11 4
  • What may be happening if Godot freezes when performing handshake?


    I'm using a Linux VM to run the sf part of the training and am using port-forwarding to allow it to communicate to my host computer. However, while performing handshake, the game just gets stuck. I have tried debugging this but nothing worked. Do you know what may be happening?

    opened by ryash072007 4
  • Export model to ONNX


    This is a suggestion/request to which I want to contribute. I have started work on this feature (committed to my fork), but I am not well versed in Torch code. I have gotten to the point where the model gets loaded from the checkpoint, but I get an error saying I need to pass a Tensor of shape [...,8] to the torch.onnx.export function.

    opened by yaelatletl 6
  • Using TorchSharp in Godot


    Hi, Ed! I have a problem using the TorchSharp NuGet lib in the Godot C# version. Every time I try to use it in Godot I get an error like:

    System.DllNotFoundException: LibTorchSharp assembly: unknown assembly type: unknown type member:

    But the same code can work in a regular console project without godot involved.

    I see you mentioned in another issue (https://github.com/virtualmlnet/hackathon-2021/issues/6#issuecomment-968059783) that you have tried TorchSharp; it seems that it can work but just does not support the ONNX format. If so, can you share how you configured the Godot project to make it work with TorchSharp? Or maybe you could share a demo project?

    opened by HangedDream 1
  • Questions on performance and headless


    Hi @edbeeching

    thanks for your API!

    I've got two questions: In your paper you state that 12k interactions per second are recorded. How many environments ran in parallel for this result? And do you need X for running environments featuring visual observations? Your roadmap says that headless mode is not supported yet.

    I'm basically looking for alternatives to ml-agents that run significantly faster. A single Unity build with only one environment is capable of generating only around 200-300 interactions per second.

    opened by MarcoMeter 1
Releases(v0.2.2)
  • v0.2.2(Apr 21, 2022)

  • v0.2.1(Mar 28, 2022)

  • v0.2.0(Mar 24, 2022)

    Implemented a number of features, bug fixes and improvements to the documentation.

    • Including an updated sensor suite.
    • New checkpoints for the updated sensors.
    • The conda environment should now work out of the box and support GPUs. #8 #9
    • Fixed a bug with the reward function in the BallChase env #11
    • Improved documentation #7
  • v0.1.0(Oct 17, 2021)

Owner
Edward Beeching
PhD Student in Deep Reinforcement Learning at INRIA, Chroma research group, INSA Lyon, France.