
Overview

Wilderness Scavenger: 3D Open-World FPS Game AI Challenge

This is a platform for intelligent agent learning based on a 3D open-world FPS game developed by Inspir.AI.

Change Log

  • 2022-05-16: improved engine backend (Linux) with better stability (v1.0)
    • Check out Supported Platforms for download links.
    • Make sure to update to the latest version of the engine if you would like to use depth map or enemy state features.
  • 2022-05-18: updated engine backend for Windows and MacOS (v1.0)

Competition Overview

With a focus on learning intelligent agents in open-world games, this year we are hosting a new contest called Wilderness Scavenger. In this new game, which features Battle Royale-style 3D open-world gameplay and random PCG-based world generation, participants must train agents that can perform subtasks common to FPS games, such as navigation, scouting, and skirmishing. To win the competition, agents must have strong perception of complex 3D environments and learn to exploit various environmental structures (such as terrain, buildings, and plants) by developing flexible strategies to gain advantages over other competitors. Despite the difficulty of this goal, we hope that this new competition can serve as a cornerstone of research on game AI for open-world games.

Features

  • A lightweight 3D open-world FPS game developed with the Unity3D game engine
  • Rendering-off game acceleration for fast training and evaluation
  • Large open world environment providing high freedom of agent behaviors
  • Highly customizable game configuration with random supply distribution and dynamic refresh
  • PCG-based map generation with randomly spawned buildings, plants and obstacles (100 training maps)
  • Interactive replay tool for game record visualization

Basic Structures

We developed this repository to provide a training and evaluation platform for researchers interested in open-world FPS game AI. To get started quickly, a typical workspace structure when using this repository can be summarized as follows:

.
├── examples  # providing starter code examples and training baselines
│   ├── envs/...
│   ├── basic.py
│   ├── basic_track1_navigation.py
│   ├── basic_track2_supply_gather.py
│   ├── basic_track3_supply_battle.py
│   ├── baseline_track1_navigation.py
│   ├── baseline_track2_supply_gather.py
│   └── baseline_track3_supply_battle.py
├── inspirai_fps  # the game play API source code
│   ├── lib/...
│   ├── __init__.py
│   ├── gamecore.py
│   ├── raycast_manager.py
│   ├── simple_command_pb2.py
│   ├── simple_command_pb2_grpc.py
│   └── utils.py
└── fps_linux  # the engine backend (Linux)
    ├── UnityPlayer.so
    ├── fps.x86_64
    ├── fps_Data/...
    └── logs/...
  • fps_linux (must be manually downloaded and unzipped into your working directory): the Linux engine backend extracted from our game development project, containing all game-related assets, binaries, and source code.
  • inspirai_fps: the Python gameplay API for agent training and testing, providing the core Game class and other useful tool classes and functions (see the usage sketch below).
  • examples: basic starter code for each game mode targeting each track of the challenge, plus our implementation of baseline solutions based on the ray.rllib reinforcement learning framework.
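
To give a sense of how these pieces fit together, here is a minimal usage sketch of the Game class assembled from the calls shown in examples/basic.py; the agent-stepping code is omitted because the exact action/state API should be taken from the starter scripts.

from inspirai_fps import Game

# Paths assume the workspace layout above, with the engine backend and map data
# unzipped next to the examples directory.
game = Game(map_dir="../map_data", engine_dir="../fps_linux")
game.set_map_id(1)   # load the valid locations of map 001
game.new_episode()   # start an episode; this loads the mesh of the selected map
# ... step the agent here following examples/basic.py ...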

Supported Platforms

We support multiple platforms with different engine backends:

Installation (from source)

To use the gameplay API, first install the inspirai_fps package with the following commands:

git clone https://github.com/inspirai/wilderness-scavenger
cd wilderness-scavenger
pip install .

We recommend installing this package with Python 3.8 (our development environment), so you may first create a virtual environment using conda and then finish the installation:

$ conda create -n WildScav python=3.8
$ conda activate WildScav
(WildScav) $ pip install .
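
After installation, a quick sanity check is to import the package from the new environment; Game and ActionVariable are the classes used throughout the examples:

(WildScav) $ python -c "from inspirai_fps import Game, ActionVariable; print('inspirai_fps OK')"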

Installation (from PyPI)

Note: the PyPI release may not be kept up to date. We strongly recommend using the installation method above.

Alternatively, you can install the package directly from PyPI. Note that this installs only the gameplay API inspirai_fps, not the engine backend, so you still need to manually download the correct engine backend from the Supported Platforms section.

pip install inspirai-fps

Loading Engine Backend

To successfully run the game, make sure the game engine backend for your platform is downloaded and that the engine_dir parameter of the Game constructor is set correctly. For example, here is a code snippet from the script examples/basic.py:

from inspirai_fps import Game, ActionVariable
...
parser.add_argument("--engine-dir", type=str, default="../fps_linux")
...
game = Game(..., engine_dir=args.engine_dir, ...)

Loading Map Data

To access features such as real-time depth map computation and randomized player spawning, you need to download the map data and load it into the Game. After that, once you turn on depth map rendering, the game server will automatically compute a depth map from the player's first-person perspective at each time step.

  1. Download the map data from Google Drive or Feishu and decompress the downloaded file to your preferred directory (e.g., <WORKDIR>/map_data).
  2. Set the map_dir parameter of the Game initializer accordingly.
  3. Set the map_id as you like.
  4. Turn on depth map computation.
  5. Turn on random start locations to spawn agents at random places.

See the following code snippet from the script examples/basic.py as an example:

from inspirai_fps import Game, ActionVariable
...
parser.add_argument("--map-id", type=int, default=1)
parser.add_argument("--use-depth-map", action="store_true")
parser.add_argument("--random-start-location", action="store_true")
parser.add_argument("--map-dir", type=str, default="../map_data")
...
game = Game(map_dir=args.map_dir, ...)
game.set_map_id(args.map_id)  # this will load the valid locations of the specified map
...
if args.use_depth_map:
    game.turn_on_depth_map()
    game.set_depth_map_size(380, 220, 200)  # width (pixels), height (pixels), depth_limit (meters)
...
if args.random_start_location:
    for agent_id in range(args.num_agents):
        game.random_start_location(agent_id, indoor=False)  # this will randomly spawn the player at a valid outdoor location, or indoor location if indoor is True
...
game.new_episode()  # start a new episode, this will load the mesh of the specified map
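
Once depth rendering is enabled, each step's observation carries the computed depth map. Below is a minimal sketch of inspecting it; the get_state accessor name is an assumption based on the starter examples, so check examples/basic.py for the exact call:

# Assumed accessor: a per-agent state object exposing a .depth attribute
# (None when depth rendering is off). The array dimensions follow the
# width/height passed to set_depth_map_size, with values capped at depth_limit.
state = game.get_state(0)
if args.use_depth_map and state.depth is not None:
    print("depth map size:", state.depth.shape)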

Gameplay Visualization

We have also developed a replay visualization tool based on the Unity3D game engine. It is similar to the spectator mode common in multiplayer FPS games, which allows users to interactively follow the gameplay. Users can view an agent's action from different perspectives and also switch between multiple agents or different viewing modes (e.g., first person, third person, free) to see the entire game in a more immersive way. Participants can download the tool for their specific platforms here:

To use this tool, follow the instructions below:

  • Decompress the downloaded file to a location of your choice.
  • Turn on the recording function with game.turn_on_record(); one replay file will be saved at the end of each episode (see the sketch below).
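
For reference, a minimal sketch of where recording fits in a training script; everything else follows examples/basic.py:

game.turn_on_record()   # call once before starting episodes
game.new_episode()
# ... run the episode as in examples/basic.py ...
# when the episode finishes, a new replay file appears under the engine's
# StreamingAssets/Replay directory (platform-specific paths listed below)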

Find the replay files under the engine directory according to your platform:

  • Linux: <engine_dir>/fps_Data/StreamingAssets/Replay
  • Windows: <engine_dir>\FPSGameUnity_Data\StreamingAssets\Replay
  • MacOS: <engine_dir>/Contents/Resources/Data/StreamingAssets/Replay

Copy the replay files you want to watch into the replay tool directory for your platform, then start the replay tool.

For Windows users:

  • Copy the replay file (e.g. xxx.bin) into <replayer_dir>/FPSGameUnity_Data/StreamingAssets/Replay
  • Run FPSGameUnity.exe to start the application.

For MacOS users:

  • Copy the replay file (e.g. xxx.bin) into <replayer_dir>/Contents/Resources/Data/StreamingAssets/Replay
  • Run fps.app to start the application.

In the replay tool, you can:

  • Select the record you want to watch from the drop-down menu and click PLAY to start playing the record.
  • During the replay, you can perform the following operations:
    • Press Tab: pause or resume
    • Press E: switch observation mode (first person, third person, free)
    • Press Q: switch between multiple agents
    • Press Esc: stop the replay and return to the main menu
Comments
  • Is headless mode possible?

    Hi, Inspir.AI. I'm trying to train my model and wonder whether a headless mode is possible. The Unity environment pauses frequently, so when training on Windows I always have to click the client window to work around it. Is headless mode possible with the Linux version?

    opened by LeejwUniverse 3
  • Abnormal movespeed values returned by the agent in the Navigation Track

    In the Navigation Track, the movespeed returned by the agent seems abnormal.

    The input actions are walkdir [0, 90, 180, 270], walkspeed [3, 6], turn_lr [-3, 0, 3].

    Printing the observations shows the agent's movespeed increasing from 0 to 3000+, rather than being in m/s as documented.

    The environment is the Linux backend (2022-05-16 version).

    opened by chenjinghs 3
  • When I first try to access this project, I can't run basic.py. I used the Linux backend.

    Running Episodes ... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0% -:--:--
    Traceback (most recent call last):
      File "/home/leopold/marl/wilderness-scavenger/examples/basic.py", line 88, in game.new_episode()
      File "/home/leopold/marl/wilderness-scavenger/inspirai_fps/gamecore.py", line 796, in new_episode self.__update_request()
      File "/home/leopold/marl/wilderness-scavenger/inspirai_fps/gamecore.py", line 783, in __update_request if self.__log_trajectory:
    AttributeError: 'Game' object has no attribute '_Game__log_trajectory'

    opened by Leopold-Fitz-AI 2
  • Error when opening the environment

    OSError: dlopen(/Users/mxj/digisky/qiyuan/wilderness-scavenger/inspirai_fps/lib/libraycaster.dylib, 0x0006): tried: '/Users/mxj/digisky/qiyuan/wilderness-scavenger/inspirai_fps/lib/libraycaster.dylib' (mach-o file, but is an incompatible architecture (have 'arm64', need 'x86_64h'))

    opened by Air-legend 2
  • About engine backend download

    • We have removed the default Linux engine backend from the source code to reduce the size of the repo.
    • We will update the download links of the engine backend for Windows, Linux, and MacOS.
    • Two download sources (Google Drive and Feishu) are provided.
    good first issue 
    opened by FateDawnLeon 2
  • Is the depth map available for all tracks?

    I checked submission_template/eval_track_1_1.py and game.turn_on_depth_map() is not called, so state.depth is None (same for the other tracks).

    Since game.set_depth_map_size(DEPTH_MAP_WIDTH, DEPTH_MAP_HEIGHT, DEPTH_MAP_FAR) is called, it seems that the depth map is available, right?

    opened by yoshinobc 1
  • Error when running python visualize_depth_map.py

    Loaded valid locations from ../map_data/001.json
    concatenating texture: may result in visual artifacts
    Server started ...
    E0527 15:19:31.224830216 2967962 fork_posix.cc:70]           Fork support is only compatible with the epoll1 and poll polling strategies
    Unity3D started ...
    Unity3D connected ...
    Server stopped ...
    Unity3D killed ...
    
    opened by 315930399 1
  • Which ray version do the baselines use?

    Running pip install ray directly and then baseline_track1 raises an error; any advice?

    Traceback (most recent call last):
      File "baseline_track1_navigation.py", line 28, in import ray
      File "/opt/conda/envs/wildscav/lib/python3.8/site-packages/ray/__init__.py", line 72, in import ray._raylet # noqa: E402
      File "python/ray/_raylet.pyx", line 108, in init ray._raylet
      File "/opt/conda/envs/wildscav/lib/python3.8/site-packages/ray/exceptions.py", line 7, in from ray.core.generated.common_pb2 import RayException, Language, PYTHON
      File "/opt/conda/envs/wildscav/lib/python3.8/site-packages/ray/core/generated/common_pb2.py", line 33, in _descriptor.EnumValueDescriptor(
      File "/opt/conda/envs/wildscav/lib/python3.8/site-packages/google/protobuf/descriptor.py", line 755, in __new__ _message.Message._CheckCalledFromGeneratedFile()
    TypeError: Descriptors cannot not be created directly. If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0. If you cannot immediately regenerate your protos, some other possible workarounds are:

    1. Downgrade the protobuf package to 3.20.x or lower.
    2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
    opened by chawuchen 0
  • Failed to load raycaster.dll on Windows

    Windows 10, Python 3.8, inspirai_fps installed via "pip install .". Running basic.py raises:

    Traceback (most recent call last):
      File "D:/inspirai_Wilderness Scavenger/code/wilderness-scavenger-master/examples/basic.py", line 53, in game = Game(map_dir=args.map_dir, engine_dir=args.engine_dir, server_port=args.port)
      File "D:\Users\Miniconda3\lib\site-packages\inspirai_fps\gamecore.py", line 364, in __init__ self._ray_tracer = RaycastManager(mesh_file_path)
      File "D:\Users\Miniconda3\lib\site-packages\inspirai_fps\raycast_manager.py", line 46, in __init__ self.ray_lib = ctypes.CDLL(lib_path, winmode=0)
      File "D:\Users\Miniconda3\lib\ctypes\__init__.py", line 381, in __init__ self._handle = _dlopen(self._name, mode)
    FileNotFoundError: Could not find module 'D:\Users\Miniconda3\lib\site-packages\inspirai_fps\lib\raycaster.dll' (or one of its dependencies). Try using the full path with constructor syntax.
    Exception ignored in: <function RaycastManager.__del__ at 0x0000024B9EFE68B0>
    Traceback (most recent call last):
      File "D:\Users\Miniconda3\lib\site-packages\inspirai_fps\raycast_manager.py", line 207, in __del__ self._free_mesh()
      File "D:\Users\Miniconda3\lib\site-packages\inspirai_fps\raycast_manager.py", line 200, in _free_mesh c_func = self.ray_lib.free_mesh
    AttributeError: 'RaycastManager' object has no attribute 'ray_lib'

    opened by HuihuiTong 0