
Overview

AnimalAI 3

AAI supports interdisciplinary research to help better understand human, animal, and artificial cognition. It aims to support AI research towards unlocking cognitive capabilities and better understanding the space of possible minds. It is designed to facilitate testing across animals, humans, and AI.

This Repo

This repo contains the AnimalAI environment, some introductory Python scripts for interacting with it, and the 900 tasks used in the original Animal-AI Olympics competition (plus some others for demonstration purposes). Details of the tasks can be found on the AAI website, where they can also be played and competition entries watched.

The environment is built using Unity ML-Agents release 2.1.0-exp.1 (Python package version 0.27.0).

The AnimalAI environment and packages are currently tested only on Linux (Ubuntu 20.04.2 LTS) with Python 3.8, but have been reported to work with Python 3.6+, other Linux distributions, Windows, and Mac.

The Unity Project for the environment is available here.

Installing

To get started you will need to:

  1. Clone this repo.
  2. Install the animalai python package and requirements by running pip install -e animalai from the root folder.
  3. Download the environment for your system:
OS        Environment link
Linux     v3.0
Mac       v3.0
Windows   v3.0

(Old v2.x versions can be found here)

Unzip the entire content of the archive into the (initially empty) env folder. On Linux you may have to make the file executable by running chmod +x env/AnimalAI.x86_64. Note that, depending on your system, the env folder should contain AnimalAI.exe, AnimalAI.x86_64, or AnimalAI.app, along with any other folders from the zip file.
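The steps above can be sketched as shell commands. The repository URL is a placeholder and the zip name assumes the Linux build; adjust both for your setup:

```shell
# 1. Clone this repo (URL is a placeholder)
git clone <repo-url> animal-ai
cd animal-ai

# 2. Install the animalai package and its requirements
pip install -e animalai

# 3. Unzip the downloaded environment into the (initially empty) env folder
unzip AnimalAI_Linux_3.0.zip -d env

# On Linux, make the binary executable
chmod +x env/AnimalAI.x86_64
```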

Tutorials and Examples

Some example scripts to get you started can be found in the examples folder. The following docs provide information on some common uses of the environment.
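As an orientation, launching the environment from Python might look like the sketch below. The import path and the captureFrameRate parameter appear in this repo's code; the other parameter names and values are assumptions, so check the scripts in the examples folder for the exact signature. This cannot run without the downloaded binary in env/.

```python
# Minimal launch sketch (assumes the animalai package is installed and the
# environment binary has been unzipped into env/). Parameter names other
# than captureFrameRate are assumptions; see examples/ for the real usage.
from animalai.envs.environment import AnimalAIEnvironment

env = AnimalAIEnvironment(
    file_name="env/AnimalAI",  # path to the downloaded binary
    play=True,                 # assumed flag for player (manual) mode
    captureFrameRate=60,       # set to 0 for faster training, no visual updates
)
env.close()
```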

Manual Control

If you launch the environment directly from the executable, or through the play.py script, it will start in player mode. There you can control the agent with the following keys:

Keyboard key    Action
W               move agent forwards
S               move agent backwards
A               turn agent left
D               turn agent right
C               switch camera
R               reset environment
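For scripted tooling it can be handy to mirror this key map in code. A minimal sketch; the strings are this table's wording, not the environment's internal action encoding:

```python
# Player-mode key bindings mirrored as a plain dict.
# Values are the table's descriptions, not AnimalAI's internal actions.
KEY_BINDINGS = {
    "W": "move agent forwards",
    "S": "move agent backwards",
    "A": "turn agent left",
    "D": "turn agent right",
    "C": "switch camera",
    "R": "reset environment",
}

def describe(key: str) -> str:
    """Return the action bound to a key, case-insensitively."""
    return KEY_BINDINGS.get(key.upper(), "unbound")
```

For example, describe("w") returns "move agent forwards", and unknown keys fall back to "unbound".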

Citing

If you use the Animal-AI environment in your work you can cite the environment paper:

Crosby, M., Beyret, B., Shanahan, M., Hernández-Orallo, J., Cheke, L. & Halina, M. (2020). The Animal-AI Testbed and Competition. Proceedings of the NeurIPS 2019 Competition and Demonstration Track, Proceedings of Machine Learning Research 123:164–176. Available here.

 @InProceedings{pmlr-v123-crosby20a, 
    title = {The Animal-AI Testbed and Competition}, 
    author = {Crosby, Matthew and Beyret, Benjamin and Shanahan, Murray and Hern\'{a}ndez-Orallo, Jos\'{e} and Cheke, Lucy and Halina, Marta}, 
    booktitle = {Proceedings of the NeurIPS 2019 Competition and Demonstration Track}, 
    pages = {164--176}, 
    year = {2020}, 
    editor = {Hugo Jair Escalante and Raia Hadsell}, 
    volume = {123}, 
    series = {Proceedings of Machine Learning Research}, 
    month = {08--14 Dec}, 
    publisher = {PMLR}, 
} 

Unity ML-Agents

The Animal-AI Olympics was built using Unity's ML-Agents Toolkit.

Juliani, A., Berges, V., Vckay, E., Gao, Y., Henry, H., Mattar, M., Lange, D. (2018). Unity: A General Platform for Intelligent Agents. arXiv preprint arXiv:1809.02627

Further, the ML-Agents documentation should be consulted if you want to make any changes.

Version History

  • v3.0. Note that due to the changes to controls and graphics, agents trained on previous versions might not perform the same.
    • Updated agent handling. The agent now comes to a stop more quickly when not moving forwards or backwards and accelerates slightly faster.
    • Added new objects, spawners, signs, goal types (see doc)
    • Added 3 animal skins to the player character.
    • Updated graphics for many objects. Default shading on many previously plain objects makes it easier to determine their location and velocity.
    • Many improvements to documentation and examples.
    • Upgraded to ML-Agents 2.1.0-exp.1 (ml-agents Python version 0.27.0)
    • Fixed various bugs.
  • v2.2.3
    • You can now specify multiple different arenas in a single yml config file, and the environment will cycle through them each time it resets
  • v2.2.2
    • Low-quality version with improved fps. (Further improvements to graphics and fps will come later.)
  • v2.2.1
    • Improved UI scaling with respect to screen size
    • Fixed an issue with cardbox objects spawning at the wrong sizes
    • Fixed an issue where the environment would time out after the time limit even when health > 0 (no longer the intended behaviour)
    • Improved the Death Zone shader for unusual zone sizes
  • v2.2.0 Health and Basic Scripts
    • Switched to health-based system (rewards remain the same).
    • Updated overlay in play mode.
    • Allow 3D hot zones and death zones and make them 3D by default in old configs.
    • Added rewards that grow/decay (currently not configurable but will be added in next update).
    • Added basic Gym Wrapper.
    • Added basic heuristic agent for benchmarking and testing.
    • Improved all other python scripts.
    • Fixed a reset environment bug when resetting during training.
    • Added the ability to set the DecisionPeriod (frameskip) when instantiating an environment.
  • v2.1.1 bugfix
    • Fixed raycast length being less than the diagonal length of the standard arena
  • v2.1 beta release
    • Upgraded to ML-Agents release 2 (0.26.0)
    • New features
      • Added raycast observations
      • Added agent global position to observations
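The multi-arena configuration mentioned under v2.2.3 can be sketched as a single yml file. The tags and field names below follow the AAI configuration docs as I understand them, so treat them as assumptions and verify against the configs shipped in this repo:

```yaml
!ArenaConfig
arenas:
  0: !Arena            # the first episode uses this arena
    t: 250             # time limit
    items:
      - !Item
        name: GoodGoal
  1: !Arena            # on reset, the environment cycles to this arena
    t: 500
    items:
      - !Item
        name: GoodGoalMulti
```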
Comments
  • Environment timed out


    Hi,

    Thank you for improving the Animal-AI testbed. While testing gymwrapper.py I received the error below. Could you help me with how to deal with this? My mlagents_envs version is 0.27.0.

    Thank you so much!

    [WARNING] Environment timed out shutting down. Killing...
    Traceback (most recent call last):
      File "gymwrapper.py", line 68, in <module>
        train_agent_single_config(configuration_file=configuration_file)
      File "gymwrapper.py", line 31, in train_agent_single_config
        captureFrameRate = captureFrameRate, #Set this so the output on screen is visible - set to 0 for faster training but no visual updates
      File "/home/jdhwang/animal-ai/animalai/envs/environment.py", line 95, in __init__
        log_folder=log_folder,
      File "/om2/user/jdhwang/anaconda3/envs/myenv/lib/python3.7/site-packages/mlagents_envs/environment.py", line 223, in __init__
        aca_output = self._send_academy_parameters(rl_init_parameters_in)
      File "/om2/user/jdhwang/anaconda3/envs/myenv/lib/python3.7/site-packages/mlagents_envs/environment.py", line 477, in _send_academy_parameters
        return self._communicator.initialize(inputs, self._poll_process)
      File "/om2/user/jdhwang/anaconda3/envs/myenv/lib/python3.7/site-packages/mlagents_envs/rpc_communicator.py", line 121, in initialize
        self.poll_for_timeout(poll_callback)
      File "/om2/user/jdhwang/anaconda3/envs/myenv/lib/python3.7/site-packages/mlagents_envs/rpc_communicator.py", line 112, in poll_for_timeout
        "The Unity environment took too long to respond. Make sure that :\n"
    mlagents_envs.exception.UnityTimeOutException: The Unity environment took too long to respond. Make sure that :
             The environment does not need user interaction to launch
             The Agents' Behavior Parameters > Behavior Type is set to "Default"
             The environment and the Python interface have compatible versions.
    
    opened by jd730 8
  • Missing Environment Download Link


    Hi, developer. I'm trying to download the Animal-AI environment from the link below: http://mdcrosby.com/builds/AnimalAI_MAC_3.0.zip. It looks like the link has expired. Could anyone fix this problem? Thanks.

    opened by shibukazu 4
  • How is the competition preparation going on?


    I'm posting this here because there is no other contact listed. The page mentions a competition coming in 2021, but 2021 is almost over. Would you kindly share any updates on this environment and the competition?

    I also hope to have comprehensive documentation for AnimalAI; it is hard to use version 3 without referring to version 2.

    opened by ehddnr747 2
  • Missing module in configuration_tutorial.ipynb


    Hi,

    I was going through the notebook configuration_tutorial.ipynb, and I got a missing module error when trying to run the environment in play mode example:

    from animalai.envs.arena_config import ArenaConfig

    ModuleNotFoundError: No module named 'animalai.envs.arena_config'

    Could it be that this notebook needs to be updated? I cannot see either any envs/arena_config.py

    Thanks!

    opened by ibagur 1
  • Questions about inference camera views


    Hi!

    Figured this would be the most effective place to ask.

    I'm doing inference on a trained agent (separately trained in PyTorch using just the AnimalAIEnvironment class, so moving the model back into the Unity editor is a bit difficult), and I wish to view the environment from either a bird's-eye or third-person view rather than through the agent's camera. Is it possible to switch the camera during inference like you can in play mode? This might also be nice to have available in the environment state space in the future.

    I see it's possible to switch the camera view in the 'Play and Watch' demo on the competition website. Is the code for that available?

    Thank you in advance!

    opened by anjagjerpe 1
  • Change arena with reset function not working


    Hi,

    After upgrading from version 2.2.1 to 3.0.1, I can no longer reset with a new arena config; it only resets to the previous environment. I've tried a clean install of Animal-AI but the issue persists. I can still create an environment without an arena config and then supply one via reset later on.

    Is this intended behaviour or a bug?

    I’m using the Linux version of the environment.

    opened by anjagjerpe 0
  • Curriculum learning over AAI 3.0?


    Hi,

    I have another technical question rather than an issue, as I'm planning to use AAI 3.0 for some experiments for my master's thesis. I have already played a bit with AAI 2.0, in which curriculum learning was easily implemented, inherited from ML-Agents with some variations (external yml files for the curriculum, etc.). As I have not seen a similar example for AAI 3.0, I wonder how I could implement this following a similar philosophy.

    Many thanks!

    opened by ibagur 0
  • Documentation for AnimalAIEnvironment Class


    I am trying to spawn an environment and test/implement my own networks.

    Following this, there are many parameters for AnimalAIEnvironment. I have no idea what each parameter does, and I cannot find any detailed documentation. It would be a great help if this were documented.

    opened by ehddnr747 3
Owner
Matthew Crosby