An OpenAI Gym environment for multi-agent car racing based on Gym's original car racing environment.

Overview

Multi-Car Racing Gym Environment

This repository contains MultiCarRacing-v0, a multiplayer variant of Gym's original CarRacing-v0 environment.

This environment is a simple multi-player continuous control task. The state consists of 96x96 pixels for each player. The per-player reward is -0.1 every timestep and +1000/num_tiles * (num_agents - past_visitors)/num_agents for each tile visited. For example, in a race with 2 agents, the first agent to visit a tile receives a reward of +1000/num_tiles and the second agent to visit that tile receives a reward of +500/num_tiles. Each agent can only be rewarded once for visiting a particular tile. This reward structure is meant to be dense enough for the basic driving skill to be easily learnable while still incentivizing competition.
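
For concreteness, the per-tile reward can be expressed as a small helper function. This is only an illustrative sketch: tile_reward and the 300-tile track length are made up for the example, and the actual tile bookkeeping happens inside the environment.

def tile_reward(num_tiles, num_agents, past_visitors):
    # Reward for visiting a tile that `past_visitors` agents have already visited.
    return 1000.0 / num_tiles * (num_agents - past_visitors) / num_agents

# Example: a 2-agent race on a hypothetical 300-tile track.
first_visit = tile_reward(num_tiles=300, num_agents=2, past_visitors=0)   # +1000/300 per tile
second_visit = tile_reward(num_tiles=300, num_agents=2, past_visitors=1)  # +500/300 per tile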

Installation

git clone https://github.com/igilitschenski/multi_car_racing.git
cd multi_car_racing
pip install -e .

Basic Usage

After installation, the environment can be tried out by running:

python -m gym_multi_car_racing.multi_car_racing

This will launch a two-player variant (each player in its own window) that can be controlled via the keyboard (player 1 via arrow keys and player 2 via W, A, S, D).

Let's quickly walk through how this environment can be used in your code:

import gym
import gym_multi_car_racing

env = gym.make("MultiCarRacing-v0", num_agents=2, direction='CCW',
        use_random_direction=True, backwards_flag=True, h_ratio=0.25,
        use_ego_color=False)

obs = env.reset()
done = False
total_reward = 0

while not done:
  # The actions have to be of shape (num_agents, 3).
  # The action format for each car is as in the CarRacing-v0 environment.
  action = my_policy(obs)

  # Similarly, the structure of this is the same as in CarRacing-v0 with an
  # additional dimension for the different agents, i.e.
  # obs is of shape (num_agents, 96, 96, 3)
  # reward is of shape (num_agents,)
  # done is a bool and info is not used (an empty dict).
  obs, reward, done, info = env.step(action)
  total_reward += reward
  env.render()

print("individual scores:", total_reward)

Overview of environment parameters:

Parameter            | Type  | Description
---------------------|-------|------------
num_agents           | int   | Number of agents in the environment (Default: 2)
direction            | str   | Winding direction of the track. Can be 'CW' or 'CCW' (Default: 'CCW')
use_random_direction | bool  | Randomize the winding direction of the track. Disregards direction if enabled (Default: True)
backwards_flag       | bool  | Show a small flag if an agent is driving backwards (Default: True)
h_ratio              | float | Controls the horizontal agent location in the state (Default: 0.25)
use_ego_color        | bool  | If activated, the ego vehicle has the same color in each view (Default: False)

This environment contains the CarRacing-v0 environment as a special case. It can be created via

env = gym.make("MultiCarRacing-v0", num_agents=1, use_random_direction=False, 
        backwards_flag=False)
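
In this configuration, the shapes documented above simply reduce to a single agent. A quick sanity check (a sketch only; it assumes numpy is available and reuses the env created above):

import numpy as np

obs = env.reset()
assert obs.shape == (1, 96, 96, 3)                    # one 96x96 RGB view
obs, reward, done, info = env.step(np.zeros((1, 3)))  # [steering, gas, brake] for the single car
assert reward.shape == (1,)                           # one reward entry per agent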

Deprecation Warning: We might further simplify the environment in the future. Our current thoughts on deprecation concern the following functionalities.

  • The direction-related arguments (use_random_direction & direction) were initially added to make driving fairer when the agents' spawning locations were fixed. We have since resolved this unfairness by randomizing the agents' start positions instead.
  • The impact of backwards_flag seems to be very small in practice.
  • Similarly, it was interesting to experiment with placing the agent at different horizontal locations of the observation (via h_ratio), but the default from CarRacing-v0 ended up working well.
  • The environment also contains some (currently inactive) code for penalizing driving backwards. We were worried that agents might drive backwards to claim more tiles first, but this turned out not to be necessary for successful learning.

We are interested in any feedback regarding these planned deprecations.

Citation

If you find this environment useful, please cite our CoRL 2020 paper:

@inproceedings{SSG2020,
    title={Deep Latent Competition: Learning to Race Using Visual
      Control Policies in Latent Space},
    author={Wilko Schwarting and Tim Seyde and Igor Gilitschenski
      and Lucas Liebenwein and Ryan Sander and Sertac Karaman and Daniela Rus},
    booktitle={Conference on Robot Learning},
    year={2020}
}

Comments
  • Questions about the Behavior Learning of the DLC algorithm and `use_ego_color` in this repo

    Thanks for sharing this competitive environment.

    I am trying to implement the Deep Latent Competition algorithm, and I find that the description of Behavior Learning might be kind of confusing.

    This process leverages imagined trajectories (s1,s2,a1,a2) over a horizon of H starting from each timestep t within the sampled batch sequence, where opponent behavior is estimated by querying the action model with predicted latent viewpoints. The resulting algorithm optimizes policies by back-propagating analytic value gradients of imagined self-play trajectories through the learned multi-agent world model.

    Figure 2(b) seems to suggest that the behavior optimization is conducted on the true states s1, s2 (starting from the ground-truth observations of both agents) and on actions a1, a2 derived from these states and the policy. I was wondering where the estimated opponent behavior and the predicted latent viewpoints are used in this Behavior Learning process. It seems that the encoders are not involved in the middle of the imagined trajectories, and the predicted latent viewpoints are not involved either.

    Besides, have you evaluated DLC with use_ego_color=True? With use_ego_color=True, it seems that both agents can share the encoder model, observation model, and reward model. In contrast, which models can be shared by the two agents when use_ego_color=False? Do the two agents use independent encoder models, observation models, and reward models in that case?

    Thank you.

    question · opened by lkwq007
  • Issue with starting and simulation

    Hello, I'm learning reinforcement learning and would like to run a multi-agent car racing RL simulation using your code.

    It seems that your code is well-constructed and easy to understand, but I'm new here, so I have a few questions.

    1. To run the multi-agent car racing simulation with any RL code, is it okay to just place the RL code in the same folder that multi_car_racing.py is in?

    2. To connect the environment you provide with my "my_policy" code (e.g., action = my_policy(obs) in README.md), how can I find out the shape of "env" and build the RL code?

    Any kind of advice will be appreciated. Thanks in advance,

    Woosuk

    question · opened by woosuk1103