Gym-TORCS: a reinforcement learning (RL) environment for the TORCS racing simulator with an OpenAI-Gym-like interface.

Overview

Gym-TORCS

Gym-TORCS is a reinforcement learning (RL) environment in the TORCS domain with an OpenAI-Gym-like interface. TORCS is an open-source, realistic car racing simulator that has recently been used as an RL benchmark task in several AI studies.

Gym-TORCS is a Python wrapper around TORCS for RL experiments, with a simple interface that is similar to, but not fully compatible with, OpenAI Gym environments. The current implementation supports only a single-track race in practice mode. If you want to use multiple tracks or other racing modes (quick race, etc.), you may need to modify the environment, "autostart.sh", or the race configuration file via the TORCS GUI.
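Because the interface is only Gym-like, code written against the standard Gym API may need a thin adapter. Below is a minimal, hypothetical sketch (not part of this repository) that maps the standard Gym method names onto the TorcsEnv calls shown in the Simple How-To section below; the class name and defaults are illustrative only.

# Hypothetical adapter sketch: forwards standard Gym-style calls to TorcsEnv.
from gym_torcs import TorcsEnv

class TorcsGymAdapter:
    def __init__(self, vision=False, throttle=True):
        self.env = TorcsEnv(vision=vision, throttle=throttle)

    def reset(self, relaunch=False):
        # TorcsEnv.reset() takes a relaunch flag (see the note on the memory leak below).
        return self.env.reset(relaunch=relaunch)

    def step(self, action):
        # Same 4-tuple convention as OpenAI Gym: observation, reward, done, info.
        return self.env.step(action)

    def close(self):
        # Gym uses close(); Gym-TORCS shuts the simulator down with end().
        self.env.end()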

This code was developed based on vtorcs (https://github.com/giuse/vtorcs) and the python-client for TORCS (http://xed.ch/project/snakeoil/index.html).

A detailed explanation of the original TORCS setup for AI research is given by Daniele Loiacono et al. (https://arxiv.org/pdf/1304.1672.pdf).

TORCS has a memory leak bug that is triggered at race reset. As an ad-hoc workaround, we relaunch TORCS and automate the GUI setup. Any better solution is welcome!
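In practice this means passing relaunch=True to env.reset() every so often. A minimal sketch of the pattern (the interval of 3 episodes is illustrative, not a recommendation):

from gym_torcs import TorcsEnv

env = TorcsEnv(vision=False, throttle=True)
for episode in range(9):
    # Relaunch TORCS periodically so the leaked memory is reclaimed.
    if episode % 3 == 0:
        ob = env.reset(relaunch=True)
    else:
        ob = env.reset()
    # ... run the episode here ...
env.end()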

Requirements

We assume you are using an Ubuntu 14.04 LTS / 16.04 LTS machine and have installed the required dependencies, including vtorcs-RL-color (see the initialization section below).

Example Code

The example code and agent are provided in example_experiment.py and sample_agent.py.

Initialization of the Race

After the installation of vtorcs-RL-color, you need to initialize the race settings. You can find a detailed explanation in the document above (https://arxiv.org/pdf/1304.1672.pdf), but here we show a simple GUI-based setup.

First, you need to run

sudo torcs

in a terminal; the TORCS GUI should launch. Then choose the race track via the GUI (Race --> Practice --> Configure Race) and start the TORCS server by selecting Race --> Practice --> New Race. TORCS should then show a blue screen with several lines of text information.

If your AI agent needs to handle vision input, you have to set a small image size in TORCS. To do so, run

python snakeoil3_gym.py

in a second terminal window after opening the TORCS server (as described above). The race then starts, and you can switch to the driving-window mode by pressing the F2 key during the race.

After selecting the driving-window mode, you need to set the appropriate GUI size. This is done via Options --> Display, where you can select the Screen Resolution; choose 64x64 for visual input (our implementation only supports this screen size; other screen sizes produce unusable visual information). Then shut down TORCS to complete the configuration for vision input.

Simple How-To

from gym_torcs import TorcsEnv

#### Generate a Torcs environment
# enable vision input, the action is steering only (1 dim continuous action)
env = TorcsEnv(vision=True, throttle=False)

# without vision input, the action is steering and throttle (2 dim continuous action)
# env = TorcsEnv(vision=False, throttle=True)

ob = env.reset(relaunch=True)  # with torcs relaunch (avoid memory leak bug in torcs)
# ob = env.reset()  # without torcs relaunch

# Generate an agent
from sample_agent import Agent
agent = Agent(1)  # steering only

reward, done = 0.0, False  # placeholder values before the first step
action = agent.act(ob, reward, done, vision=True)

# single step
ob, reward, done, _ = env.step(action)

# shut down torcs
env.end()
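
Putting these calls together, a minimal episode loop could look like the sketch below. It only assumes the API shown above; the step limit is illustrative.

from gym_torcs import TorcsEnv
from sample_agent import Agent

env = TorcsEnv(vision=True, throttle=False)
agent = Agent(1)  # steering only
ob = env.reset(relaunch=True)
reward, done = 0.0, False

for step in range(1000):  # illustrative step limit
    action = agent.act(ob, reward, done, vision=True)
    ob, reward, done, _ = env.step(action)
    if done:
        break

env.end()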

Add Noise in Low-dim Sensors

If you want to apply sensor noise to the low-dimensional sensors, use

os.system('torcs -nofuel -nodamage -nolaptime -vision -noisy &')
os.system('torcs -nofuel -nolaptime -noisy &')

at lines 33 and 35 of gym_torcs.py.
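
For reference, a sketch of what the edit looks like; the original (non-noisy) commands are assumed to be the same launch lines without the -noisy flag, and the exact line numbers may differ between versions of gym_torcs.py.

# before (assumed original launch commands, no sensor noise):
os.system('torcs -nofuel -nodamage -nolaptime -vision &')
os.system('torcs -nofuel -nolaptime &')

# after (sensor noise enabled):
os.system('torcs -nofuel -nodamage -nolaptime -vision -noisy &')
os.system('torcs -nofuel -nolaptime -noisy &')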

Great Application

Gym-TORCS was used in a DDPG experiment with Keras by Ben Lau. This experiment is really great!

https://yanpanlau.github.io/2016/10/11/Torcs-Keras.html

Acknowledgement

gym_torcs was developed during a 2016 spring internship at Preferred Networks.

Comments
  • Can't initialize properly - fopen (config/graph.xml, "wb") failed

    Hi,

    I am trying to get gym_torcs to work on my Ubuntu 14.04 machine. When I execute example_experiment.py, I get the following error:

    gfParmSetStr: fopen (config/graph.xml, "wb") failed
    

    Full log:

    Fuel consumption disabled!
    Car damages disabled!
    Laptime limit disabled!
    Image generation is ON!
    Visual Properties Report
    ------------------------
    Compatibility mode, properties unknown.
    Waiting for request on port 3101
    TORCS Experiment Start.
    Episode : 0
    Client connected on 3101..............
    OpenAL backend info:
      Vendor: OpenAL Community
      Renderer: OpenAL Soft
      Version: 1.1 ALSOFT 1.14
      Available sources: 256
      Available buffers: 1024 or more
      Dynamic Sources: requested: 235, created: 235
      #static sources: 21
      #dyn sources   : 235
    gfParmSetStr: fopen (config/graph.xml, "wb") failed
    sw 64 - sh 64 - vw 64 - vh 64 - imgsize 12288
    Traceback (most recent call last):
      File "example_experiment.py", line 26, in <module>
        ob = env.reset(relaunch=True)
      File "/home/pavitrakumar/Desktop/gym_torcs-master/gym_torcs.py", line 192, in reset
        self.observation = self.make_observaton(obs)
      File "/home/pavitrakumar/Desktop/gym_torcs-master/gym_torcs.py", line 272, in make_observaton
        image_rgb = self.obs_vision_to_image_rgb(raw_obs[names[8]])
      File "/home/pavitrakumar/Desktop/gym_torcs-master/gym_torcs.py", line 242, in obs_vision_to_image_rgb
        return np.array([r, g, b], dtype=np.uint8)
    TypeError: long() argument must be a string or a number, not 'NoneType'
    pavitrakumar@pavitrakumar-PC:~/Desktop/gym_torcs-master$ Timeout for client answer
    Timeout for client answer
    Timeout for client answer
    Timeout for client answer
    Timeout for client answer
    Timeout for client answer
    Timeout for client answer
    Timeout for client answer
    Timeout for client answer
    Timeout for client answer
    Timeout for client answer
    Timeout for client answer
    Timeout for client answer
    .
    .
    .
    

    I am trying this on python 2.x with the necessary imports to make it work (print and division), but I am not sure where the requested file (graph.xml) is located.

    opened by pavitrakumar78 11
  • torcs restarting again and again

    gym_torcs git:(master) ✗ python snakeoil3_gym.py
    Waiting for server on 3101............
    Count Down : 4
    Waiting for server on 3101............
    Count Down : 3
    Waiting for server on 3101............
    Count Down : 2
    Waiting for server on 3101............
    Count Down : 1
    Waiting for server on 3101............
    Count Down : 0
    Waiting for server on 3101............
    Count Down : -1
    relaunch torcs
    Visual Properties Report
    ------------------------
    Compatibility mode, properties unknown.
    WARNING: ssgLoadTexture: Cannot determine file type for './(null)'
    Waiting for server on 3101............
    Count Down : 4
    Waiting for server on 3101............
    Count Down : 3
    Waiting for server on 3101............
    Count Down : 2
    Waiting for server on 3101............
    Count Down : 1
    Waiting for server on 3101............
    Count Down : 0
    Waiting for server on 3101............
    Count Down : -1
    relaunch torcs
    Visual Properties Report
    ------------------------
    Compatibility mode, properties unknown.
    WARNING: ssgLoadTexture: Cannot determine file type for './(null)'
    
    opened by abhinavagarwal07 2
  • Running multiple gym_torcs experiments in parallel on the same machine

    Hi, I want to launch multiple gym_torcs environments on my machine so that I can run multiple simulations in parallel. Currently, when I try to launch an extra simulation, the previous one crashes. The problem is likely that the client and server always talk through port 3001. Is there a way to assign different ports to different instances of gym_torcs? Thank you, Anirban

    opened by Santara 2
  • fix argument to isnan() in simu.cpp

    Hi, I was trying to compile vtorcs-rgb on Ubuntu 16.04 but encountered an error about argument to the isnan() function. Apparently it expects a float, so I quickly tried casting car->ctrl-gear to float to avoid compile errors and it compiled just fine for me.

    opened by billyzs 2
  • libpng compilation fix

    Got compilation error on make:

    img.cpp: In function 'unsigned char* GfImgReadPng(const char*, int*, int*, float)':
    img.cpp:101:20: error: invalid use of incomplete type 'png_struct {aka struct png_struct_def}'
        if (setjmp(png_ptr->jmpbuf))
    In file included from img.cpp:31:0:
    /usr/include/png.h:595:16: error: forward declaration of 'png_struct {aka struct png_struct_def}'
        typedef struct png_struct_def png_struct;
    In file included from /usr/include/pngconf.h:72:0,
                     from /usr/include/png.h:485,
                     from img.cpp:31:
    img.cpp: In function 'int GfImgWritePng(unsigned char*, const char*, int, int)':
    img.cpp:232:20: error: invalid use of incomplete type 'png_struct {aka struct png_struct_def}'
        if (setjmp(png_ptr->jmpbuf)) {
    In file included from img.cpp:31:0:
    /usr/include/png.h:595:16: error: forward declaration of 'png_struct {aka struct png_struct_def}'
        typedef struct png_struct_def png_struct;

    Seems like new versions of libpng provide access to png_struct only via functions, fixed it. png2jpg.c contained the same code, so I replaced it.

    opened by NoNick 2
  • Error while 'making' Vtorcs

    Hi,

    When I execute ./configure in the vtorcs folder, the command runs smoothly. However, when I execute the next command, make, I get the errors documented in this pastebin link. Any idea why I get so many errors? I suspect that the Makefile is not well formatted.

    Thanks

    opened by sahiliitm 1
  • extract img is completely black in torcs

    I installed torcs according to the requirements and set the resolution to 64×64, but the training data image obtained after running the program is completely black

    opened by xkxiong 0
  • ALSA error when running snakeoil3_gym.py

    I have installed gym_torcs and opened the TORCS server, but the server closes with error messages after I run

    $ python snakeoil3_gym.py
    

    Error message is

    $ torcs -vision
    
    Image generation is ON!
    Visual Properties Report
    ------------------------
    Compatibility mode, properties unknown.
    Waiting for request on port 3101
    ALSA lib pcm_dmix.c:1052:(snd_pcm_dmix_open) unable to open slave
    AL lib: (EE) ALCplaybackAlsa_open: Could not open playback device 'default': No such file or directory
    terminate called after throwing an instance of 'char const*'
    /usr/local/bin/torcs: line 53:  4370 Aborted                 (core dumped) $LIBDIR/torcs-bin -l $LOCAL_CONF -L $LIBDIR -D $DATADIR $*
    

    Can I ask for some help with this problem? I am working on the remote server with OS Ubuntu 18.04.4 LTS and A5000 GPU.

    opened by wjh601 0
  • About image generation

    The generated image is only the lower-left corner of the entire window; I can't get image information for the whole interface. How can I solve this problem? The image I obtained is as follows: IMG032

    opened by chihuicong 0
  • freeglut (/usr/local/lib/torcs/torcs-bin):  ERROR:  Internal error <FBConfig with necessary capabilities not found> in function fgOpenWindow

    Hello, I get the following error when running the code; how can I solve it? Thank you. freeglut (/usr/local/lib/torcs/torcs-bin): ERROR: Internal error <FBConfig with necessary capabilities not found> in function fgOpenWindow

    opened by Dominique-github 0
  • Problem with make in vtorcs

    I was able to install gym-torcs without any problems the first time I tried, but I had to format my computer, and now I get the following error after I enter the make command in vtorcs-RL-color. I tried the fixes from other issues, but they don't seem to work.

    warning: ISO C++ forbids converting a string constant to 'char*' [-Wwrite-strings]
    g++ -shared -o libclient.so entry.o mainmenu.o splash.o exitmenu.o optionmenu.o -L/home/ekim/gym_torcs/vtorcs-RL-color/export/lib -lalut -L/usr/lib -lplibssg -lplibsg -lplibul
    /usr/bin/ld: /usr/lib/libplibssg.a(ssgBase.o): relocation R_X86_64_PC32 against symbol `_ZTV7ssgBase' can not be used when making a shared object; recompile with -fPIC
    /usr/bin/ld: final link failed: Bad value
    collect2: error: ld returned 1 exit status

    (I attached the entire log)

    error.txt

    opened by mekimvural 1
  • Putting ice on the Road.

    Good morning everyone. I am currently preparing a research project about reinforcement learning; specifically, I want to work on concept drift detection and handling. To that end, I need a machine learning problem with concept drift. My idea was to use TORCS as an environment: the learning algorithm would learn the central concept of gameplay, i.e. how to control the car. This, I already know, is possible. However, there is no concept drift in this problem, because the behavior of the car does not change while on the road. My idea is to introduce concept drift by having the road slowly ice over (not visually, of course, only in regard to the way the car behaves on the track). Is there any way to do this in TORCS? Thank you for your time.

    opened by OlivertheMattes 0