PPO-based Autonomous Navigation for Quadcopters

Deep Reinforcement Learning based autonomous navigation for quadcopters using the PPO algorithm.

This repository contains an implementation of Proximal Policy Optimization (PPO) for autonomous navigation of a quadcopter in a corridor environment. Every 4 meters, a block with a circular opening stands in the corridor, and the agent is expected to fly through these openings without colliding with the blocks. This project currently runs only on Windows, since the Unreal environments were packaged for Windows.

🛠️ Libraries & Tools

Python · PyTorch · Stable-Baselines3 · OpenAI Gym · AirSim · Unreal Engine

Overview

The training environment has 9 sections with different textures and hole positions. At the start of each episode, the agent spawns in one of these sections at random, and its exact starting point is also randomized within a specific region of the yz-plane.
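As an illustration only, the randomized start described above could be drawn as below. The 9 sections come from this README; the region bounds and variable names are assumptions, not the repository's actual code.

# Illustrative sketch of the randomized episode start (bounds are assumed).
import numpy as np

rng = np.random.default_rng()
section = rng.integers(0, 9)   # pick one of the 9 training sections
y = rng.uniform(-2.0, 2.0)     # hypothetical bounds of the start region
z = rng.uniform(-1.5, -0.5)    # AirSim uses NED coordinates: negative z is up
print(f"spawn in section {section} at (y={y:.2f}, z={z:.2f})")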

Observation Space

  • The state is an RGB image captured by the agent's front camera.
  • Image shape: 50 x 50 x 3

Action Space

  • There are 9 discrete actions (see the sketch below).
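As a minimal sketch, the two spaces above map onto the standard Gym interface as follows; the class name is hypothetical, and only the image shape and action count are taken from this README.

# Sketch only: the observation/action spaces described above, in Gym terms.
import gym
import numpy as np
from gym import spaces

class CorridorEnvSketch(gym.Env):
    def __init__(self):
        # 50 x 50 RGB image from the front camera
        self.observation_space = spaces.Box(
            low=0, high=255, shape=(50, 50, 3), dtype=np.uint8
        )
        # 9 discrete actions
        self.action_space = spaces.Discrete(9)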

Environment setup to run the code

#️⃣ 1. Clone the repository

git clone https://github.com/bilalkabas/PPO-based-Autonomous-Navigation-for-Quadcopters

#️⃣ 2. From Anaconda command prompt, create a new conda environment

I recommend using Anaconda or Miniconda to create a virtual environment.

conda create -n ppo_drone python==3.8

#️⃣ 3. Install required libraries

Inside the main directory of the repo:

conda activate ppo_drone
pip install -r requirements.txt

#️⃣ 4. (Optional) Install PyTorch for GPU

You must have a CUDA-supported NVIDIA GPU.

For this project, I used CUDA 11.0 and the following conda command to install PyTorch:

conda install pytorch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 cudatoolkit=11.0 -c pytorch
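After installation, you can quickly check that the CUDA build is actually picked up (standard PyTorch API):

# Verify the GPU build of PyTorch is visible from the new environment.
import torch

print(torch.__version__)          # expect 1.7.1
print(torch.cuda.is_available())  # True if the CUDA build matches your driver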

#️⃣ 5. Edit settings.json

The settings.json file is located in the Documents\AirSim folder. Its content should be as below:

{
    "SettingsVersion": 1.2,
    "LocalHostIp": "127.0.0.1",
    "SimMode": "Multirotor",
    "ClockSpeed": 20,
    "ViewMode": "SpringArmChase",
    "Vehicles": {
        "drone0": {
            "VehicleType": "SimpleFlight",
            "X": 0.0,
            "Y": 0.0,
            "Z": 0.0,
            "Yaw": 0.0
        }
    },
    "CameraDefaults": {
        "CaptureSettings": [
            {
                "ImageType": 0,
                "Width": 50,
                "Height": 50,
                "FOV_Degrees": 120
            }
        ]
    }
}
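Once a packaged environment executable is running, the connection can be sanity-checked with the AirSim Python client. The snippet below uses the client's public API but is not part of this repository.

# Sanity check: connect to the simulator on the IP set in settings.json.
import airsim

client = airsim.MultirotorClient(ip="127.0.0.1")
client.confirmConnection()  # pings the simulator and prints version info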

How to run the training?

Make sure you have followed the instructions above to set up the environment.

#️⃣ 1. Download the training environment

Go to the releases and download TrainEnv.zip. After the download completes, extract it.

#️⃣ 2. Now, you can open up the environment's executable file and start the training

Then, inside the repository, run:

python main.py
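For orientation, the training entry point boils down to something like the following Stable-Baselines3 call. The env id, policy type, and save path below are assumptions, not this repository's actual code.

# Rough sketch of PPO training with Stable-Baselines3.
import gym
from stable_baselines3 import PPO

env = gym.make("airsim-env-v0")           # hypothetical registered env id
model = PPO("CnnPolicy", env, verbose=1)  # CNN policy suits image observations
model.learn(total_timesteps=280_000)      # the released policy used 280k steps
model.save("saved_policy/ppo_navigation") # assumed save location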

How to run the pretrained model?

Make sure you have followed the instructions above to set up the environment. Note that the simulation runs at 20x speed to accelerate training; you may want to set the "ClockSpeed" parameter in settings.json back to 1 for real-time playback.

#️⃣ 1. Download the test environment

Go to the releases and download TestEnv.zip. After the download completes, extract it.

#️⃣ 2. Now, you can open up the environment's executable file and run the trained model

Then, inside the repository, run:

python policy_run.py
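Roughly, policy_run.py loads the saved policy and runs it greedily, as in the hedged sketch below; the env id and file name are assumptions.

# Sketch: load the saved SB3 policy and roll out one episode.
import gym
from stable_baselines3 import PPO

env = gym.make("airsim-test-env-v0")             # hypothetical env id
model = PPO.load("saved_policy/ppo_navigation")  # assumed file name

obs = env.reset()
done = False
while not done:
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)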

Training results

The trained model in the saved_policy folder was trained for 280k steps.


Test results

The test environment has textures and hole positions different from those of the training environment. Over 100 episodes, the trained model travels 17.5 m and passes through 4 holes on average without any collision. At best, the agent passes through 9 holes in the test environment without colliding.

Author

Bilal Kabas

License

This project is licensed under the GNU Affero General Public License.
