MS in Data Science capstone project. Studying attacks on autonomous vehicles.

Overview

Surveying Attack Models for CAVs

Guide to Installing CARLA and Collecting Data

Our project focuses on surveying attack models for Connected Autonomous Vehicles (CAVs). The primary tool we will be using throughout the project is CARLA, a vehicle simulation platform. This document serves as a guide to getting CARLA running on your system and showcases a script we adapted from the original CARLA installation. It closely follows the quick installation instructions from the CARLA Leaderboard challenge.

Requirements

CARLA runs best on Ubuntu 18.04 and on Windows. The following requirements are for an Ubuntu system:

  • Python 3
  • Anaconda
  • pip installer
  • GPU with at least 6 GB of memory
  • 20 GB of disk space

Installation

  1. Download the CARLA 0.9.10.1 release found here and unzip the package into a folder named CARLA.

  2. Clone the leaderboard repository:

    git clone -b stable --single-branch https://github.com/carla-simulator/leaderboard.git

  3. Switch into the leaderboard directory and run the following:

    pip3 install -r requirements.txt

  4. Clone the scenario_runner repository:

    git clone -b leaderboard --single-branch https://github.com/carla-simulator/scenario_runner.git

  5. cd into the scenario runner directory and run the following:

    pip3 install -r requirements.txt

  6. Next, the environment variables need to be defined. Open a fresh terminal and edit your bash profile:

    gedit ~/.bashrc

  7. Add the following to the bash profile:

    export CARLA_ROOT=PATH_TO_CARLA_ROOT
    export SCENARIO_RUNNER_ROOT=PATH_TO_SCENARIO_RUNNER
    export LEADERBOARD_ROOT=PATH_TO_LEADERBOARD
    export PYTHONPATH="${CARLA_ROOT}/PythonAPI/carla/":"${SCENARIO_RUNNER_ROOT}":"${LEADERBOARD_ROOT}":"${CARLA_ROOT}/PythonAPI/carla/dist/carla-0.9.10-py3.7-linux-x86_64.egg":${PYTHONPATH}

  8. Source the profile so the changes take effect:

    source ~/.bashrc
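
As a quick sanity check (assuming the Python interpreter you run matches the py3.7 egg referenced above), the CARLA package should now be importable:

    # confirm the CARLA egg on PYTHONPATH is importable
    import carla
    print(carla.__file__)  # should point into CARLA_ROOT/PythonAPI/carla/dist/...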

Running CARLA

To run CARLA, open a terminal, change into your CARLA root directory, and run the following:

    ./CarlaUE4.sh

This should open the CARLA simulator window with the default town loaded.
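
With the simulator running, a quick way to confirm that the Python API can talk to the server is a minimal client script like the sketch below (assuming the server is listening on the default port 2000):

    # connect_check.py - minimal sketch: verify the CARLA server is reachable
    import carla

    client = carla.Client('localhost', 2000)  # default host/port for CarlaUE4.sh
    client.set_timeout(10.0)                  # seconds to wait for the server
    world = client.get_world()
    print('Connected to map:', world.get_map().name)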

Collecting Data

For our project, we are primarily concerned with collecting LiDAR, radar, camera, and GPS data. Thus far, we have a basic test Python script that logs the latitude and longitude of a vehicle placed in the environment.

The test script can be found under tutorials/alana_test.py. This script should be placed under your CARLA root directory in PythonAPI.
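
For reference, the core idea behind such a script is to attach a GNSS sensor to a spawned vehicle and log its measurements. The sketch below illustrates this; it is not the exact alana_test.py, and the spawn point, logging duration, and output path are illustrative:

    # gnss_log_sketch.py - illustrative sketch of logging GNSS data to a CSV
    import csv
    import time

    import carla

    client = carla.Client('localhost', 2000)
    client.set_timeout(10.0)
    world = client.get_world()
    blueprints = world.get_blueprint_library()

    # spawn a vehicle at the first recommended spawn point (illustrative choice)
    vehicle_bp = blueprints.filter('vehicle.*')[0]
    spawn_point = world.get_map().get_spawn_points()[0]
    vehicle = world.spawn_actor(vehicle_bp, spawn_point)
    vehicle.set_autopilot(True)

    # attach a GNSS sensor to the vehicle and buffer its measurements
    gnss_bp = blueprints.find('sensor.other.gnss')
    gnss = world.spawn_actor(gnss_bp, carla.Transform(), attach_to=vehicle)
    rows = []
    gnss.listen(lambda m: rows.append(
        [m.timestamp, m.latitude, m.longitude, m.altitude]))

    try:
        time.sleep(30)  # let the sensor collect measurements for a while
    finally:
        gnss.stop()
        gnss.destroy()
        vehicle.destroy()
        with open('outputgnss.csv', 'w', newline='') as f:
            writer = csv.writer(f)
            writer.writerow(['time', 'latitude', 'longitude', 'altitude'])
            writer.writerows(rows)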

The output is a .csv file, a sample of which can be found under tutorials/outputgnss.csv. It should look like this:

    time                 latitude               longitude              altitude
    8.076389706286136    0.001497075674478765   0.0013884266232332388  2.5777945518493652
    12.463837534713093   0.0015216552946100137  0.0013888545621243858  2.003244638442993
    16.73477230645949    0.0015536632594717048  0.0013894208066981615  1.2948381900787354
    21.952862125413958   0.001598166573884896   0.0013902203478743502  0.5753694176673889
    27.885887853393797   0.001605972963332647   0.001390361257925631   0.47810444235801697
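
Once logged, the data can be loaded like any other CSV for downstream analysis, for example with pandas (assuming the file is comma-separated with the header shown above):

    # inspect the logged GNSS samples (path as in the sample above)
    import pandas as pd

    gnss = pd.read_csv('tutorials/outputgnss.csv')
    print(gnss.describe())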