Repository to run object detection with a model trained on an autonomous driving dataset.

Overview

Autonomous Driving Object Detection on the Raspberry Pi 4

Description of Repository

This repository contains code and instructions to configure the necessary hardware and software for running autonomous driving object detection on the Raspberry Pi 4!

Details of Software and Neural Network Model for Object Detection:

  • Language: Python
  • Framework: TensorFlow Lite
  • Network: SSD MobileNet-V2
  • Training Dataset: Berkeley DeepDrive (BDD100K)
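
For reference, running inference with such a model follows the standard TFLite interpreter pattern. Below is a minimal sketch, not the project's actual detection loop; the filename detect.tflite inside the model directory is an assumption:

# Minimal TFLite inference sketch (illustrative; the model filename is an assumption)
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="TFLite_model_bbd/detect.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# SSD MobileNet-V2 takes a fixed-size image batch, e.g. (1, 300, 300, 3)
height, width = input_details[0]['shape'][1:3]
dummy_frame = np.zeros((1, height, width, 3), dtype=input_details[0]['dtype'])

interpreter.set_tensor(input_details[0]['index'], dummy_frame)
interpreter.invoke()
boxes = interpreter.get_tensor(output_details[0]['index'])  # detection boxes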

Motivation for the Project

The goal of this project was to train a neural network to detect objects on the road that an autonomous vehicle would see (e.g., bus, traffic light, traffic sign, person, bike, truck, motor, car, train, rider), and then to test the trained network on lightweight hardware (i.e., a Raspberry Pi 4) to see how it performs in terms of processing speed and detection accuracy.

Additional Resources

Source

Reference for Source Code for the Project: https://github.com/EdjeElectronics/TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi/blob/master/Raspberry_Pi_Guide.md

Special thanks to Evan from EdjeElectronics for the instructions and the majority of the code used in this project! :)

Results

Vehicle Testing Configuration

Core

  • Raspberry Pi 4 (4 GB)
  • Raspberry Pi 5MP Camera (rev 1.3)

Other

  • LED
  • 470 Ohm Resistor
  • Small breadboard
  • GPIO push button
  • 3.5 Amp USB-C Power Supply

This tissue box setup isn't the greatest, but it's what I used to mount the Pi on the dashboard of my car. I then powered the Pi from a USB-C cable plugged into the AC outlet of my car while I drove around to record and process footage.

Issues

1.) If you get the following error when trying to run the program:

ImportError: No module named cv2

The software setup steps should install OpenCV, but installing it on the Raspberry Pi can sometimes be finicky. If it fails, try this tutorial to build and install OpenCV: https://pimylifeup.com/raspberry-pi-opencv/
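
Once OpenCV is installed, you can quickly verify that it imports correctly from inside the virtual environment:

python3 -c "import cv2; print(cv2.__version__)"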

Setting Up Software

1.) Clone Repository:

git clone https://github.com/ecd1012/rpi_road_object_detection.git

2.) Change directory to source code:

cd rpi_road_object_detection

3.) Open a command prompt and make sure the Pi is up to date:

sudo apt-get update && sudo apt-get upgrade

4.) Install virtual environment:

sudo pip3 install virtualenv

5.) Make virtual environment:

python3.7 -m venv TFLite-venv

6.) Activate Environment:

source TFLite-venv/bin/activate

7.) Install the dependencies:

bash get_py_requirements.sh

8.) Make sure the camera module is enabled:

sudo raspi-config

9.) Go to Interface Options and make sure the Pi Camera is enabled.

Setting Up Hardware

10.) Connect a push button to GPIO pin 17. This will be used as input.

Help: https://www.youtube.com/watch?v=BWYy3qZ315U&ab_channel=O%27Reilly
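
If you want to sanity-check the button wiring before running the main script, a minimal RPi.GPIO test like the one below will print a message on each press. This is an illustrative sketch; it assumes the button pulls GPIO 17 low when pressed, which may differ from the project's wiring:

# Quick button test (illustrative; assumes the button pulls GPIO 17 low)
import time
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)  # use Broadcom (BCM) pin numbering
GPIO.setup(17, GPIO.IN, pull_up_down=GPIO.PUD_UP)

try:
    while True:
        if GPIO.input(17) == GPIO.LOW:  # button pressed
            print("Button pressed!")
            time.sleep(0.2)  # crude debounce
finally:
    GPIO.cleanup()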

11.) Connect an LED to GPIO pin 4. This LED will turn on to indicate when the program is running. Make sure you use a resistor with the LED!

Help: https://www.youtube.com/watch?v=3TDJ4FmtGgk&ab_channel=O%27Reilly
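
A similar sketch can confirm the LED wiring (again illustrative, assuming the LED is wired through the resistor to GPIO 4):

# Quick LED blink test (illustrative)
import time
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)
GPIO.setup(4, GPIO.OUT)

try:
    for _ in range(5):  # blink five times
        GPIO.output(4, GPIO.HIGH)
        time.sleep(0.5)
        GPIO.output(4, GPIO.LOW)
        time.sleep(0.5)
finally:
    GPIO.cleanup()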

12.) Connect Pi Camera Module to Raspberry Pi. Help: https://www.youtube.com/watch?v=0hrF8Wq8SSQ&ab_channel=BINARYUPDATES
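
To confirm the camera is detected before moving on, the stock Raspberry Pi OS tools of this era offer a quick check:

vcgencmd get_camera     # should report supported=1 detected=1
raspistill -o test.jpg  # captures a still image to test.jpg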

Running Detection

13.) After all your hardware and software are configured correctly, run the following command:

python TFLite_detection_webcam_loop.py --modeldir=TFLite_model_bbd --output_path=processed_images

The --output_path you specify is where processed images will be saved.

14.) The script will start running and wait for you to press the GPIO input button to start processing the video feed from the camera. Once you press the button, the green LED will turn on and the Pi will start feeding the video stream through the neural network. Processed images will be saved to the --output_path you specified on the command line.
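
In outline, the behavior described above follows a simple wait-then-loop pattern. The sketch below is a simplified illustration of that pattern, not the actual contents of TFLite_detection_webcam_loop.py, and the helper functions are hypothetical:

# Simplified control-flow sketch (illustrative; helpers are hypothetical)
import RPi.GPIO as GPIO

BUTTON_PIN, LED_PIN = 17, 4
GPIO.setmode(GPIO.BCM)
GPIO.setup(BUTTON_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.setup(LED_PIN, GPIO.OUT)

# Block until the operator presses the button
while GPIO.input(BUTTON_PIN) == GPIO.HIGH:
    pass

GPIO.output(LED_PIN, GPIO.HIGH)  # LED on: processing has started
while True:
    frame = capture_frame()                  # hypothetical: grab a camera frame
    detections = run_inference(frame)        # hypothetical: TFLite forward pass
    save_annotated_image(frame, detections)  # hypothetical: write to --output_path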

15.) If you like, make a video out of the images. You can do this with gif-making software, video-editing software, or ffmpeg. Help: https://stackoverflow.com/questions/24961127/how-to-create-a-video-from-images-with-ffmpeg
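
For example, assuming the saved frames are numbered sequentially like image0001.jpg (the exact filename pattern depends on the script's output), ffmpeg can stitch them into a video:

ffmpeg -framerate 10 -i processed_images/image%04d.jpg -c:v libx264 -pix_fmt yuv420p output.mp4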

16.) Enjoy!! :)

Running on Boot

17.) If you want to run the Python script on boot, do the following:

nano ~/.bashrc

And add the following to the end of your .bashrc (this runs on every interactive login, so it behaves like running on boot if your Pi is set to log in automatically):

#Change directories to where you cloned the repo
cd ~/rpi_road_object_detection
source TFLite-venv/bin/activate
python TFLite_detection_webcam_loop.py --modeldir=TFLite_model_bbd --output_path=processed_images

Then press CTRL+X, then Y, then Enter to save.

Owner

Ethan
Personal Site: https://ethandell.tech/