Autonomous Movement from Simultaneous Localization and Mapping

About us

Built by a group of Clarkson University students with help from Professor Masudul Imtiaz and his lab resources.

Micheal Caracciolo           - Sophomore, ECE Department
Owen Casciotti               - Senior, ECE Department
Chris Lloyd                  - Senior, ECE Department
Ernesto Sola-Thomas          - Freshman, ECE Department
Matthew Weaver               - Sophomore, ECE Department
Kyle Bielby                  - Senior, ECE Department
Md Abdul Baset Sarker        - Graduate Student, ECE Department
Tipu Sultan                  - Graduate Student, ME Department
Masudul Imtiaz               - Professor, Clarkson University ECE Department

This project began in January 2021 and was completed on May 5th, 2021.

Synopsis

This repository presents the development of a Simultaneous Localization and Mapping (SLAM) based autonomous navigation system.

Supported Devices:

Jetson AGX
Jetson Nano

Hardware:

Wheelchair
Jetson Development board
Any Arduino
Development computer to install the Jetson JetPack SDK (for AGX)
One Intel RealSense D415
One motor controller (we used a Sabertooth 2x32 Dual 32A motor driver; see the note below)
2x 12V batteries for the motors
2x 12V LiPo batteries for the Jetson

Software:

TensorFlow version: 2.3.1

OpenVSLAM

You will need to install several Python 3.8 packages. We recommend using a Conda environment, since that saves you from having to compile some packages from source. However, some packages are not available through Conda; install those with pip from inside the appropriate Conda environment. A quick sanity check you can run afterwards is sketched after the list below.

csv (standard library, no install needed)
heapq (standard library, no install needed)
Jetson.GPIO (can only be installed on a Jetson)
keyboard
matplotlib
msgpack
numpy
scipy (greater than 1.5.0)
signal (standard library, no install needed)
websockets
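
Before running anything in /src, a check along the following lines (our sketch, not part of the repository) can confirm that the pip/Conda packages above are importable inside the environment; csv, heapq, and signal ship with Python, and Jetson.GPIO only matters on a Jetson.

    # env_check.py -- a minimal sanity-check sketch; not part of the repository.
    # Verifies that the non-standard-library packages listed above are importable.
    import importlib

    REQUIRED = ["keyboard", "matplotlib", "msgpack", "numpy", "scipy", "websockets"]

    for name in REQUIRED:
        try:
            mod = importlib.import_module(name)
            print(f"{name}: OK ({getattr(mod, '__version__', 'version unknown')})")
        except ImportError:
            print(f"{name}: MISSING -- install it via conda or pip inside this environment")

    # The project expects scipy newer than 1.5.0.
    import scipy
    major, minor = (int(x) for x in scipy.__version__.split(".")[:2])
    print("scipy 1.5+ detected:", (major, minor) >= (1, 5))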

Initial Setup

OpenVSLAM, Official Documentation

Webserver, not needed unless you want to interface with a phone

  • Move the www folder into the /var directory of your root file system.
  • Open the Python server files and insert the static IP of your Jetson.
  • Run python server.py

Note: Some example data and maps are provided in CSV format. This format is required to correctly transmit maps/paths to the device listening to the server.
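
For reference, a server along these lines can be sketched with the websockets package. This is an illustration only, not the repository's server.py; the IP address, port, CSV path, and end-of-map marker below are all assumptions.

    # minimal_map_server.py -- illustrative sketch; not the repository's server.py.
    # Streams a CSV map/path file to any client (e.g. the phone) that connects.
    import asyncio
    import websockets

    JETSON_STATIC_IP = "192.168.1.50"           # assumption: your Jetson's static IP
    MAP_CSV_PATH = "/var/www/example_map.csv"   # assumption: one of the example CSV maps

    async def send_map(websocket, path=None):
        # Send the occupancy-grid CSV line by line to the connected client.
        with open(MAP_CSV_PATH) as f:
            for row in f:
                await websocket.send(row.strip())
        await websocket.send("END")  # assumed end-of-map marker

    async def main():
        async with websockets.serve(send_map, JETSON_STATIC_IP, 8765):
            await asyncio.Future()  # run until interrupted

    if __name__ == "__main__":
        asyncio.run(main())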

Android Phone, APK here

  • Insert the IP you want to connect to; in this instance, the static IP of the Jetson.
  • Build the Java app onto your Android phone.

Note: This can only be used if the webserver is set up and server.py is running. We recommend having it start on boot; this is not implemented in our current code, but it can easily be added. If you plan on using an Android phone as a map/path/end-point interface, you will need to edit some lines in /src/main.py and add to send_location.py. This code is all currently untested.
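
A send_location-style client could look roughly like the following. This is a sketch under assumed names: the server URI, port, and JSON message format are not taken from the repository, and the actual send_location.py may differ.

    # send_start_position.py -- illustrative sketch; not the repository's send_location.py.
    # Pushes the start position to the webserver and waits for the phone's chosen end point.
    import asyncio
    import json
    import websockets

    SERVER_URI = "ws://192.168.1.50:8765"  # assumption: Jetson static IP and port

    async def send_start_position(row, col):
        # (row, col) indexes into the occupancy grid map.
        async with websockets.connect(SERVER_URI) as ws:
            await ws.send(json.dumps({"type": "start", "row": row, "col": col}))
            reply = json.loads(await ws.recv())  # assumed reply: {"row": ..., "col": ...}
            return reply["row"], reply["col"]

    if __name__ == "__main__":
        print(asyncio.run(send_start_position(0, 0)))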

Source Code, ensure you are in the right Conda environment

  • To use your own map/.msg file from OpenVSLAM, put it in the /data folder. There are a few options here: you can either use the raw .msg file, which our MapFileUnpacker.py will handle, or create a CSV of 0s and 1s laid out as the map, with 0 meaning unoccupied and 1 meaning occupied in the occupancy grid map. For even easier storage, you can run MapFileUnpacker.py and have it extract the keyframes into a CSV, which you can then use with OLD_main.py or main.py (a sketch of this unpacking step follows this list). We recommend using the .msg map file you created.
  • You can use either OLD_main.py or main.py. OLD_main.py can be run without driving the motors on the connected Jetson, which is helpful for debugging and testing before you implement the map on a Jetson. main.py will ONLY work on a Jetson, as it calls JetsonMotorInterface.py, which uses the Jetson.GPIO library that can only be installed on a Jetson.
  • If the Android phone is set up, you will need to edit main.py to send the start position to the webserver via send_location.py. You will also need to uncomment a few lines so that the current map is sent to the /var/www/html filepath. The phone should then be able to send back an end value, which is passed to def main; otherwise, def main runs with an end value predefined in code.
  • To set up the pinout, first build arduino_motor_ctrl.ino onto the Arduino connected to the motor controller. You can use virtually any pins on the Arduino, depending on which Arduino you use; set these pins in the .ino file. Next, set the Jetson pins that output data to the Arduino pins; these are set in JetsonMotorInterface.py (a sketch follows the notes below). Be careful not to use any I2C or UART pins, as these cannot be configured as GPIO outputs.
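
The unpacking step referenced above can be sketched roughly as follows. This is an illustration only, not the repository's MapFileUnpacker.py; the field names assumed for OpenVSLAM's .msg layout ("keyframes", "trans_cw") and the file paths are assumptions to check against your own map file.

    # unpack_map_sketch.py -- illustrative only; not the repository's MapFileUnpacker.py.
    # Reads an OpenVSLAM .msg map, pulls out keyframe positions, and writes them to a CSV.
    import csv
    import msgpack

    MSG_PATH = "data/map.msg"        # assumption: your OpenVSLAM map file
    CSV_PATH = "data/keyframes.csv"  # assumption: output consumed by main.py / OLD_main.py

    with open(MSG_PATH, "rb") as f:
        data = msgpack.unpackb(f.read(), raw=False)

    # Each keyframe entry is assumed to carry a 3-element translation vector.
    rows = []
    for kf_id, kf in data.get("keyframes", {}).items():
        x, y, z = kf["trans_cw"]
        rows.append((kf_id, x, y, z))

    with open(CSV_PATH, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["keyframe_id", "x", "y", "z"])
        writer.writerows(rows)

    print(f"Wrote {len(rows)} keyframes to {CSV_PATH}")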

Note: To run main.py properly without issues, it is recommended to follow this so that you do not need sudo for any of the /src files. Running with sudo picks up a different set of libraries and the scripts will not run properly. If you get an Illegal Instruction error, try creating a Conda environment to run these scripts.

Note: We are using a Sabertooth 2x32 Dual 32A motor driver to drive our dual wheelchair motors. The Arduino also gets its power from the motor driver, but do not connect it there while the Arduino is connected to the computer for building.
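
The Jetson side of that pinout can be sketched roughly like this. It is an illustration only, not the repository's JetsonMotorInterface.py; the pin numbers and the three-signal scheme are assumptions, so pick free GPIO pins (not I2C/UART) on your own header.

    # jetson_motor_sketch.py -- illustrative sketch; not the repository's JetsonMotorInterface.py.
    # Drives a few GPIO output pins that the Arduino reads to select a motor command.
    import Jetson.GPIO as GPIO

    LEFT_PIN = 29     # assumption: any free GPIO pin, BOARD numbering
    RIGHT_PIN = 31    # assumption
    FORWARD_PIN = 33  # assumption

    def setup():
        GPIO.setmode(GPIO.BOARD)
        GPIO.setup([LEFT_PIN, RIGHT_PIN, FORWARD_PIN], GPIO.OUT, initial=GPIO.LOW)

    def drive_forward():
        GPIO.output(FORWARD_PIN, GPIO.HIGH)
        GPIO.output([LEFT_PIN, RIGHT_PIN], GPIO.LOW)

    def stop():
        GPIO.output([LEFT_PIN, RIGHT_PIN, FORWARD_PIN], GPIO.LOW)

    if __name__ == "__main__":
        setup()
        try:
            drive_forward()
        finally:
            stop()
            GPIO.cleanup()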

A few things to be wary of: in main.py, since we are not using the localization from VSLAM, we simulate a path over the created map. This path will run differently depending on how accurate it is and on the speed of your motors. We recommend scaling your room to your map, so you will want to section out your map in code and use a timing ratio to ensure the chair moves the right number of occupancy-grid-map spaces. This is explained in more detail in the code and sketched below.
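
The timing ratio amounts to converting one occupancy-grid cell into a drive time. The sketch below uses assumed numbers; measure your own room, grid size, and chair speed.

    # timing_ratio_sketch.py -- illustrative only; calibrate every constant for your setup.
    # Converts a path length in occupancy-grid cells into a motor run time.

    ROOM_LENGTH_M = 6.0     # assumption: measured length of the room, in metres
    GRID_CELLS_ALONG = 40   # assumption: grid cells spanning that length in your map
    CHAIR_SPEED_MPS = 0.5   # assumption: measured wheelchair speed, metres per second

    METERS_PER_CELL = ROOM_LENGTH_M / GRID_CELLS_ALONG
    SECONDS_PER_CELL = METERS_PER_CELL / CHAIR_SPEED_MPS

    def drive_time(path_cells):
        """Seconds to keep the motors on to traverse `path_cells` grid cells."""
        return path_cells * SECONDS_PER_CELL

    if __name__ == "__main__":
        print(f"{METERS_PER_CELL:.3f} m per cell, {SECONDS_PER_CELL:.2f} s per cell")
        print(f"A 10-cell straight segment needs {drive_time(10):.1f} s of drive time")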

The reinforcement learning files inside /src/RL are purely experimental, though they do work for training. Due to time constraints, they have not been polished enough to work with our design. They are published here, fully open source, for any future use.
