A Decentralized Omnidirectional Visual-Inertial-UWB State Estimation System for Aerial Swarm

Introduction

Omni-swarm is a decentralized omnidirectional visual-inertial-UWB state estimation system for aerial swarms. To address the issues of observability, complicated initialization, insufficient accuracy, and lack of global consistency, we introduce an omnidirectional perception system as the front-end of Omni-swarm. It consists of omnidirectional sensors, including stereo fisheye cameras and ultra-wideband (UWB) sensors, and algorithms, including fisheye visual-inertial odometry (VIO), multi-drone map-based localization, and visual object detection. Graph-based optimization and forward propagation work as the back-end of Omni-swarm, fusing the measurements from the front-end. According to the experimental results, the proposed decentralized state estimation method achieves centimeter-level relative state estimation accuracy on the swarm system while ensuring global consistency. Moreover, supported by Omni-swarm, inter-drone collision avoidance can be accomplished in a fully decentralized scheme without any external device, demonstrating the potential of Omni-swarm to be the foundation of autonomous aerial swarm flights in different scenarios.

The structure of Omni-swarm:

The fused measurements of Omni-swarm:

The detailed back-end structure of the state estimation of Omni-swarm:

Usage

Omni-swarm officially supports the NVIDIA Jetson TX2 with Ubuntu 18.04. If you are running on other hardware or a different system setup, you must convert the CNN models to TensorRT engines yourself.
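As a sketch, such a conversion can be done with TensorRT's `trtexec` tool, assuming the models are available in ONNX format. The file names below are placeholders, not files shipped with Omni-swarm:

```shell
# Hypothetical conversion sketch: build a TensorRT engine from an ONNX model
# on the target device. "model.onnx" and "model.trt" are placeholder names;
# adjust the precision flag (--fp16) to match your hardware's capabilities.
trtexec --onnx=model.onnx --saveEngine=model.trt --fp16
```

Because TensorRT engines are tied to the GPU architecture they were built on, the conversion must be run on the device that will execute the models.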

Click here to download the CNN models for Omni-swarm and extract them to the swarm_loop folder.

Click here to get the raw and preprocessed official omnidirectional and pinhole datasets.

swarm_msgs and inf_uwb_ros are compulsory, and swarm_detector is required only if you want to use the visual object detector.
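A minimal workspace-setup sketch for these dependencies, assuming they are hosted under the HKUST-Aerial-Robotics GitHub organization (the URLs are assumptions; verify them before cloning):

```shell
# Hypothetical catkin workspace setup; repository URLs are assumed from the
# HKUST-Aerial-Robotics organization and may differ from the actual locations.
cd ~/catkin_ws/src
git clone https://github.com/HKUST-Aerial-Robotics/swarm_msgs.git
git clone https://github.com/HKUST-Aerial-Robotics/inf_uwb_ros.git
git clone https://github.com/HKUST-Aerial-Robotics/swarm_detector.git  # optional, for the detector
cd ~/catkin_ws && catkin_make
```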

First, run the pinhole or fisheye version of VINS-Fisheye (yes, VINS-Fisheye is pinhole-compatible and is essential for Omni-swarm).

Start map-based localization with

roslaunch swarm_loop nodelet-sfisheye.launch

or the pinhole version

roslaunch swarm_loop realsense.launch

Start the visual object detector (not compulsory) by

roslaunch swarm_detector detector.launch

Start the UWB communication module (only the NoopLoop UWB module is supported) with

roslaunch localization_proxy uwb_comm.launch start_uwb_node:=true

If you don't have a UWB module, you may start the communication with a self ID (starting from 1; it must be different on each drone):

roslaunch localization_proxy uwb_comm.launch start_uwb_node:=true enable_uwb:=false self_id:=1
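Since every drone needs a distinct self ID, the command differs only in the self_id argument. A sketch for a 5-drone setup (matching loop-5-drone.launch below; the IDs are illustrative):

```shell
# Print the per-drone communication launch command for a 5-drone swarm.
# Self IDs start at 1 and must differ on each drone; in practice each
# command runs on its own drone rather than in a loop on one machine.
for id in 1 2 3 4 5; do
  echo "roslaunch localization_proxy uwb_comm.launch start_uwb_node:=true enable_uwb:=false self_id:=${id}"
done
```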

Start state estimation with visualizer by

roslaunch swarm_localization loop-5-drone.launch bag_replay:=true viz:=true enable_distance:=false cgraph_path:=/home/your-name/output/graph.dot
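The cgraph_path argument above writes a graph in Graphviz DOT format. Assuming Graphviz is installed, the exported file can be rendered for inspection (paths here mirror the launch command above):

```shell
# Render the exported .dot graph to a PNG image (requires Graphviz).
dot -Tpng /home/your-name/output/graph.dot -o /home/your-name/output/graph.png
```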

You may enable or disable specific measurements by adding arguments such as

enable_distance:=false, enable_detection:=false, or enable_loop:=true

To visualize the real-time estimation result, use viz:=true. Add bag_replay:=true only when replaying a dataset for evaluation; when evaluating a pre-processed dataset, you only need to launch loop-5-drone.launch. Some analysis tools are located in DataAnalysis.

LICENSE

GPLv3

Comments
  • Dependencies of this project

    Dependencies of this project

Hi @xuhao1, thanks for your work and for sharing the code.

What are the dependencies of this project? I got some errors when I compiled it:
error1: bspline/Bspline.h: No such file or directory
error2: tensorrt_generic.h:22:5: error: ‘Logger’ does not name a type
error3: faiss/IndexFlat.h: No such file or directory — is faiss required?
I want to test the code on one robot; maybe vins-fisheye + swarm-loop is enough? Is there a simple solution for vins-fisheye + loop closure? Many thanks!

    opened by wolf943134497 2
  • Hope some suggestions about workflow

    Hope some suggestions about workflow

Hello! I appreciate your omni-swarm project. I am also setting up my own multi-UAV system, and I have some questions about the development workflow.

1. Compiling a VIO project on the NVIDIA NX (8G) is too time-consuming; the computer can even get stuck. I usually test my code on a desktop and then deploy it to the NX via Gitee with git pull && catkin_make, but catkin_make takes too much time. If I switch to using a Docker image, will it be faster? (I know of this approach from your TRO paper.)

2. I find that when I rosbag record Intel RealSense D455 images on the NX, some images are lost and the topic frequency drops from 30 Hz to 8~10 Hz. I think this is due to the NX's limited computing resources; my laptop seldom records image topics at such a low frequency. Now I am using the [email protected] method to rosbag record, but the problem still exists. Did you run into this issue in your work?

    opened by myboyhood 2
  • unexpected EOF error when used Docker to pull images

    unexpected EOF error when used Docker to pull images

Thanks for your contribution. Here is my system info:

    Ubuntu 20.04 x86_64 CPU: AMD Ryzen 7 5800H Memory: 16G GPU: RTX 3060 Docker version: 20.10.17

The error info:

    $ docker pull xuhao1/swarm2020:pc
    pc: Pulling from xuhao1/swarm2020
    171857c49d0f: Pulling fs layer 
    419640447d26: Pulling fs layer 
    61e52f862619: Pull complete 
    931f651eb167: Pull complete 
    c2120eda0e96: Pull complete 
    1929b76c1b4e: Pull complete 
    e610187a9606: Pull complete 
    b73960743501: Pull complete 
    becb4b41f71f: Pull complete 
    5cd8afab3fa5: Pull complete 
    0a1612a4b605: Pull complete 
    17acd5442430: Pull complete 
    6f5b134c3039: Pull complete 
    fea7c50eb593: Pull complete 
    45b87c797aa8: Pull complete 
    a3cf0850ba09: Pull complete 
    86cfe54fa447: Pull complete 
    a252564d7b1f: Pull complete 
    84c040372501: Pull complete 
    7889b84da9ed: Pull complete 
    7a2d5080979f: Pull complete 
    b35c2af04ae7: Pull complete 
    4ccf039ea0b1: Pull complete 
    23ff403351f8: Pull complete 
    4f4fb700ef54: Pull complete 
    2cf59c32e49c: Pull complete 
    105b8b24e954: Pull complete 
    6c0f4a75760e: Pull complete 
    40bdb53f1f5c: Pull complete 
    0d75829216be: Pull complete 
    c272198afe55: Pull complete 
    bf5873044c7a: Pull complete 
    2a21507b2ad5: Pull complete 
    cdca5f059023: Pull complete 
    7407bda83046: Pull complete 
    d6d2f83ce920: Pull complete 
    009aaf6011f6: Pull complete 
    2dc3936352b3: Pull complete 
    999acb71c4df: Pull complete 
    a039ec5ed3d9: Pull complete 
    f0ffbc7e0eda: Pull complete 
    11096700ede2: Pull complete 
    0a6de8981ce9: Pull complete 
    645c26c58de3: Pull complete 
    66137f313659: Pull complete 
    9e41e81ab63d: Pull complete 
    2af89813c6cb: Extracting   10.9GB/10.9GB
    94f645ba98af: Download complete 
    cb1585f0b01c: Download complete 
    b824164902a2: Download complete 
    8441b0015451: Download complete 
    9c7d3dec62e1: Download complete 
    d8a45357fa31: Download complete 
    0ae20cfdf9ff: Download complete 
    f20459387df8: Download complete 
    c3b8d2fab945: Download complete 
    a515f42fca61: Download complete 
    c953680ff240: Download complete 
    pc: Pulling from xuhao1/swarm2020
    171857c49d0f: Pulling fs layer 
    419640447d26: Pulling fs layer 
    171857c49d0f: Downloading  1.129MB
    931f651eb167: Downloading  77.32MB/77.32MB
    c2120eda0e96: Downloading  59.78MB/59.78MB
    1929b76c1b4e: Download complete 
    e610187a9606: Download complete 
    b73960743501: Download complete 
    becb4b41f71f: Download complete 
    5cd8afab3fa5: Download complete 
    0a1612a4b605: Download complete 
    17acd5442430: Download complete 
    6f5b134c3039: Download complete 
    fea7c50eb593: Downloading  2.585GB/2.585GB
    45b87c797aa8: Download complete 
    a3cf0850ba09: Download complete 
    86cfe54fa447: Download complete 
    a252564d7b1f: Download complete 
    84c040372501: Download complete 
    7889b84da9ed: Download complete 
    7a2d5080979f: Download complete 
    b35c2af04ae7: Download complete 
    4ccf039ea0b1: Download complete 
    23ff403351f8: Download complete 
    4f4fb700ef54: Download complete 
    2cf59c32e49c: Download complete 
    105b8b24e954: Download complete 
    6c0f4a75760e: Download complete 
    40bdb53f1f5c: Download complete 
    0d75829216be: Download complete 
    c272198afe55: Download complete 
    bf5873044c7a: Download complete 
    2a21507b2ad5: Download complete 
    cdca5f059023: Download complete 
    7407bda83046: Download complete 
    d6d2f83ce920: Download complete 
    009aaf6011f6: Download complete 
    2dc3936352b3: Download complete 
    999acb71c4df: Download complete 
    a039ec5ed3d9: Download complete 
    f0ffbc7e0eda: Download complete 
    11096700ede2: Download complete 
    0a6de8981ce9: Download complete 
    645c26c58de3: Download complete 
    66137f313659: Download complete 
    9e41e81ab63d: Download complete 
    2af89813c6cb: Download complete 
    94f645ba98af: Download complete 
    cb1585f0b01c: Download complete 
    b824164902a2: Download complete 
    8441b0015451: Download complete 
    9c7d3dec62e1: Download complete 
    d8a45357fa31: Download complete 
    0ae20cfdf9ff: Download complete 
    f20459387df8: Download complete 
    c3b8d2fab945: Download complete 
    a515f42fca61: Download complete 
    c953680ff240: Download complete 
    unexpected EOF
    

I have tried this command many times; what should I do next?

I tried changing the registry mirror, for example:

    ...
    Registry Mirrors:
      https://hub-mirror.c.163.com/
      https://mirror.baidubce.com/
    ...
    

Also, maybe my storage is low; only 30G of capacity is left.

I tried the non-docker way and also ran into difficulties configuring VINS-Fisheye, so I hope Docker can reduce the difficulty for me.

    opened by curious-energy 2
  • Non-docker configuration settings

    Non-docker configuration settings

Hello, thank you for your contribution with Omni-Swarm!

I want to ask about the proper dataset configuration for using your bag files. I have successfully built the omni-swarm packages (including their dependencies) without the Docker image, and I'd appreciate it if you could share which nodes and configuration files need to be launched.

    Thank you very much!

    opened by korizona 2
  • Dependencies

    Dependencies

    Hi,

I found your package very interesting and I want to run it, but I could not find any install script for the dependencies or instructions on how to build it. So I started to figure it out by myself, but ran into an issue with your bspline message type: it is used in the inf_uwb_ros package with a drone_id property, while in the FUEL package this message type does not contain any drone_id property. I tried to build on Ubuntu 20.04 and ROS Noetic, but the above issue is not related to system incompatibility.

If you could provide some build instructions, a dependency install script, or a Dockerfile, that would be awesome. If I can help you with that, just let me know.

    Thanks, Kuba

    opened by RocketFan 2
Owner
HKUST Aerial Robotics Group