RRxIO - Robust Radar Visual/Thermal Inertial Odometry: Robust and accurate state estimation even in challenging visual conditions.

Overview

RRxIO - Robust Radar Visual/Thermal Inertial Odometry

RRxIO offers robust and accurate state estimation even in challenging visual conditions. RRxIO combines radar ego-velocity estimates and Visual Inertial Odometry (VIO) or Thermal Inertial Odometry (TIO) in a single filter by extending rovio. Thus, state estimation is possible in challenging visual conditions (e.g. darkness, direct sunlight, fog) as well as in challenging thermal conditions (e.g. environments with little temperature gradient or outages caused by non-uniformity corrections). In addition, the drift-free radar ego-velocity estimates reduce scale errors and improve the overall accuracy compared to monocular VIO/TIO. RRxIO runs many times faster than real-time on an Intel NUC i7 and in real-time on an UpCore embedded computer.
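For intuition only, below is a minimal sketch of how a 3D radar ego-velocity measurement can be fused as an additional EKF update. This is not the actual rovio-based C++ implementation; the state layout, noise matrices, and the assumption that the radar frame is already aligned with the body frame are illustrative assumptions.

import numpy as np

def radar_velocity_update(x, P, v_radar, R_radar, vel_idx=slice(3, 6)):
    # Illustrative EKF update with a direct velocity measurement (sketch only).
    # x       : (n,)   state vector containing the body velocity at vel_idx (assumed layout)
    # P       : (n, n) state covariance
    # v_radar : (3,)   radar ego-velocity estimate
    # R_radar : (3, 3) measurement noise of the radar ego-velocity
    n = x.size
    H = np.zeros((3, n))
    H[:, vel_idx] = np.eye(3)           # measurement model: z = v + noise
    y = v_radar - H @ x                 # innovation
    S = H @ P @ H.T + R_radar           # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(n) - K @ H) @ P
    return x_new, P_new

The actual filter additionally accounts for the IMU-to-radar extrinsics (see the l_b_r_* and q_b_r_* parameters in the launch output quoted in the comments below).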

Cite

If you use RRxIO for your academic research, please cite our related paper:

@INPROCEEDINGS{DoerIros2021,
  author={Doer, Christopher and Trommer, Gert F.},
  booktitle={2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}, 
  title={Radar Visual Inertial Odometry and Radar Thermal Inertial Odometry: Robust Navigation even in Challenging Visual Conditions}, 
  year={2021}}

Demo Result: IRS Radar Thermal Visual Inertial Datasets IROS 2021

Motion Capture Lab (translational RMSE, ATE [m])

[result image: see repository]

Indoor and Outdoor (translational RMSE, ATE [m])

[result image: see repository]

Runtime (real-time factor)

[result image: see repository]

Getting Started

RRxIO depends on ROS and the catkin build system (the launch output in the comments below shows ROS Melodic).

Additional dependencies are required to run the evaluation framework:

  • sudo apt-get install texlive-latex-extra texlive-fonts-recommended dvipng cm-super
  • pip2 install -U PyYAML colorama ruamel.yaml==0.15.0

Further dependencies are included via git submodules; initialize them once after cloning: git submodule update --init --recursive

Building in Release mode is highly recommended:

catkin build rrxio --cmake-args -DCMAKE_BUILD_TYPE=Release

Run Demos

Download the IRS Radar Thermal Visual Inertial Datasets IROS 2021.

Run the mocap_easy dataset with visual RRxIO:

roslaunch rrxio rrxio_visual_iros_demo.launch rosbag_dir:=<path-to-rtvi_datasets_iros_2021> rosbag:=mocap_easy

Run the outdoor_street dataset with thermal RRxIO:

roslaunch rrxio rrxio_thermal_iros_demo.launch rosbag_dir:=<path-to-rtvi_datasets_iros_2021> rosbag:=outdoor_street
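To sanity-check a running demo, a minimal rospy subscriber can print the estimated poses. The odometry topic name below is only an assumption based on the rovio-style naming visible in the (truncated) launch output quoted in the comments further down; check rostopic list for the actual name.

import rospy
from nav_msgs.msg import Odometry

def on_odom(msg):
    # Print the timestamp and estimated position of each odometry message.
    p = msg.pose.pose.position
    rospy.loginfo("t=%.3f position=[%.2f, %.2f, %.2f]",
                  msg.header.stamp.to_sec(), p.x, p.y, p.z)

rospy.init_node("rrxio_odom_listener")
rospy.Subscriber("/rrxio/rovio/odometry", Odometry, on_odom)  # topic name is an assumption
rospy.spin()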

Run Evaluation on the IRS Radar Thermal Visual Inertial Datasets IROS 2021

An evaluation script is also provided which runs an extensive evaluation of RRxIO_10, RRxIO_15, and RRxIO_25 on all sequences of the IRS Radar Thermal Visual Inertial Datasets IROS 2021:

rosrun rrxio evaluate_iros_datasets.py <path-to-rtvi_datasets_iros_2021>

After some time, the results can be found at <path-to-rtvi_datasets_iros_2021>/results/evaluation/<10/15/25>/evaluation_full_align. These are the results shown above.
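For reference, the reported metric (translational RMSE of the Absolute Trajectory Error) can be understood through the simplified sketch below. It is not the repository's evaluation code; it uses a translation-only alignment for brevity, whereas the evaluation above performs a full trajectory alignment.

import numpy as np

def ate_rmse(p_est, p_gt):
    # p_est, p_gt: (N, 3) time-synchronized estimated and ground-truth positions.
    # Simplified alignment: remove the mean offset between the two trajectories.
    p_est_aligned = p_est - p_est.mean(axis=0) + p_gt.mean(axis=0)
    err = np.linalg.norm(p_est_aligned - p_gt, axis=1)   # per-timestamp position error
    return np.sqrt(np.mean(err ** 2))                    # RMSE over all timestamps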

Comments
  • How to start it

    SUMMARY

    CLEAR PARAMETERS

    • /rrxio/

    PARAMETERS

    • /odom_to_path_rrxio_visual/filter_name: RRxIO Visual
    • /odom_to_path_rrxio_visual/topic_odom: /rrxio/rovio/odom...
    • /plot_states_uncertainty/topic_prefix: /rrxio/rovio/
    • /rosdistro: melodic
    • /rosversion: 1.14.12
    • /rrxio/N_ransac_points: 3
    • /rrxio/allowed_outlier_percentage: 0.25
    • /rrxio/azimuth_thresh_deg: 60
    • /rrxio/bag_dur: 10000
    • /rrxio/bag_start: 0
    • /rrxio/cam0_topic_name: /sensor_platform/...
    • /rrxio/cam1_topic_name: cam1_not_used
    • /rrxio/camera0_config: /home/linux/datab...
    • /rrxio/camera_frame: camera
    • /rrxio/elevation_thresh_deg: 60
    • /rrxio/filter_config: /home/linux/catki...
    • /rrxio/filter_max_z: 100
    • /rrxio/filter_min_z: -100
    • /rrxio/imu_frame: base_link
    • /rrxio/imu_topic_name: /sensor_platform/imu
    • /rrxio/inlier_thresh: 0.25
    • /rrxio/l_b_r_x: 0.0
    • /rrxio/l_b_r_y: 0.03
    • /rrxio/l_b_r_z: 0.06
    • /rrxio/max_dist: 100
    • /rrxio/max_frame_ctr: 2000000
    • /rrxio/max_r_cond: 1000
    • /rrxio/max_sigma_x: 0.5
    • /rrxio/max_sigma_y: 0.5
    • /rrxio/max_sigma_z: 0.5
    • /rrxio/min_db: 5
    • /rrxio/min_dist: 0.25
    • /rrxio/outlier_prob: 0.4
    • /rrxio/q_b_r_w: 0.00524
    • /rrxio/q_b_r_x: 0.69946
    • /rrxio/q_b_r_y: 0.71461
    • /rrxio/q_b_r_z: 0.00723
    • /rrxio/radar_velocity_correction_factor: 1.0
    • /rrxio/rosbag_filename: /home/linux/datab...
    • /rrxio/sigma_offset_radar_x: 0.1
    • /rrxio/sigma_offset_radar_y: 0.05
    • /rrxio/sigma_offset_radar_z: 0.1
    • /rrxio/sigma_v_d: 0.125
    • /rrxio/sigma_zero_velocity_x: 0.025
    • /rrxio/sigma_zero_velocity_y: 0.025
    • /rrxio/sigma_zero_velocity_z: 0.025
    • /rrxio/success_prob: 0.999999
    • /rrxio/thresh_zero_velocity: 0.05
    • /rrxio/timeshift_cam_imu: -0.004
    • /rrxio/topic_radar_scan: /sensor_platform/...
    • /rrxio/topic_radar_trigger: /sensor_platform/...
    • /rrxio/topic_vel: not_used
    • /rrxio/use_cholesky_instead_of_bdcsvd: True
    • /rrxio/use_odr: True
    • /rrxio/use_ransac: True
    • /rrxio/world_frame: odom

    NODES
      /
        odom_to_path_rrxio_visual (rrxio/odom_to_path.py)
        plot_states_uncertainty (rrxio/plot_states_uncertainty.py)
        rrxio (rrxio/rrxio_rosbag_loader_10)
        rviz (rviz/rviz)

    ROS_MASTER_URI=http://localhost:11311

    process[rrxio-1]: started with pid [4360]
    process[odom_to_path_rrxio_visual-2]: started with pid [4362]
    process[plot_states_uncertainty-3]: started with pid [4363]
    process[rviz-4]: started with pid [4364]
    [ INFO] [1656057227.880663612]: RRxIO started

    Hello, I successfully ran the code using roslaunch rrxio rrxio_thermal_iros_demo.launch rosbag_dir:=/home/linux/database/irs_rtvi rosbag:=outdoor_street N:=10, but nothing happened; the terminal is stuck at "RRxIO started". What should I do? Did I do something wrong? How can I start the VIO?

    opened by BlueAkoasm 5
  • Error when using custom dataset

    opened by Ericwen2001 1
Owner

Christopher Doer