ICRA 2021 - Robust Place Recognition using an Imaging Lidar

Overview


A place recognition package that uses a high-resolution imaging lidar. For best performance, a lidar with more than 64 uniformly distributed channels is strongly recommended, e.g., the Ouster OS1-128.



Dependency

  • ROS
  • DBoW3
    cd ~/Downloads/
    git clone https://github.com/rmsalinas/DBow3.git
    cd ~/Downloads/DBow3/
    mkdir build && cd build
    cmake -DCMAKE_BUILD_TYPE=Release ..
    sudo make install
    

Install Package

Use the following commands to download and compile the package.

cd ~/catkin_ws/src
git clone https://github.com/TixiaoShan/imaging_lidar_place_recognition.git
cd ..
catkin_make

Notes

Download

The three datasets used in the paper can be downloaded from Google Drive. The lidar used for data gathering is an Ouster OS1-128.

https://drive.google.com/drive/folders/1G1kE8oYGKj7EMdjx7muGucXkt78cfKKU?usp=sharing

Point Cloud Format

A customized point cloud format, PointOuster, is defined in parameters.h. The customized point cloud is projected onto various images in image_handler.h. If you are using your own dataset, please modify these two files to accommodate your data format.
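For readers adapting their own data, the sketch below illustrates the general idea of projecting such a point onto an image: the ring (channel) index selects the row and the azimuth selects the column. The struct fields and function names here are illustrative assumptions, not the actual definitions in parameters.h or image_handler.h.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Hypothetical mirror of a per-point Ouster layout; the real field set
// lives in parameters.h and may differ.
struct PointOusterExample {
    float x, y, z;
    float intensity;
    uint32_t t;       // relative time within the scan (ns)
    uint16_t ring;    // laser channel index; typically selects the image row
};

// Map a point's azimuth to an image column in [0, num_cols).
inline int projectColumn(const PointOusterExample& p, int num_cols) {
    float azimuth = std::atan2(p.y, p.x);  // [-pi, pi]
    int col = static_cast<int>(std::round(
        (azimuth + static_cast<float>(M_PI)) /
        (2.0f * static_cast<float>(M_PI)) * num_cols));
    return ((col % num_cols) + num_cols) % num_cols;  // wrap around the seam
}
```

A point straight ahead of the sensor (positive x axis) lands in the middle column, while points directly behind it wrap around to the image seam.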

Visualization logic

In the current implementation, the package subscribes to a path message published by a SLAM framework, e.g., LIO-SAM. When a new point cloud arrives, the package associates it with the latest pose in the path. If a match is detected between two point clouds, an edge marker is drawn between the corresponding poses. This design accounts for the drift that SLAM methods typically accumulate: when a loop closure is performed, the pose associated with a point cloud also needs to be updated. By re-reading the republished path, the visualization can correct point cloud poses after the fact, which is not possible with TF or odometry messages that are fixed once published.
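The association step described above can be sketched as a timestamp lookup: pick the last pose whose stamp does not exceed the cloud's stamp. The types and function below are simplified stand-ins, not the package's actual code; the key property is that re-running the lookup against a freshly republished path refreshes the association after a loop closure.

```cpp
#include <cassert>
#include <vector>

// Minimal stand-in for a stamped pose from a nav_msgs/Path-like message.
struct StampedPose {
    double stamp;     // seconds
    double x, y, z;   // position only, for brevity
};

// Return the latest pose at or before cloud_stamp, or nullptr if the
// cloud predates the whole path. Path stamps are assumed increasing.
inline const StampedPose* latestPoseFor(const std::vector<StampedPose>& path,
                                        double cloud_stamp) {
    const StampedPose* best = nullptr;
    for (const auto& pose : path) {
        if (pose.stamp <= cloud_stamp)
            best = &pose;
        else
            break;  // stamps are monotonic, so we can stop early
    }
    return best;
}
```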

Image Crop

It's recommended to set the image_crop parameter in params.yaml to 196-256 when testing the indoor and handheld datasets, because the operator walks right behind the lidar during data gathering, and features extracted from the operator's body may cause unreliable matching. Set this parameter to 0 when testing the Jackal dataset, which improves reverse-visit detection performance.
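One plausible realization of such a crop, assuming image_crop simply excludes the first N columns of the projected image (the region behind the sensor, where the operator stands) from feature extraction, is a per-column mask. This is an illustrative sketch only; the parameter's exact semantics are defined by the package itself.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Build a per-column mask: 1 = column usable for feature extraction,
// 0 = cropped out. With image_crop == 0, nothing is cropped.
inline std::vector<uint8_t> buildColumnMask(int num_cols, int image_crop) {
    std::vector<uint8_t> mask(num_cols, 1);
    for (int c = 0; c < image_crop && c < num_cols; ++c)
        mask[c] = 0;
    return mask;
}
```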


Test Package

  1. Run the launch file:
roslaunch imaging_lidar_place_recognition run.launch
  2. Play existing bag files:
rosbag play indoor_registered.bag -r 3

Paper

If you use any of this code or the datasets, please cite our paper.

@inproceedings{robust2021shan,
  title={Robust Place Recognition using an Imaging Lidar},
  author={Shan, Tixiao and Englot, Brendan and Duarte, Fabio and Ratti, Carlo and Rus, Daniela},
  booktitle={IEEE International Conference on Robotics and Automation (ICRA)},
  pages={to-be-added},
  year={2021},
  organization={IEEE}
}

Acknowledgement

  • The point clouds in the provided datasets are registered using LIO-SAM.
  • The package is heavily adapted from VINS-Mono.