DLL: Direct Lidar Localization


Summary

This package presents DLL, a direct map-based localization technique using a 3D LIDAR, aimed at aerial robots. DLL implements point cloud to map registration based on non-linear optimization of the distances between the points and the map, so it requires neither feature extraction nor point correspondences. Given an initial pose, the method tracks the pose of the robot by refining the pose predicted by odometry. It performs much better than Monte Carlo localization methods and achieves precision comparable to other optimization-based approaches, while running one order of magnitude faster. The method is also robust under odometric errors.
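As a rough sketch of the registration (our notation, not a formula reproduced from the paper): given a distance field D(·) precomputed from the map and the sensor points p_i, DLL searches for the rigid transform T that minimizes the summed squared point-to-map distances,

T^* = \arg\min_{T} \sum_{i} D(T \, p_i)^2

seeding the optimizer with the pose predicted by odometry, which is why a reasonable initial estimate matters.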

DLL is fully integrated into the Robot Operating System (ROS). Following the general ROS localization approach, DLL uses sensor data to compute the transform that best fits the robot odometry TF into the map. Although an odometry system is recommended for fast and accurate localization, DLL also performs well without odometry information if the robot moves smoothly.

DLL experimental results in different setups

Software dependencies

There are no hard dependencies except for Google Ceres Solver and ROS.
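On Ubuntu, for instance, Ceres can usually be installed from the distribution packages (a minimal sketch; the package name may differ on your distribution, and ROS is assumed to be installed already):

$ sudo apt install libceres-dev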

Hardware requirements

DLL has been tested on a 10th-generation Intel i7 processor with 16 GB of RAM. No graphics card is needed. The optimization is currently configured to be single-threaded; you can reduce the processing time by roughly 33% simply by increasing the number of threads used by Ceres Solver.
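Since v1.1 the maximum number of solver threads is exposed as a parameter (see the release notes below). A hypothetical invocation, assuming the launch file forwards a solver_max_threads argument (the actual argument name may differ; check the launch files):

$ roslaunch dll airsim1.launch solver_max_threads:=4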

Compilation

Download this source code into the src folder of your catkin workspace:

$ cd catkin_ws/src
$ git clone https://github.com/robotics-upo/dll

Compile the project:

$ cd catkin_ws
$ catkin_make
$ source devel/setup.bash

How to use DLL

You can find several examples in the launch directory. The module needs the following input information:

  • A map of the environment, provided as an OctoMap .bt file.
  • An initial pose of the robot in the map.
  • The base_link to odom TF. If the sensor is not in the base_link frame, the corresponding TF from the sensor frame to base_link must also be provided (see the quick checks after this list).
  • A 3D point cloud from the sensor. This information can be provided by a 3D LIDAR or a 3D camera.
  • IMU information, used to obtain the roll and pitch angles. If you don't have an IMU, DLL will take the roll and pitch estimates from odometry as ground truth.
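A quick way to verify that these inputs are actually available before launching DLL, using standard ROS tools (topic and frame names depend on your setup):

$ rostopic list                      # confirm your point cloud and IMU topics are being published
$ rosrun tf tf_echo odom base_link   # confirm the odometry TF is being broadcast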

Once launched, DLL will publish a TF between map and odom that aligns the sensor point cloud to the map.
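With that TF in place, the full robot pose in the map frame can be inspected by chaining the transforms with the standard TF tools:

$ rosrun tf tf_echo map base_link   # robot pose in the map frame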

When a new map is provided, DLL will compute the Distance Field grid. This file is generated automatically on startup if it does not exist. Once generated, it is stored in the same path as the .bt map (as a .grid file), so it does not need to be recomputed in future executions.

As an example, you can download five datasets from the Service Robotics Laboratory repository (https://robotics.upo.es/datasets/dll/). The example launch files are prepared and configured to work with these bags, and you can inspect the different parameters of the method there. Notice that, except for mbzirc.bag, these bags do not include an odometry estimation. For this reason, as an easy workaround, the launch files publish a fake odometry that is the identity matrix. DLL is faster and more accurate when good odometry is available.
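A typical session with one of these datasets might look as follows (a sketch: airsim1.launch is one of the provided launch files, but the exact bag filename and where the launch file expects to find it are assumptions you should check against the launch file):

$ wget https://robotics.upo.es/datasets/dll/airsim.bag   # hypothetical bag filename
$ roslaunch dll airsim1.launch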

Cite

DLL has been accepted for publication in IROS 2021.

F. Caballero and L. Merino. "DLL: Direct LIDAR Localization. A map-based localization approach for aerial robots". IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021.

You can download a preliminary version of the paper from arXiv.

Comments
  • Using Livox Mid-70 gets bad results

    Hi, I use a Livox Mid-70 with wheel odometry and an IMU, but the localization result is not good; the robot pose always "jumps" while running. Any idea how to get a better result (a stable, smooth, continuous path)?

    opened by gongyue666 9
  • Run other datasets

    Hello! I saved a .ot file in dll/maps and set <arg name="map" default="myown.ot" />, but when I run the program it shows "NULL otcomap". Why? Where else do I need to set the path?

    opened by MIke-1118 6
  • Tested the given bag failed

    Hi, thanks for your great work! I downloaded the given bag to test DLL, but when I launch the launch file it always shows this error:

        Octomap loaded
        Map size:
          x: 37.2 to 92.75
          y: 41.95 to 95.65
          z: -10.4 to 0.15
          Res: 0.05
        Error opening file /home/whx/study/dll_ws/src/dll/maps/airsim.grid for reading
        Computing 3D occupancy grid. This will take some time...
        [ INFO] [1640669470.668451692, 1614448809.604375476]: Progress: 0.000000 %
        [ INFO] [1640669471.163893210, 1614448810.107720910]: Progress: 0.021567 %
        [ INFO] [1640669471.668560708, 1614448810.612384198]: Progress: 0.039648 %
        [ INFO] [1640669472.172075265, 1614448811.115887848]: Progress: 0.053874 %
        [ INFO] [1640669472.680451449, 1614448811.624293216]: Progress: 0.065055 %
        [ INFO] [1640669473.184041975, 1614448812.127884273]: Progress: 0.073926 %
        ...
        [bag_player-2] process has finished cleanly
        log file: /home/whx/.ros/log/5879e12a-679f-11ec-9f57-c0e43482dfff/bag_player-2*.log

    I noticed there is a closed issue that talks about this, so I repeated the same test many times, but it didn't work.

    I hope someone can help me solve the problem.

    Best wishes

    opened by numb0824 2
  • Open map file failed

    Thanks for your great work! I wanted to run your code with roslaunch dll airsim1.launch after fixing the path to the .bag, but I get the error shown in the attached screenshot. Could you help me solve this problem? Thanks.

    opened by huangsiyuan0717 2
  • Transform of input map

    Hello!

    I'd first like to thank you for this work, it's very interesting!

    I have a question regarding the internal representation of the map: when looking through the code, I noticed that you subtract the minimum values from each axis of the points. I suppose this is relevant for the method? I got some (obviously) poor results when I assumed the input map and the internal representation were the same.

    I think it would be nice to make this clearer in the README, or potentially to add a transform between the original map and the internal representation, so that the initial position set in the launch file could be relative to the original map.

    opened by MartinEekGerhardsen 3
Releases
  • v1.1 (Mar 22, 2022)

    Improved memory allocation and solver parameterization

    • Added a use_yaw_increments parameter that uses yaw increments from the IMU since the last LIDAR update as the initial guess for the optimizer. This is a good choice when the robot performs very fast yaw rotations.
    • Added online grid trilinear interpolation computation. This reduces the DLL memory requirements by a factor of approximately 7.
    • Added parameters to set the solver's maximum iterations and maximum number of threads.
    • Added a comprehensive message when the .grid file is not found.
  • v1.0 (Mar 22, 2022)

    Initial Commit

    • This version contains the source code related to the IROS paper detailed in the README.
    • Some cleaning has been done to make it simpler to understand.
Owner
Service Robotics Lab
Service Robotics, Autonomous Robot Navigation, Machine Learning, Social Robotics