LVI-SAM: Tightly-coupled Lidar-Visual-Inertial Odometry via Smoothing and Mapping

Overview

This repository contains code for a lidar-visual-inertial odometry and mapping system, which combines the advantages of LIO-SAM and VINS-Mono at a system level.


Dependency

  • ROS (tested with Kinetic and Melodic)
  • gtsam (Georgia Tech Smoothing and Mapping library)
    wget -O ~/Downloads/gtsam.zip https://github.com/borglab/gtsam/archive/4.0.2.zip
    cd ~/Downloads/ && unzip gtsam.zip -d ~/Downloads/
    cd ~/Downloads/gtsam-4.0.2/
    mkdir build && cd build
    cmake -DGTSAM_BUILD_WITH_MARCH_NATIVE=OFF ..
    sudo make install -j4
    
  • Ceres (C++ library for modeling and solving large, complicated optimization problems)
    sudo apt-get install -y libgoogle-glog-dev
    sudo apt-get install -y libatlas-base-dev
    wget -O ~/Downloads/ceres.zip https://github.com/ceres-solver/ceres-solver/archive/1.14.0.zip
    cd ~/Downloads/ && unzip ceres.zip -d ~/Downloads/
    cd ~/Downloads/ceres-solver-1.14.0
    mkdir ceres-bin && cd ceres-bin
    cmake ..
    sudo make install -j4
    

Compile

You can use the following commands to download and compile the package.

cd ~/catkin_ws/src
git clone https://github.com/TixiaoShan/LVI-SAM.git
cd ..
catkin_make
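
After compiling, source the workspace in each terminal so ROS can find the package (standard catkin step):

source ~/catkin_ws/devel/setup.bash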

Datasets

The datasets used in the paper can be downloaded from Google Drive. The data-gathering sensor suite includes: Velodyne VLP-16 lidar, FLIR BFS-U3-04S2M-CS camera, MicroStrain 3DM-GX5-25 IMU, and Reach RS+ GPS.

https://drive.google.com/drive/folders/1q2NZnsgNmezFemoxhHnrDnp1JV_bqrgV?usp=sharing

Note that the images in the provided bag files are in compressed format, so a decompression command is added at the last line of launch/module_sam.launch. If your own bag records raw image data, please comment this line out.
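
For reference, the same decompression can also be run manually with image_transport's republish node; the topic names below are illustrative and must match the ones in your bag:

rosrun image_transport republish compressed in:=/camera/image_raw raw out:=/camera/image_raw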


Run the package

  1. Configure parameters:
Configure sensor parameters in the .yaml files in the config folder.
  2. Run the launch file:
roslaunch lvi_sam run.launch
  3. Play an existing bag file:
rosbag play handheld.bag
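
If playback outruns your machine, rosbag's standard pause and rate options can help (the values here are illustrative):

rosbag play handheld.bag --pause -r 0.5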

Paper

Thank you for citing our paper if you use any of this code or the datasets.

@inproceedings{lvisam2021shan,
  title={LVI-SAM: Tightly-coupled Lidar-Visual-Inertial Odometry via Smoothing and Mapping},
  author={Shan, Tixiao and Englot, Brendan and Ratti, Carlo and Rus, Daniela},
  booktitle={IEEE International Conference on Robotics and Automation (ICRA)},
  pages={to-be-added},
  year={2021},
  organization={IEEE}
}

Acknowledgement

  • The visual-inertial odometry module is adapted from VINS-Mono.
  • The lidar-inertial odometry module is adapted from LIO-SAM.
Comments
  • Some problems about LVI SAM data evaluation

    Hello, I have a few questions to ask, thank you:

    1. I exported the GPS and IMU data from the handheld dataset and found that the starting point and the ending point do not coincide. How should the accuracy of the method be evaluated?

    2. The SLAM trajectory is expressed in a relative frame. How can it be aligned with the ENU frame of the GPS trajectory?

    3. The sampling rates of the GPS and SLAM trajectories differ, and their time axes are offset. How is the RMSE w.r.t. GPS computed in this case?
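
    A common way to address all three points at once is the evo toolkit: it associates poses by timestamp and applies Umeyama alignment to the SLAM trajectory before computing the RMSE. A minimal sketch, assuming both trajectories are exported in TUM format (the file names are illustrative):

    evo_ape tum gps_enu.tum slam_traj.tum -a --plot    # -a: Umeyama SE(3) alignment; timestamps are associated automatically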

    stale 
    opened by liu19980515 16
  • About external parameters

    Hello, the data is from the KITTI dataset. I set the sensor information and the intrinsic and extrinsic parameters in the yaml files as required, but every time I enable lidar depth in params_camera.yaml, the VIO part crashes with the following error. Is this a problem with the extrinsic parameter settings? (error screenshot attached)

    stale 
    opened by winnnda 7
  • Does LVI-SAM really improve performance?

    Thank you very much for your work. After reading your paper and code, I have the following questions:

    1. The laser odometry only uses the result of the visual odometry as an initial value for optimization. Isn't this loosely coupled?
    2. Compared with LIO-SAM, LVI-SAM only uses the visual odometry result as the initial value for scan matching and does not otherwise improve the laser odometry. Can LVI-SAM really improve performance significantly?
    3. The paper performs factor-graph optimization over the results of the laser odometry, the visual odometry, and IMU preintegration, but the code only uses the laser odometry and the IMU. Since the laser odometry has higher estimation accuracy than the visual odometry, wouldn't adding the visual odometry result to the factor graph reduce the optimization accuracy?
    stale 
    opened by robosu12 7
  • Process has died

    Hello, when I run LVI-SAM on a Jetson NX development board, catkin_make completes successfully, but when I try roslaunch I run into the following node-death problem:

    [ INFO] [1636441314.154527795]: ----> Visual Odometry Estimator Started.
    [ INFO] [1636441314.487589783]: ----> Visual Loop Detection Started.
    [ INFO] [1636441314.585487422]: ----> Visual Feature Tracker Started.
    [lvi_sam_visual_odometry-9] process has died [pid 13242, exit code -11, cmd /home/nvidia/catkin_sam/devel/lib/lvi_sam/lvi_sam_visual_odometry __name:=lvi_sam_visual_odometry __log:=/home/nvidia/.ros/log/e982fd3c-412a-11ec-ad02-48b02d3da899/lvi_sam_visual_odometry-9.log].
    log file: /home/nvidia/.ros/log/e982fd3c-412a-11ec-ad02-48b02d3da899/lvi_sam_visual_odometry-9*.log
    [lvi_sam_visual_odometry-9] restarting process
    process[lvi_sam_visual_odometry-9]: started with pid [13528]
    [ INFO] [1636441314.871099244]: ----> Visual Odometry Estimator Started.
    [ INFO] [1636441314.993614341]: ----> Lidar IMU Preintegration Started.
    [lvi_sam_visual_loop-10] process has died [pid 13249, exit code -11, cmd /home/nvidia/catkin_sam/devel/lib/lvi_sam/lvi_sam_visual_loop __name:=lvi_sam_visual_loop __log:=/home/nvidia/.ros/log/e982fd3c-412a-11ec-ad02-48b02d3da899/lvi_sam_visual_loop-10.log].
    log file: /home/nvidia/.ros/log/e982fd3c-412a-11ec-ad02-48b02d3da899/lvi_sam_visual_loop-10*.log
    [lvi_sam_visual_loop-10] restarting process
    [ INFO] [1636441315.091523468]: ----> Lidar Cloud Deskew Started.
    process[lvi_sam_visual_loop-10]: started with pid [13703]
    [ INFO] [1636441315.169702089]: ----> Lidar Feature Extraction Started.
    [lvi_sam_visual_feature-8] process has died [pid 13231, exit code -11, cmd /home/nvidia/catkin_sam/devel/lib/lvi_sam/lvi_sam_visual_feature __name:=lvi_sam_visual_feature __log:=/home/nvidia/.ros/log/e982fd3c-412a-11ec-ad02-48b02d3da899/lvi_sam_visual_feature-8.log].
    log file: /home/nvidia/.ros/log/e982fd3c-412a-11ec-ad02-48b02d3da899/lvi_sam_visual_feature-8*.log

    My environment: Ubuntu 18.04, ROS Melodic, gtsam 4.0.2, ceres 1.14.0, OpenCV 4.1.1, PCL 1.8.

    How can I solve the problem?

    stale 
    opened by Stephen1e 6
  • large velocity or bias, reset IMU-preintegration!

    There is a significant drop along the z-axis after around two minutes of running, after which LVI-SAM becomes totally unstable. Has anybody else come across this issue? (Two screenshots from 2021-10-13 were attached.)

    opened by SnowCarter 5
  • question about "q_lidar_to_cam q_lidar_to_cam_eigen"

    Hi, thanks for your great work. The code tests well with your data, but it seems the VINS program does not work well with my data. I am confused about what q_lidar_to_cam and q_lidar_to_cam_eigen mean in the code. Hope for your reply, thanks again. @TixiaoShan

    stale 
    opened by HeXu1 5
  • malloc(): memory corruption

    malloc(): memory corruption
    [lvi_sam_mapOptmization-6] process has died [pid 13598, exit code -6, cmd /home/mwy/lvisam/devel/lib/lvi_sam/lvi_sam_mapOptmization __name:=lvi_sam_mapOptmization __log:=/home/mwy/.ros/log/f460d3d2-9099-11ec-a978-9061ae86e6b5/lvi_sam_mapOptmization-6.log].
    log file: /home/mwy/.ros/log/f460d3d2-9099-11ec-a978-9061ae86e6b5/lvi_sam_mapOptmization-6*.log

    How can I solve this problem? I installed gtsam following your instructions, and I am sure I used cmake -DGTSAM_BUILD_WITH_MARCH_NATIVE=OFF ..
    My PCL version is 1.8.

    stale 
    opened by DavidNY123 4
  • Is deskewing correct?

      void findPosition(double relTime, float *posXCur, float *posYCur, float *posZCur)
      {
          *posXCur = 0; *posYCur = 0; *posZCur = 0;
    
          // if (cloudInfo.odomAvailable == false || odomDeskewFlag == false)
          //     return;
    
          // float ratio = relTime / (timeScanNext - timeScanCur);
    
          // *posXCur = ratio * odomIncreX;
          // *posYCur = ratio * odomIncreY;
          // *posZCur = ratio * odomIncreZ;
      }
    

    As far as I understand, this results in the point cloud being deskewed in rotation only. Is this correct?
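
    For reference, restoring the commented-out lines would add translational compensation as well. This is a sketch only, reusing the members already shown in the snippet above (cloudInfo, odomDeskewFlag, timeScanCur/timeScanNext, odomIncreX/Y/Z):

      void findPosition(double relTime, float *posXCur, float *posYCur, float *posZCur)
      {
          *posXCur = 0; *posYCur = 0; *posZCur = 0;

          // skip translational compensation when no odometry increment is available
          if (cloudInfo.odomAvailable == false || odomDeskewFlag == false)
              return;

          // linearly interpolate the odometry increment over the scan period
          float ratio = relTime / (timeScanNext - timeScanCur);
          *posXCur = ratio * odomIncreX;
          *posYCur = ratio * odomIncreY;
          *posZCur = ratio * odomIncreZ;
      }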

    stale 
    opened by juliangaal 4
  • Can't reproduce result with handheld.bag

    Thank you for sharing this awesome work.

    I can't reproduce the results with the shared dataset (https://drive.google.com/drive/folders/1q2NZnsgNmezFemoxhHnrDnp1JV_bqrgV?usp=sharing).

    It works fine at the beginning, but at several points the warning messages below appear and the trajectory drifts. I also checked the library versions (gtsam-4.0.2, ceres-solver-1.14.0).

    Could PC specs have an impact? I installed Linux on a MacBook Pro A1707. Processor: 2.6 GHz Intel Core i7 (i7-6700HQ), RAM: 16 GB, GPU: Radeon Pro 450.

    Warning messages:
    Large bias, reset IMU-preintegration!
    Large velocity, reset IMU-preintegration!

    stale 
    opened by smwgf 4
  • Test bag (more than 10 GB) couldn't be downloaded from Google Drive

    Hello @TixiaoShan, thanks for your great work! I couldn't download the test data from Google Drive because its size is larger than 10 GB. Could you consider uploading the data to Baidu Netdisk as well? That would be convenient for people in China.

    opened by wwtinwhu 4
  • Experiment on NTU VIRAL datasets

    Hi Tixiao,

    I am trying to run some experiments with LVI-SAM on the NTU VIRAL public dataset (download page: https://ntu-aris.github.io/ntu_viral_dataset/), so that I can include LVI-SAM in the list of applicable methods on the NTU VIRAL website.

    However, LVI-SAM diverges quickly after a few minutes. This does not happen if the visual nodes are disabled. Could you please take a look at the configuration and suggest the best settings?

    The forked repository can be found here: https://github.com/brytsknguyen/LVI-SAM

    Working examples of LIO SAM and VINs-Mono on NTU VIRAL datasets can be found here: https://github.com/brytsknguyen/LIO-SAM https://github.com/brytsknguyen/VINS-Mono

    opened by brytsknguyen 3
  • Moving Objects in SLAM Output

    Hi there!

    Does anyone have any thoughts or suggestions on how, or whether, to use LVI-SAM in more dynamic environments (lots of moving objects, including urban environments or indoor scenes)?

    I saw previously that with LIO-SAM you can utilize Scan Context + Removert, and there was a recent paper, RF-LIO, that focused on moving-object removal. But I would like to utilize the additional VIO data for better map quality while also being able to filter out moving objects (ideally I would also like to get tracks for the moving objects, or at least the associated points).

    Any thoughts on this?

    opened by bfan1256 1
  • Add the save trajectory function

    @TixiaoShan
    Hi, thanks for your great work.
    I added some functions to save the trajectory to a txt file.
    The file is in TUM format, so it can be visualized directly using the evo tool.

    A visualization of the results on the M2DGR dataset using the evo tool was attached (result_lvi-sam).
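
    For reference, a TUM-format trajectory can be plotted and evaluated with evo as follows (the file names are illustrative):

    evo_traj tum lvi_sam_traj.txt --plot                  # plot the saved trajectory
    evo_ape tum ground_truth.txt lvi_sam_traj.txt -a      # ATE/RMSE after Umeyama alignment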

    I hope this PR is helpful.

    Thanks,

    opened by Taeyoung96 4
  • difference between code and article

    The sentence "The constraints from visual odometry, lidar odometry, IMU preintegration, and loop closure are optimized jointly in the factor graph" is mentioned in the related articles, but it seems that the visual and inertial constraints are not included in the SAM optimization in mapOptimization.cpp.

    stale 
    opened by wjf1997 1
  • calibration of fisheye and Lidar

    Hi! Thanks for your excellent work. I am running your algorithm, but I do not know how to calibrate the fisheye camera and the lidar. Could you please tell me how you calibrated them and which algorithm you used?

    Many thanks! :)

    stale 
    opened by kakghiroshi 3
  • Question about Lidar-Inertial System Failure Detection

    @TixiaoShan
    Thanks for your great work!
    In the LVI-SAM paper, this algorithm adapted LIS(LiDAR inertial system) fail detection.

    However, maybe because I'm an newbie, I couldn't find which part of the code it was.
    Which part of the code has the LIS fail detection part implemented?

    Thanks,

    stale 
    opened by Taeyoung96 2