Continuous-Time LiDAR Odometry

Overview

CT-ICP: Elastic SLAM for LiDAR sensors

(Demo GIFs: ParisLuco and NCLT sequences)

This repository implements CT-ICP (see our article), a lightweight, precise and versatile pure LiDAR odometry.

It is integrated with the python project pyLiDAR-SLAM, which gives access to more datasets. pyLiDAR-SLAM requires the installation of the python binding for CT-ICP (see below).

Installation

Ubuntu
./ct_icp_build.sh Release "Unix Makefiles" ON ON  # Builds the project in "Release" mode, with the "Unix Makefiles" cmake generator, with the python binding and the visualization activated
source env.sh                                     # Set up the environment (.so locations)
./slam -c default_config.yaml                     # Launches the SLAM
Windows 10 (PowerShell)
.\ct_icp_build.bat                  # Builds the project
.\env.bat                           # Setup the environment (.so locations) 
.\slam.exe -c default_config.yaml   # Launches the SLAM

To modify options (viz3d support, python binding) for the Windows script, edit the ct_icp_build.bat file directly.

Python binding

The steps below will install a python package named pyct_icp:

  • Generate the cmake project with the following arguments (modify ct_icp_build.sh):

    • -DWITH_PYTHON_BINDING=ON: activates the option to build the python binding
    • -DPYTHON_EXECUTABLE=<path-to-target-python-exe>: path to the target python executable
  • Go into the build folder (e.g. cd ./cmake-Release)

  • Build the target pyct_icp with make pyct_icp -j6

  • Install the python package with pip install ./src/binding

Note: This step is required to use CT-ICP with pyLiDAR-SLAM.
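
To quickly verify the installation, the following minimal Python check (a sketch; it assumes nothing beyond the package name pyct_icp installed by the step above) should run without errors:

    # Sanity check: the binding should be importable after `pip install ./src/binding`
    import pyct_icp
    print("pyct_icp imported from:", pyct_icp.__file__)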

Install the Datasets

The Datasets are publicly available at: https://cloud.mines-paristech.fr/index.php/s/UwgVFtiTOmrgKp5

Each dataset is a .zip archive containing the PLY scan files with the relative timestamps for each point in the frame and, if available, the ground truth poses.

To install each dataset, simply download and extract the archives on disk. The datasets are redistributions of existing and copyrighted datasets; we only offer a convenient repackaging.
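
Each frame can be inspected directly in Python. Below is a minimal sketch, assuming the third-party plyfile package and the vertex fields x, y, z, timestamp (the field layout discussed in the comments below; the file name is hypothetical):

    # Read one dataset frame and list its fields (requires `pip install plyfile numpy`)
    import numpy as np
    from plyfile import PlyData

    frame = PlyData.read("frame_0001.ply")["vertex"]  # hypothetical frame file
    print(frame.data.dtype.names)                     # expected: ('x', 'y', 'z', 'timestamp')
    xyz = np.stack([frame["x"], frame["y"], frame["z"]], axis=1)
    print(xyz.shape, float(frame["timestamp"].min()), float(frame["timestamp"].max()))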

The available datasets are the following:

Under a Creative Commons Attribution-NonCommercial-ShareAlike LICENSE

  • KITTI (see eval_odometry.php):
    • The most popular benchmark for odometry evaluation.
    • The sensor is a Velodyne HDL-64
    • The frames are motion-compensated (no relative timestamps), so the continuous-time aspect of CT-ICP will not work on this dataset.
    • Contains 21 sequences for ~40k frames (11 with ground truth)
  • KITTI_raw (see eval_odometry.php):
    • The same dataset as KITTI without the motion compensation, thus with meaningful timestamps.
    • The raw data for sequence 03 is not available
  • KITTI_360 (see KITTI-360):
    • The successor of KITTI, contains longer sequences with timestamped point clouds.
    • The sensor is also a Velodyne HDL-64

Permissive LICENSE

  • NCLT (see nclt):
    • Velodyne HDL-32 mounted on a Segway
    • 27 long sequences recorded on the campus of the University of Michigan over a long period
    • Challenging motions (abrupt orientation changes)
    • NOTE: For this dataset, directly download the Velodyne links (e.g. 2012-01-08_vel.tar). Our code directly reads the velodyne_hits.bin file.
  • KITTI-CARLA: (see and cite KITTI-CARLA):
    • 7 sequences of 5000 frames generated using the CARLA simulator
    • Imitates the KITTI sensor configuration (64 channel rotating LiDAR)
    • Simulated motion with very abrupt rotations
  • ParisLuco (published with our work CT-ICP; see below to cite us):
    • A single sequence taken around the Luxembourg Garden
    • HDL-32, with numerous dynamic objects

Running the SLAM

Usage

> chmod +x ./env.sh   # Set permission on unix to run env.sh
> source env.sh       # Set up the environment variables
> ./slam -h           # Display help for the executable

USAGE:

slam  [-h] [--version] [-c <string>] [-d <string>] [-j <int>] [-o <string>] [-p <bool>] [-r <string>]


Where:

-c <string>,  --config <string>
Path to the yaml configuration file on disk

-o <string>,  --output_dir <string>
The Output Directory

-p <bool>,  --debug <bool>
Whether to display debug information (true by default)

--,  --ignore_rest
Ignores the rest of the labeled arguments following this flag.

--version
Displays version information and exits.

-h,  --help
Displays usage information and exits.

Selecting the config / setting the options

To run the SLAM (on Unix; adapt for Windows), follow these steps:

  1. Copy and modify one of the default configs (default_config.yaml, robust_high_frequency_config.yaml or robust_driving_config.yaml) to suit your needs. Notably, change dataset_options.dataset and dataset_options.root_path (a small scripting sketch follows this list).

  2. Launch the SLAM with command: ./slam -c <config file path, e.g. default_config.yaml> # Launches the SLAM on the default config

  3. Find the trajectory (and optionally metrics if the dataset has a ground truth) in the output directory
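
For step 1, a small sketch of editing the config programmatically (using PyYAML; the keys dataset_options.dataset and dataset_options.root_path are the ones named in step 1, and the values below are placeholders to replace with your own):

    # Copy default_config.yaml and point it at your dataset (requires `pip install pyyaml`)
    import yaml

    with open("default_config.yaml") as f:
        config = yaml.safe_load(f)

    config["dataset_options"]["dataset"] = "kitti"          # placeholder dataset name
    config["dataset_options"]["root_path"] = "/data/kitti"  # placeholder dataset root

    with open("my_config.yaml", "w") as f:
        yaml.safe_dump(config, f)

Then launch the SLAM with ./slam -c my_config.yaml as in step 2.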

Citation

If you use our work in your research project, please consider citing:

@misc{dellenbach2021cticp,
  title={CT-ICP: Real-time Elastic LiDAR Odometry with Loop Closure},
  author={Pierre Dellenbach and Jean-Emmanuel Deschaud and Bastien Jacquet and François Goulette},
  year={2021},
  eprint={2109.12979},
  archivePrefix={arXiv},
  primaryClass={cs.RO}
}

TODO

  • Make a first version of the documentation
  • Save both poses for each TrajectoryFrame
  • Fix bugs / Improve code quality (doc/comments/etc...)
  • Add a wiki (documentation on the code)
  • Add point-to-distribution cost
  • Improve the robust regime (go faster and find parameters for robust and fast driving profile)
  • Increase speed
  • Add Unit Tests
  • Github CI
  • Improve visualization / Interaction for the OpenGL Window
  • Improve the python binding (reduce the overhead)
  • Write ROS packaging
Comments
  • How to reproduce results for NCLT

    How to reproduce results for NCLT

    Hello @jedeschaud, sorry to disturb you.

    We are trying to reproduce the results of CT-ICP on the NCLT dataset, and we haven't succeeded so far. How can we obtain this number? (screenshot)

    Checking the implementation, and using the original velodyne_hits.bin file, I also see that you are processing 42764 scans in total, whereas the velodyne_sync folder contains only 28127 scans. How did you evaluate the results of the system?

    Moreover, which ground truth poses were used to carry out the evaluation? According to the implementation, https://github.com/jedeschaud/ct_icp/blob/1ba7ce704e9994d39076089ea3fc0dc4d856fe84/src/ct_icp/dataset.cpp#L151

    // Returns the Path to the Ground Truth file for the given sequence
    // Note: The name of the sequence is not checked
    inline std::string ground_truth_path(const DatasetOptions &options,
                                         const std::string &sequence_name) {
        std::string ground_truth_path = options.root_path;
        if (ground_truth_path.size() > 0 && ground_truth_path[ground_truth_path.size() - 1] != '/')
            ground_truth_path += '/';
    
        switch (options.dataset) {
            case NCLT:
                throw std::runtime_error("Not Implemented!");
        }
        return ground_truth_path;
    }
    

    Thanks a lot in advance

    opened by nachovizzo 15
  • libgflags link error

    libgflags link error

    Hi, thanks for your work. When I run "./slam -c robust_high_frequency_config.yaml", the output is "error while loading shared libraries: libgflags.so.2.2: cannot open shared object file: No such file or directory". I have installed gflags in /usr/local/, and I can't find where gflags is linked in the CMakeLists.txt. Hoping for your reply, thanks a lot.

    opened by HeXu1 12
  • Running ct_icp using another dataset

    Running ct_icp using another dataset

    Hi, I have recently built and run ct_icp successfully!
    But I have difficulty running another dataset.
    I wonder how ct_icp knows the exact timestamps between frames.
    While studying your paper and code, I found something new.

    Like this default_config.yaml (permalink highlighted), I found that the ct_icp code uses the KITTI dataset in the form of frame_####.ply files.
    But it does not work on my own data (*.ply).


    When I visualize it in CloudCompare, the **red** one is **frame_0001.ply**, the **black** one is **frame_0005.ply**.
    We can see that the ego vehicle moves forward.

    (screenshot)


    Here, I found something new.

    In CloudCompare I learned that the KITTI ply files have x, y, z and timestamp fields.

    (screenshot)

    On the other hand, I have my own data like:

    (screenshot)

    I think the difference is whether it has a timestamp field (PLY_FLOAT32).


    Here is my real question!
    Could you briefly explain the PLY format and how ct_icp uses it, if you can?
    Plus, I want to convert my .bag/.pcd files to .ply files that have only the x, y, z and timestamp fields.
    I couldn't find any solution for that.

    Best regards.

    opened by bigbigpark 8
  • Cmake Error in Ubuntu20.04

    Cmake Error in Ubuntu20.04

    $ cmake .. -DCMAKE_BUILD_TYPE=Release -DSUPERBUILD_INSTALL_DIR=/home/lingbo/open-source-project/ct_icp/install -DSLAMCORE_INSTALL_DIR=/home/lingbo/open-source-project/ct_icp/install/SlamCore -DCT_ICP_INSTALL_DIR=/home/lingbo/open-source-project/ct_icp/install/CT_ICP -DEIGEN_DIR=/home/lingbo/open-source-project/ct_icp/install/Eigen3/share/eigen3/cmake
    INFO [Superbuild] -- Successfully found target glog::glog
    INFO [Superbuild] -- Successfully Found Target Eigen3::Eigen
    -- Found required Ceres dependency: Eigen version 3.3.7 in /home/lingbo/open-source-project/ct_icp/install/Eigen3/include/eigen3
    -- Found required Ceres dependency: glog
    -- Found required Ceres dependency: gflags
    -- Found Ceres version: 2.0.0 installed in: /home/lingbo/open-source-project/ct_icp/install/Ceres with components: [EigenSparse, SparseLinearAlgebraLibrary, LAPACK, SuiteSparse, CXSparse, SchurSpecializations, Multithreading]
    INFO [Superbuild] -- Successfully found target Ceres::ceres
    INFO [Superbuild] -- Successfully found target yaml-cpp
    INFO [Superbuild] -- Successfully found target GTest::gtest
    INFO [Superbuild] -- Successfully found target cereal
    INFO [Superbuild] -- Successfully found target tclap::tclap
    INFO [Superbuild] -- Successfully found target tsl::robin_map
    INFO [Superbuild] -- Successfully found target nanoflann::nanoflann
    INFO [Superbuild] -- Successfully found target colormap::colormap
    INFO [Superbuild] -- Successfully found target tinyply::tinyply
    CMake Error at cmake/external.cmake:13 (get_target_property):
      INTERFACE_LIBRARY targets may only have whitelisted properties.
      The property "IMPORTED_RELEASE_LOCATION" is not allowed.
    Call Stack (most recent call first):
      CMakeLists.txt:51 (include)

    INFO Eigen3::Eigen NOTFOUND
    INFO /home/lingbo/open-source-project/ct_icp/install/Ceres/lib/libceres.so.2.0.0
    INFO /home/lingbo/open-source-project/ct_icp/install/glog/lib/libglog.so.0.5.0
    CMake Error at cmake/external.cmake:13 (get_target_property):
      INTERFACE_LIBRARY targets may only have whitelisted properties.
      The property "IMPORTED_RELEASE_LOCATION" is not allowed.
    Call Stack (most recent call first):
      CMakeLists.txt:51 (include)

    INFO tsl::robin_map NOTFOUND
    INFO /home/lingbo/open-source-project/ct_icp/install/yaml-cpp/lib/libyaml-cpp.so.0.6.3
    CMake Error at cmake/external.cmake:13 (get_target_property):
      INTERFACE_LIBRARY targets may only have whitelisted properties.
      The property "IMPORTED_RELEASE_LOCATION" is not allowed.
    Call Stack (most recent call first):
      CMakeLists.txt:51 (include)

    INFO colormap::colormap NOTFOUND
    INFO /home/lingbo/open-source-project/ct_icp/install/tinyply/lib/libtinyply.so
    INFO -- [CT-ICP] -- Appending to the INSTALL RPATH the RPATH to the external libraries: [:/home/lingbo/open-source-project/ct_icp/install/Ceres/lib:/home/lingbo/open-source-project/ct_icp/install/glog/lib:/home/lingbo/open-source-project/ct_icp/install/yaml-cpp/lib:/home/lingbo/open-source-project/ct_icp/install/tinyply/lib]
    INFO [CT_ICP] -- "WITH_GTSAM=OFF gtsam dependent targets will not be built"
    -- Configuring incomplete, errors occurred!
    See also "/home/lingbo/open-source-project/ct_icp/cmake-build-release/CMakeFiles/CMakeOutput.log".
    See also "/home/lingbo/open-source-project/ct_icp/cmake-build-release/CMakeFiles/CMakeError.log".

    opened by lingbo-yu 7
  • Superbuild issue

    Superbuild issue

    When I ran the superbuild I hit this issue:

    $ cmake --build . --config Release
    Scanning dependencies of target MappingResearchKEU_superbuild
    [ 12%] Creating directories for 'MappingResearchKEU_superbuild'
    [ 25%] Performing download step (git clone) for 'MappingResearchKEU_superbuild'
    Cloning into 'MappingResearchKEU_superbuild'...
    Already on 'master'
    Your branch is up to date with 'origin/master'.
    [ 37%] No patch step for 'MappingResearchKEU_superbuild'
    [ 50%] Performing update step for 'MappingResearchKEU_superbuild'
    Current branch master is up to date.
    [ 62%] Performing configure step for 'MappingResearchKEU_superbuild'
    -- The C compiler identification is GNU 9.4.0
    -- The CXX compiler identification is GNU 9.4.0
    -- Check for working C compiler: /usr/bin/cc
    -- Check for working C compiler: /usr/bin/cc -- works
    -- Detecting C compiler ABI info
    -- Detecting C compiler ABI info - done
    -- Detecting C compile features
    -- Detecting C compile features - done
    -- Check for working CXX compiler: /usr/bin/c++
    -- Check for working CXX compiler: /usr/bin/c++ -- works
    -- Detecting CXX compiler ABI info
    -- Detecting CXX compiler ABI info - done
    -- Detecting CXX compile features
    -- Detecting CXX compile features - done
    INFO [Superbuild] -- [Generation] -- Generating GTest dependency
    INFO [Superbuild] -- [Generation] -- Generating GLOG dependency
    INFO [Superbuild] -- [Generation] -- Generating Eigen3 dependency
    INFO [Superbuild] -- [Generation] -- Generating Ceres dependency
    INFO [Superbuild] -- [Generation] -- Generating yaml-cpp dependency
    INFO [Superbuild] -- [Generation] -- Generating cereal dependency
    INFO [Superbuild] -- [Generation] -- Generating tessil dependency
    INFO [Superbuild] -- [Generation] -- Generating nanoflann dependency
    INFO [Superbuild] -- [Generation] -- Generating tclap dependency
    CMake Error at CMakeLists.txt:527 (add_library):
      add_library INTERFACE library requires no source arguments.

    CMake Error at CMakeLists.txt:528 (target_include_directories):
      Cannot specify include directories for target "tclap" which is not built by this project.

    CMake Error at CMakeLists.txt:468 (install):
      install TARGETS given target "tclap" which does not exist.
    Call Stack (most recent call first):
      CMakeLists.txt:531 (SUPERBUILD_INSTALL_TARGET)

    INFO [Superbuild] -- [Generation] -- Generating colormap dependency
    INFO [Superbuild] -- [Generation] -- Generating tinyply dependency
    -- Configuring incomplete, errors occurred!
    See also "/home/user/rosProject/ct_icp/.cmake-build-superbuild/MappingResearchKEU_superbuild/src/MappingResearchKEU_superbuild-build/CMakeFiles/CMakeOutput.log".
    make[2]: *** [CMakeFiles/MappingResearchKEU_superbuild.dir/build.make:107: MappingResearchKEU_superbuild/src/MappingResearchKEU_superbuild-stamp/MappingResearchKEU_superbuild-configure] Error 1
    make[1]: *** [CMakeFiles/Makefile2:76: CMakeFiles/MappingResearchKEU_superbuild.dir/all] Error 2
    make: *** [Makefile:84: all] Error 2

    My Ubuntu version is 20.04.

    opened by liangyongshi 7
  • kitti timestamps

    kitti timestamps

    Great appreciation for opening your great work! First, can you provide the dataset password to me? No one replied by mail. Second, when I look at the KITTI website, the raw point cloud data does not have time information except the scan begin and end times. How do you get the time information?

    Thanks a lot, best wishes!

    good first issue 
    opened by dongfangzhou1108 7
  • how can i use kitti odometry data?

    how can i use kitti odometry data?

    Hi, I appreciate your great work. But when I use the KITTI data, which have no distortion (http://www.cvlibs.net/datasets/kitti/eval_odometry.php), there is no need to update the keypoints when running the CT_ICP_GN() function, and I also think we do not need the pose and velocity constraints from the paper. What should I do to modify the code?

    thanks a lot

    opened by dongfangzhou1108 6
  • fail to clone imgui.git

    fail to clone imgui.git

    Hi, thanks for your work. It seems that downloading imgui.git failed when I ran "./ct_icp_build.sh Release "Unix Makefiles" ON ON".

    The terminal outputs:

    [ 11%] Performing download step (git clone) for 'imgui-populate'
    Cloning into 'imgui-src'...
    Permission denied (publickey).
    fatal: Could not read from remote repository.
    Please make sure you have the correct access rights and the repository exists.
    Cloning into 'imgui-src'...
    Permission denied (publickey).
    fatal: Could not read from remote repository.
    Please make sure you have the correct access rights and the repository exists.
    Cloning into 'imgui-src'...
    Permission denied (publickey).
    fatal: Could not read from remote repository.
    Please make sure you have the correct access rights and the repository exists.
    -- Had to git clone more than once: 3 times.
    CMake Error at imgui-subbuild/imgui-populate-prefix/tmp/imgui-populate-gitclone.cmake:31 (message):
      Failed to clone repository: 'git@github.com:pierdell/imgui.git'

    hope for your reply. thanks.

    opened by HeXu1 6
  • Why does my Elapsed Search Neighbors cost so much? Can you answer for me, thanks

    Why does my Elapsed Search Neighbors cost so much? Can you answer for me, thanks

    Thanks for opening your code.

    Number of points in sub-sampled frame: 35679 / 126902
    Initial ego-motion distance: 0
    Elapsed Normals: 35.0737
    Elapsed Search Neighbors: 567.692
    Elapsed A Construction: 0
    Elapsed Select closest: 2.18221
    Elapsed Solve: 0.222377
    Elapsed Solve: 8.04642
    Number iterations CT-ICP : 5
    Elapsed Elastic_ICP: 681.094
    Number of Keypoints extracted: 4640 / Actual number of residuals: 4180
    Trajectory correction [begin(t) - end(t-1)]: 0
    Final ego-motion distance: 0.29746
    Average Load Factor (Map): 0.324829
    Number of Buckets (Map): 16384
    Number of points (Map): 39514
    Elapsed Time: 683.08 (ms)

    Here is my log info for one frame. Why does my Elapsed Search Neighbors cost so much?

    best wishes

    good first issue 
    opened by dongfangzhou1108 6
  • GN vs Ceres Optimization

    GN vs Ceres Optimization

    Hello, thanks for this great work! I saw that the default optimization (in default_config.yaml) uses Gauss-Newton, and for the robust configs Ceres is used. The Jacobians for the rotation look to me like an approximation. Is this true, and if so, do you think the approximation error is relevant?

    Is Ceres mainly used for the robust loss functions, or also to get better Jacobians via autodiff?

    Thanks and best regards, Louis

    opened by louis-wiesmann 4
  • How to run this on MulRan dataset

    How to run this on MulRan dataset

    Hello again, thanks for your contribution. I'm trying to run this code on the MulRan dataset without success.

    If you ever ran this code on the dataset, can you please provide some pointers?

    I also tried to run with the "PLY_DIRECTORY" option, with no success:

    WARNING: Logging before InitGoogleLogging() is written to STDERR
    I20220725 11:23:17.709126 2555704 slam.cpp:351] Creating directory .outputs/
    I20220725 11:23:17.713934 2555704 dataset.cpp:278] Found Sequence PLY_DIR
    terminate called after throwing an instance of 'std::runtime_error'
      what():  PLY Directory is not supported by read_pointcloud. See the DirectoryIterator.
    [1]    2555704 IOT instruction (core dumped)  ./slam -c default_config.yaml
    

    Inspecting the code, I see there is no method to read the PLY directory. https://github.com/jedeschaud/ct_icp/blob/1ba7ce704e9994d39076089ea3fc0dc4d856fe84/src/ct_icp/dataset.cpp#L306

    Related to #27, #26, #21

    opened by nachovizzo 3
  • Tune slam output velocity

    Tune slam output velocity

    Hello, I am able to run the SLAM and it works really well, but especially at the beginning of the motion I am facing a possible computation-time problem. I am using an Ouster OS0-64. Concerning the motion on Y and Z I did not notice any problem, but when I walk with my sensor for 10 m, X remains around the initial value until I reach ~7 m. After that the output value starts to change really fast, reaches the correct value, and maintains it for almost all of the test. To be more clear:

    Real distance (m) | 0 | 0.5 | 1.0  | 1.5   | 2.0   | 2.5 | 3.0  | 3.5  | 4.0  | 4.5  | 5.0  | 5.5 | 6.0  | 6.5  | 7.0 | 8.0 | 9.0 | 10.0 | 10.0 | 10.0
    /ct_icp/pose (m)  | 0 | 0.1 | -0.2 | 0.015 | -0.03 | 0.3 | 0.40 | -0.2 | 0.10 | 0.15 | 0.16 | 0.2 | 0.10 | 0.15 | 1.0 | 3.0 | 5.0 | 7.0  | 9.0  | 10.0
    

    My PC's CPUs are not saturated and run at 4.8 GHz, so I do not think it is a hardware problem; the RAM occupation is also low (7 GB / 16 GB).

    Is there a portion of code I could modify in order to speed up the computation of the output pose? I have tried to change some parameters in ct_icp/include/ct_icp/odometry.h but nothing changed.

    opened by Cristian-wp 0
  • Python bindings

    Python bindings

    Update the Python bindings, in order to be able to run ct_icp within pyLiDAR-SLAM.

    TODOS:

    SlamCore

    • [x] Point cloud API
    • [x] Basic types
    • [x] IO methods
    • [x] Algorithms

    CT-ICP

    • [x] Datasets
    • [x] Map & Neighborhood
    • [x] CT_ICP
    • [x] Odometry

    TESTS, CI, Doc

    • [ ] Add test_binding.py to the Github CI
    • [x] Documentation on the Readme / Wiki for the python bindings
    • [ ] Regression tests on several (shorten) datasets
    opened by pdell-kitware 0
  • How to run with pyLiDAR-SLAM on KITTI-corrected?

    How to run with pyLiDAR-SLAM on KITTI-corrected?

    Hello authors. Thank you all for your excellent work. It's amazing!

    I can reproduce the results of KITTI-corrected in your paper (Table I), and I wish to run your ct_icp with pyLiDAR-SLAM (as it provides a loop-closure function, if I understand your code correctly). However, in your repo and pyLiDAR-SLAM, I cannot find any script to execute pyLiDAR-SLAM with your ct_icp module.

    Could you please let me know how to run it or if you could directly upload the pose results of KITTI-corrected (with loop closure)?

    I sincerely appreciate your work and consideration! Thanks again!

    opened by samsdolphin 5
  • Run ct_icp on Jetson Nano

    Run ct_icp on Jetson Nano

    Hello, I would like to run this SLAM on my Jetson Nano 4 GB. I have managed to install and build it, but even when using Ceres as the solver, I cannot manage to run the solver on the board's GPU. I know that only some Ceres options are currently supported by CUDA:

    "CUDA If you have an NVIDIA GPU then Ceres Solver can use it accelerate the solution of the Gauss-Newton linear systems using the CMake flag USE_CUDA. Currently this support is limited to using the dense linear solvers that ship with CUDA. As a result GPU acceleration can be used to speed up DENSE_QR, DENSE_NORMAL_CHOLESKY and DENSE_SCHUR. This also enables CUDA mixed precision solves for DENSE_NORMAL_CHOLESKY and DENSE_SCHUR. Optional."

    So, I would like to know which dense linear solver you used.

    opened by Cristian-wp 3
  • Parameters meaning

    Parameters meaning

    Hi, I need to change the configuration for my dataset. Where can I find the meaning of the parameters? Is there an explanation of how they influence the output?

    opened by Cristian-wp 2
  • Run ct_icp on Darpa tunnel circuit datasets

    Run ct_icp on Darpa tunnel circuit datasets

    Hi, I have managed to make the SLAM work on the DARPA urban datasets, and now I am trying to test it on the tunnel circuits. As your links say, they are all compressed, so I decompressed all of them with rosbag decompress file.bag. I have correctly remapped the point cloud topic; the lidar model and frequency are the same, and at the moment I have not changed anything inside the param yaml file. When I launch the simulation it starts, but the odometry topic /ct_icp/pose/odom is not published. As you can see from the following images, the tf from odom to base_link is also not generated.

    This image shows the TF from the urban datasets: (screenshot)

    This one shows the TF from the tunnel: (screenshot)

    This one is a screenshot from RViz after some seconds: as you can see, the position frame is not published even though the robot is already inside the tunnel (the robot starts outside the tunnel).

    (screenshot)

    This one shows the topics from rqt: (screenshot)

    What am I doing wrong? Can someone please help me?

    opened by Cristian-wp 2