Poisson Surface Reconstruction for LiDAR Odometry and Mapping

Overview

[Figure: qualitative comparison between the different mapping techniques (surfels/SuMa, TSDF, and our approach/PUMA) for sequence 00 of the KITTI odometry benchmark.]

This repository implements the algorithms described in our paper Poisson Surface Reconstruction for LiDAR Odometry and Mapping.

This is a LiDAR Odometry and Mapping pipeline that uses the Poisson Surface Reconstruction algorithm to build the map as a triangular mesh.

We propose a novel frame-to-mesh registration algorithm in which we compute the poses of the vehicle by estimating the 6 degrees of freedom of the LiDAR. To achieve this, we project each scan onto the triangular mesh by computing the ray-to-triangle intersections between each point in the input scan and the map mesh. We accelerate this ray-casting technique using a Python wrapper of the Intel® Embree library.
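
For illustration, the following is a minimal sketch (not the repository's actual code) of how such a ray-casting query can be issued from Python with trimesh, which uses the pyembree wrapper of Embree when it is installed; the file names and the scan array layout are assumptions:

# Illustrative sketch only: cast the points of one scan as rays against the map mesh.
import numpy as np
import trimesh

mesh = trimesh.load("map_mesh.ply")   # current triangular map mesh (assumed file name)
scan = np.load("scan_xyz.npy")        # Nx3 scan points in the sensor frame (assumed)

origins = np.zeros_like(scan)         # all rays start at the sensor origin
directions = scan / np.linalg.norm(scan, axis=1, keepdims=True)

# First triangle hit by each ray; the hit locations serve as point-to-mesh correspondences.
locations, ray_ids, tri_ids = mesh.ray.intersects_location(
    ray_origins=origins, ray_directions=directions, multiple_hits=False
)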

The main intended application of our research is autonomous driving vehicles.

Running the code

NOTE: All the commands assume you are working from this shared workspace; therefore, first cd apps/ before running anything.

Requirements: Install docker

If you plan to use our docker container you only need to install docker and docker-compose.

If you don't want to use docker and prefer to install puma locally, you might want to visit the Installation Instructions.

Datasets

First, you need to indicate where all your datasets are located. To do so, just run:

export DATASETS=<full-path-to-datasets-location>

This environment variable is shared between the docker container and your host system (in a read-only fashion).

So far we've only tested our approach on the KITTI Odometry benchmark dataset and the Mai city dataset. Both datasets use a 64-beam Velodyne-like LiDAR.

Building the apps docker container

This container is in charge of running the apps and needs to be built with your user and group id (so you can share files). Building this container is straightforward thanks to the provided Makefile:

make

If you want to inspect the image, you can get an interactive shell by running make run, but it's not mandatory.

Converting from .bin to .ply

All our apps use the PLY format, which is also binary but has much better support than raw binary files. Therefore, you will need to convert all your data before running any of the apps available in this repo.

docker-compose run --rm apps bash -c '\
    ./data_conversion/bin2ply.py \
    --dataset $DATASETS/kitti-odometry/dataset/ \
    --out_dir ./data/kitti-odometry/ply/ \
    --sequence 07
    '

Please change the --dataset option to point to where you have the KITTI dataset.
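
As a rough sketch of what this conversion amounts to (an assumption-based illustration, not the actual bin2ply.py implementation), each KITTI .bin scan is an Nx4 float32 array of x, y, z, and intensity, which can be written as a PLY point cloud with Open3D, storing the intensity in the color channel:

# Assumed sketch of the .bin -> .ply conversion; file names are placeholders.
import numpy as np
import open3d as o3d

scan = np.fromfile("000000.bin", dtype=np.float32).reshape(-1, 4)  # x, y, z, intensity
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(scan[:, :3])
# Encode the intensity value in the color channel, as the help text below describes.
pcd.colors = o3d.utility.Vector3dVector(np.tile(scan[:, 3:4], (1, 3)))
o3d.io.write_point_cloud("000000.ply", pcd)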

Running the puma pipeline

Go grab a coffee/mate, this will take some time...

docker-compose run --rm apps bash -c '\
    ./pipelines/slam/puma_pipeline.py  \
    --dataset ./data/kitti-odometry/ply \
    --sequence 07 \
    --n_scans 40
    '
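
Under the hood, the pipeline repeatedly runs a screened Poisson surface reconstruction (PSR) over the accumulated local map and crops vertices with low density support. A simplified sketch of that step, using Open3D's API with illustrative parameter names rather than puma's exact code, looks like this:

# Simplified, assumption-based sketch of the per-local-map meshing step.
# The input point cloud must already have normals (e.g. via pcd.estimate_normals()).
import numpy as np
import open3d as o3d

def poisson_mesh_sketch(pcd, depth=10, min_density=0.1):
    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=depth
    )
    densities = np.asarray(densities)
    # Remove poorly supported vertices, i.e. surface hallucinated far from the data.
    mesh.remove_vertices_by_mask(densities < np.quantile(densities, min_density))
    return mesh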

Inspecting the results

The pipelines/slam/puma_pipeline.py script will generate 3 files on your host system:

results
├── kitti-odometry_07_depth_10_cropped_p2l_raycasting.ply # <- Generated Model
├── kitti-odometry_07_depth_10_cropped_p2l_raycasting.txt # <- Estimated poses
└── kitti-odometry_07_depth_10_cropped_p2l_raycasting.yml # <- Configuration

You can open the .ply with Open3D, Meshlab, CloudCompare, or the tool you like the most.
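
For example, a minimal Open3D snippet for viewing the generated model (the file name is taken from the listing above) could look like this:

import open3d as o3d

mesh = o3d.io.read_triangle_mesh(
    "results/kitti-odometry_07_depth_10_cropped_p2l_raycasting.ply"
)
mesh.compute_vertex_normals()  # needed for proper shading
o3d.visualization.draw_geometries([mesh])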

Where to go next

If you have already installed puma, then it's time to look at the standalone apps. These apps are executable command line interfaces (CLIs) for interacting with the core puma code:

├── data_conversion
│   ├── bin2bag.py
│   ├── kitti2ply.py
│   ├── ply2bin.py
│   └── ros2ply.py
├── pipelines
│   ├── mapping
│   │   ├── build_gt_cloud.py
│   │   ├── build_gt_mesh_incremental.py
│   │   └── build_gt_mesh.py
│   ├── odometry
│   │   ├── icp_frame_2_frame.py
│   │   ├── icp_frame_2_map.py
│   │   └── icp_frame_2_mesh.py
│   └── slam
│       └── puma_pipeline.py
└── run_poisson.py

All the apps should have a usable command line interface, so if you need help you only need to pass the --help flag to the app you wish to use. For example, let's see the help message of the data conversion app bin2ply.py used above:

Usage: bin2ply.py [OPTIONS]

  Utility script to convert from the binary form found in the KITTI odometry
  dataset to .ply files. The intensity value for each measurement is encoded
  in the color channel of the output PointCloud.

  If a given sequence is specified, then it assumes you have a clean copy
  of the KITTI odometry benchmark, because it uses pykitti. If you only have
  a folder with just .bin files the script will most likely fail.

  If no sequence is specified, then it blindly reads all the *.bin files in
  the specified dataset directory.

Options:
  -d, --dataset PATH   Location of the KITTI dataset  [default:
                       /home/ivizzo/data/kitti-odometry/dataset/]

  -o, --out_dir PATH   Where to store the results  [default:
                       /home/ivizzo/data/kitti-odometry/ply/]

  -s, --sequence TEXT  Sequence number
  --use_intensity      Encode the intensity value in the color channel
  --help               Show this message and exit.

Citation

If you use this library for any academic work, please cite the original paper.

@inproceedings{vizzo2021icra,
author    = {I. Vizzo and X. Chen and N. Chebrolu and J. Behley and C. Stachniss},
title     = {{Poisson Surface Reconstruction for LiDAR Odometry and Mapping}},
booktitle = {Proc.~of the IEEE Intl.~Conf.~on Robotics \& Automation (ICRA)},
codeurl   = {https://github.com/PRBonn/puma/},
year      = 2021,
}
Comments
  • ‘make’ Errors

    Thank you very much for your great work! When I try to enter ‘make’ in the terminal, something happens as shown in the attached screenshot. Can you help me find out what's wrong? I would appreciate it very much! Sincerely waiting for your reply!

    [screenshot]

    opened by qixuema 10
  • Failed to run puma_pipeline on our datasets.

    Hello there,

    I have successfully run the demo on docker, but when I try to run it on two of our own datasets, I get the same error while running puma_pipeline. The error information is as follows:

    Creating puma_apps_run ... done
    Results will be saved to results/kitti-odometry_07_depth_10_cropped_p2l_raycasting.txt
    [scan #30] Running PSR over local_map:  74%|█████████████████████▌       [00:04<00:01,  6.67 scans/s]
    Traceback (most recent call last):
      File "./pipelines/slam/puma_pipeline.py", line 176, in <module>
        main()
      File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 829, in __call__
        return self.main(*args, **kwargs)
      File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 782, in main
        rv = self.invoke(ctx)
      File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 1066, in invoke
        return ctx.invoke(self.callback, **ctx.params)
      File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 610, in invoke
        return callback(*args, **kwargs)
      File "./pipelines/slam/puma_pipeline.py", line 156, in main
        mesh, _ = create_mesh_from_map(
      File "/usr/local/lib/python3.8/dist-packages/puma/mesh/poisson.py", line 46, in create_mesh_from_map
        return run_poisson(pcd, depth, n_threads, min_density)
      File "/usr/local/lib/python3.8/dist-packages/puma/mesh/poisson.py", line 37, in run_poisson
        vertices_to_remove = densities < np.quantile(densities, min_density)
      File "<__array_function__ internals>", line 5, in quantile
      File "/usr/local/lib/python3.8/dist-packages/numpy/lib/function_base.py", line 3930, in quantile
        return _quantile_unchecked(
      File "/usr/local/lib/python3.8/dist-packages/numpy/lib/function_base.py", line 3937, in _quantile_unchecked
        r, k = _ureduce(a, func=_quantile_ureduce_func, q=q, axis=axis, out=out,
      File "/usr/local/lib/python3.8/dist-packages/numpy/lib/function_base.py", line 3515, in _ureduce
        r = func(a, **kwargs)
      File "/usr/local/lib/python3.8/dist-packages/numpy/lib/function_base.py", line 4050, in _quantile_ureduce_func
        n = np.isnan(ap[-1])
    IndexError: index -1 is out of bounds for axis 0 with size 0
    ERROR: 1
    
    opened by yutouwd 9
  • Questions about the point cloud Poisson reconstruction.

    Thanks for your great work and open code!

    In your code, you accumulate 30 scans of the point cloud and then perform Poisson reconstruction. I want to know if I can do it with a single scan. In other words, can I get a good Poisson reconstruction surface using only one scan of the point cloud? If you have any advice on this question, I will be very grateful!

    Thanks in advance!

    opened by jikerWRN 3
  • The speed gradually slows down!

    Hello, this work looks very good. I ran it successfully with a few minor modifications to fit my environment, but it seems that processing each frame of the point cloud gets slower and slower. Is the reason that every frame is matched against all the previous meshes?

    opened by CuberrChen 3
  • Couldn't connect to Docker daemon at http+docker://localunixsocket - is it running?

    Hello, I have installed docker and docker-compose according to the documentation. In the third step, I tried to build the apps docker container. I entered the 'make' command in the terminal, but an error occurred as shown in the first screenshot. I then entered the 'sudo docker-compose' command in the terminal to solve this problem, but another error occurred as shown in the second screenshot. So I came here to ask for your help. Thank you.

    opened by captain-xuwenqiang 3
  • Which version of Python and Ubuntu should I use?

    1. Should I use Python 3.6.9, the Ubuntu 18 default, to run your script? I noticed that embree.sh uses "pip install", so I wonder whether to use Python 2 or Python 3. 2. Another problem is that trimesh cannot detect the Embree engine even though I have run embree.sh and installed pyembree successfully. I cannot resolve the API "trimesh.ray.has_embree", and I also cannot find it in the official API documentation or source code. Can I just comment it out and ignore it?

    opened by JianWu313 3
  • Accurate mapping while localizing

    Hi @nachovizzo,

    Thank you for your awesome work and open source code!

    From the paper and the code I think it is possible to use PUMA in online localization mode (as mentioned in Section IV-E: "Registration Algorithm"). However, I am wondering if it is possible to create an aligned map at the same time and continue mapping even when there is no longer a map available. Does PUMA already do that? If you have any advice on this question, I will be very grateful!

    Thanks in advance!

    opened by MigVega 2
  • Metrics calculation source code

    Good afternoon! Thank you very much for publishing the source code! You have done a great job and proposed a very interesting SLAM approach. I am particularly interested in your approach to map quality evaluation; it seems very promising to me. I would like to reproduce your metrics and evaluate other mapping algorithms. Could you please share the source code for map quality estimation?

    opened by VitalyyBezuglyj 2
  • question about build_gt_cloud.py

    I want to generate the ground-truth point cloud of the street and compare it with the reconstructed model, but when I execute the following command, there are some errors. If you can, please help solve the following problem. I would appreciate it very much!

    docker-compose run --rm apps bash -c '\
        ./pipelines/mapping/build_gt_cloud.py \
        --dataset ./data/kitti-odometry/ply \
        --sequence 07 \
        --n_scans 40
        '

    [screenshot]

    If this method doesn't work, can you tell me how to generate a street ground-truth point cloud?

    opened by qixuema 2
  • Using docker, PermissionError: [Errno 13] Permission denied: './data/kitti-odometry' ERROR: 1

    Dear authors, thank you for your brilliant work. I am trying to use it based on docker (Linux Ubuntu 18.04). I set $DATASETS as follows:

    ruanjy@ruanjy:~/Workspace/noRosCode_ws/puma/apps$ echo $DATASETS
    /home/ruanjy/HDD/dataset_all/kitti_odometry/data_odometry_velodyne
    

    and when I run (changing the path a little):

    ruanjy@ruanjy:~/Workspace/noRosCode_ws/puma$ docker-compose run --rm apps bash -c '\
    >     ./data_conversion/bin2ply.py \
    >     --dataset $DATASETS/dataset/ \
    >     --out_dir ./data/kitti-odometry/ply/ \
    >     --sequence 07
    >     '
    

    I get the following error:

    Creating puma_apps_run ... done
    Converting .bin scans into .ply fromat from:/data//dataset/ to:./data/kitti-odometry/ply/
    Traceback (most recent call last):
      File "./data_conversion/bin2ply.py", line 111, in <module>
        main()
      File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 829, in __call__
        return self.main(*args, **kwargs)
      File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 782, in main
        rv = self.invoke(ctx)
      File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 1066, in invoke
        return ctx.invoke(self.callback, **ctx.params)
      File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 610, in invoke
        return callback(*args, **kwargs)
      File "./data_conversion/bin2ply.py", line 86, in main
        os.makedirs(out_dir, exist_ok=True)
      File "/usr/lib/python3.8/os.py", line 213, in makedirs
        makedirs(head, exist_ok=exist_ok)
      File "/usr/lib/python3.8/os.py", line 223, in makedirs
        mkdir(name, mode)
    PermissionError: [Errno 13] Permission denied: './data/kitti-odometry'
    ERROR: 1
    

    I find that I can write something into the folder /data as root, but I fail to run the command again with 'sudo'; I think that is because $DATASETS gets reset. When I switch to root via 'su', I get the same error as before. I checked that my user id and group id are both 1000.

    Can you help me to figure out the problem? Thank you very much!

    opened by RuanJY 2
  • pip install --user . with Error: No module named 'pip' & No module named 'pybind11'

    When running "pip install --user .", I got these error messages. Has anyone encountered this before?

    (occu) zlq@qh1:/mnt3/zlq/code/puma$ pip install pybind11
    Requirement already satisfied: pybind11 in /home/zlq/anaconda3/envs/occu/lib/python3.7/site-packages (2.10.0)
    
    (occu) zlq@qh1:/mnt3/zlq/code/puma$ pip install --user .
    Processing /mnt3/zlq/code/puma
      Installing build dependencies ... done
      Getting requirements to build wheel ... error
      error: subprocess-exited-with-error
      
    Getting requirements to build wheel did not run successfully.
    exit code: 1
    > [27 lines of output]
          Traceback (most recent call last):
            File "/home/zlq/anaconda3/envs/occu/bin/pip", line 5, in <module>
              from pip._internal.cli.main import main
          ModuleNotFoundError: No module named 'pip'
          Traceback (most recent call last):
            File "/home/zlq/anaconda3/envs/occu/bin/pip", line 5, in <module>
              from pip._internal.cli.main import main
          ModuleNotFoundError: No module named 'pip'
          256
          256
          Traceback (most recent call last):
            File "/home/zlq/anaconda3/envs/occu/lib/python3.7/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 363, in <module>
              main()
            File "/home/zlq/anaconda3/envs/occu/lib/python3.7/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 345, in main
              json_out['return_val'] = hook(**hook_input['kwargs'])
            File "/home/zlq/anaconda3/envs/occu/lib/python3.7/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 130, in get_requires_for_build_wheel
              return hook(config_settings)
            File "/tmp/pip-build-env-9rghrs16/overlay/lib/python3.7/site-packages/setuptools/build_meta.py", line 338, in get_requires_for_build_wheel
              return self._get_build_requires(config_settings, requirements=['wheel'])
            File "/tmp/pip-build-env-9rghrs16/overlay/lib/python3.7/site-packages/setuptools/build_meta.py", line 320, in _get_build_requires
              self.run_setup()
            File "/tmp/pip-build-env-9rghrs16/overlay/lib/python3.7/site-packages/setuptools/build_meta.py", line 483, in run_setup
              self).run_setup(setup_script=setup_script)
            File "/tmp/pip-build-env-9rghrs16/overlay/lib/python3.7/site-packages/setuptools/build_meta.py", line 335, in run_setup
              exec(code, locals())
            File "<string>", line 8, in <module>
          ModuleNotFoundError: No module named 'pybind11'
          [end of output]
      
      note: This error originates from a subprocess, and is likely not a problem with pip.
    error: subprocess-exited-with-error
    
    Getting requirements to build wheel did not run successfully.
    exit code: 1
    See above for output.
    
    note: This error originates from a subprocess, and is likely not a problem with pip.
    
    
    opened by lqzhao 1
Owner
Photogrammetry & Robotics Lab at the University of Bonn