Source code of the paper Fast and Robust Iterative Closest Point.

Overview

Fast-Robust-ICP

This repository includes the source code of the paper Fast and Robust Iterative Closest Point.

Authors: Juyong Zhang, Yuxin Yao, Bailin Deng.

This code is protected under patent. It can only be used for research purposes. If you are interested in commercial/for-profit use, please contact Juyong Zhang (the author, email: [email protected]).

This code was written by Yuxin Yao. If you have questions, please contact [email protected].

Compilation

The code is compiled using CMake and requires Eigen. It has been tested on Ubuntu 16.04 with gcc 5.4.0 and on Windows with Visual Studio 2015.

Follow these steps to compile the code:

  1. Make sure Eigen is installed. We recommend version 3.3+.

    • Download Eigen from eigen.tuxfamily.org and extract it into a folder 'eigen' within the 'include' folder. Make sure the files 'include/eigen/Eigen/Dense' and 'include/eigen/unsupported/Eigen/MatrixFunctions' can be found.
    • Alternatively: On Ubuntu, use the command "apt-get install libeigen3-dev" to install Eigen.
  2. Create a build folder 'build' within the root directory of the code

  3. Run cmake to generate the build files inside the build folder, and compile the source code:

    • On Linux, run the following commands within the build folder:
    $ cmake -DCMAKE_BUILD_TYPE=Release ..
    $ make
    
    • On Windows, use the CMake GUI to generate a Visual Studio solution file, and build the solution.
  4. Afterwards, an executable file 'FRICP' should be generated (see the consolidated build sketch below).
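
For reference, here is a minimal end-to-end build sketch on Ubuntu, run from the root directory of the code and using the system Eigen package. The symlink into 'include/eigen' is only an assumption based on the layout described in step 1, and is needed only if CMake does not pick up the system Eigen headers on its own:

    $ sudo apt-get install build-essential cmake libeigen3-dev
    $ ln -s /usr/include/eigen3 include/eigen    # optional: makes 'include/eigen/Eigen/Dense' resolve
    $ mkdir build && cd build
    $ cmake -DCMAKE_BUILD_TYPE=Release ..
    $ make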

Usage

The program is run with four input parameters:

  1. an input file storing the source point cloud;
  2. an input file storing the target point cloud;
  3. an output path where the registered source point cloud and the transformation are stored;
  4. the registration method:
     0: ICP
     1: AA-ICP
     2: Ours (Fast ICP)
     3: Ours (Robust ICP)
     4: ICP point-to-plane
     5: Ours (Robust ICP point-to-plane)
     6: Sparse ICP
     7: Sparse ICP point-to-plane

The last parameter can be omitted, in which case Ours (Robust ICP) will be used by default.

Example:

$ ./FRICP ./data/target.ply ./data/source.ply ./data/res/ 3

Both obj and ply (non-binary encoding) files are supported.
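
As an illustration, the different registration methods can be compared on the same input pair by looping over the method index. This is only a sketch: it assumes the command is run from the repository root, mirrors the example above, and uses a made-up res_$m output directory name (created beforehand, since the output path must exist):

    $ for m in 0 1 2 3 4 5 6 7; do mkdir -p ./data/res_$m/; ./FRICP ./data/target.ply ./data/source.ply ./data/res_$m/ $m; done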

Initialization support

If you have an initial transformation that roughly aligns the input source model with the input target model, you can set use_init=true and set file_init to the name of the initialization file in main.cpp. The initial transformation is a 4x4 matrix [R, t; 0, 1], where R is a 3x3 rotation matrix and t is a 3x1 translation vector. The numbers are stored in 4 rows, separated by spaces within each row (see the sample below); this is the same format as the transformation output by this code. Note that by default the code aligns the centers of gravity of the source and target models before starting the registration, but this step is skipped when an initial transformation is provided. In our experiments, we directly use the transformation matrix file generated by Super4PCS as the initialization file.
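
For illustration, an initialization file describing an identity rotation with a translation of (0.1, 0.0, -0.2) would contain the following four space-separated rows (the numbers are made up for this example; only the layout matters):

    1 0 0 0.1
    0 1 0 0.0
    0 0 1 -0.2
    0 0 0 1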

Citation

Please cite the following paper if it helps your research:

@article{zhang2021fast,
  author={Juyong Zhang and Yuxin Yao and Bailin Deng},
  title={Fast and Robust Iterative Closest Point}, 
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence}, 
  year={2021},
  volume={},
  number={},
  pages={1-1}}

Acknowledgements

The code is adapted from the Sparse ICP implementation released by its authors.

Comments
  • Points on a line as the result

    Hi,

    thank you for this wonderful work. I'm testing this code on my own data. Although the target and source inputs (meshes) already overlap roughly well, I get many points on a line as the registered result. I have uploaded the two files to this drive folder: https://drive.google.com/drive/folders/1Mt26annG3yEZFbAU-xGgik6O0UDAxdC6?usp=sharing Thank you for your clarification in advance!

    opened by MoyGcc 3
  • Initial setup

    Thank you for the work! I'm currently trying to set up the environment to test Fast-Robust-ICP.

    I downloaded CMake and Eigen using the following command lines: sudo apt install cmake (version 3.16.3) and sudo apt-get install libeigen3-dev (version 3.3.7-2).

    I got the following result when compiling the repo with "cmake .":

    -- Found NanoFlann: /home/kidpaul/Fast-Robust-ICP/include ln: '/home/kidpaul/Fast-Robust-ICP/data' and './data' are the same file -- Configuring done -- Generating done -- Build files have been written to: /home/kidpaul/Fast-Robust-ICP

    The issue is that I'm not sure whether I compiled the files properly, since I can't execute the command ./FRICP. (I only got a cmake_install.cmake file when I ran the cmake command, and it doesn't look like the right compilation process.)

    Could you give me some further guidance on how to set up the environment? (I'm running this under Ubuntu 20.04; not sure whether that could be part of the issue.)

    opened by kidpaul94 2
  • error C2338: INVALID_MATRIX_TEMPLATE_PARAMETERS

    When I build the code, several errors occur: error C2719: 'm': formal parameter with __declspec(align('16')) won't be aligned (in icp.h), and error C2719: 'v': formal parameter with __declspec(align('16')) won't be aligned (in icp.h). The cause is that the variable w is defined as VectorX, but the parameter type of the function get_energy is VectorXd (in FRICP.h). When I changed VectorX to VectorXd, a second error occurred: error C2338: INVALID_MATRIX_TEMPLATE_PARAMETERS, again in FRICP.h, and the compiler does not indicate which line is wrong. The compilation platform is Windows VS2013, Release x86. Could you help me solve these problems? Thanks.

    opened by luck-boy1994 1
  • Error when testing with my own data

    Assertion failed: (i>=0) && ( ((BlockRows==1) && (BlockCols==XprType::ColsAtCompileTime) && i<xpr.rows()) ||((BlockRows==XprType::RowsAtCompileTime) && (BlockCols==1) && i<xpr.cols())), file d:\eigen-3.4.0\eigen\src\core\block.h, line 122. I get this assertion failure when testing your algorithm on my own data. What could be the cause, and how can it be resolved?

    opened by huangliang666 0
  • All point-to-plane methods report errors

    Hello, when testing with my own data I found that all of the point-to-plane methods (methods 4, 5 and 7) fail with: Assertion failed: (i>=0) && ( ((BlockRows==1) && (BlockCols==XprType::ColsAtCompileTime) && i<xpr.rows()) ||((BlockRows==XprType::RowsAtCompileTime) && (BlockCols==1) && i<xpr.cols())), file e:\pcl\pcl 1.8.1\3rdparty\eigen\eigen3\eigen\src\core\block.h, line 122. Both of my point clouds contain around 20,000 points, with a scale of 686.818. What could be the cause, and how can it be solved?

    opened by huangliang666 4
  • compile error eigen -> eigen3 and No matching fct nanoflann::KDTreeSingleIndexAdaptorParams

    First error: bugfix in type.h and FRICP.h, changing the entries #include <eigen/...> to #include <eigen3/...>.

    Second error: Fast-Robust-ICP/ICP.h:44:57: error: no matching function for call to 'nanoflann::KDTreeSingleIndexAdaptorParams::KDTreeSingleIndexAdaptorParams(const int&, const size_t&)' for the call index = new index_t(dims, *this, nanoflann::KDTreeSingleIndexAdaptorParams(leaf_max_size, dims)); solution is open, ...

    opened by busybeaver42 2