
Overview

Lidar with Velocity

A robust camera and Lidar fusion based velocity estimator to undistort the pointcloud.

(Figure: scanning pattern)

(Figure: velocity projection)

Related paper: Lidar with Velocity: Motion Distortion Correction of Point Clouds from Oscillating Scanning Lidars (arXiv)

1. Prerequisites

1.1 Ubuntu and ROS. Tested on Ubuntu 18.04 with ROS Melodic.

1.2 Eigen

1.3 Ceres Solver

1.4 OpenCV

2. Build on ROS

Clone the repository and catkin_make:

cd ~/catkin_ws/src
git clone https://github.com/ISEE-Technology/lidar-with-velocity
cd ../
catkin_make
source ~/catkin_ws/devel/setup.bash

3. Directly run

First, download our dataset and extract it under the /catkin_ws/ path.

Replace the "DATASET_PATH" in config/config.yaml with your extracted dataset path, for example (note the trailing "/"):

dataset_path: YOUR_CATKIN_WS_PATH/catkin_ws/data/

Replace the "CONFIG_YAML_PATH" with the path to your config.yaml file, for example:

"YOUR_CATKIN_WS_PATH/catkin_ws/src/lidar-with-velocity/config.yaml"

Then run the following commands:

roscore
rviz -d src/lidar-with-velocity/rviz-cfg/vis.rviz
rosrun lidar-with-velocity main_ros

An RViz window and a PCL Viewer window will open to show the results; press the space key to process the next frame.
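The core of the undistortion step can be sketched as follows: given an object's estimated (assumed constant) velocity and each point's capture timestamp, every point is shifted to where it would be at a common reference time. This is a minimal illustrative sketch, not the repository's actual implementation; all names here are hypothetical.

```python
import numpy as np

def undistort_points(points, timestamps, velocity, t_ref):
    """Shift each point by the object's motion between its capture time
    and the reference time, assuming constant velocity over the frame.

    points:      (N, 3) xyz coordinates
    timestamps:  (N,) capture time of each point (seconds)
    velocity:    (3,) estimated object velocity (m/s)
    t_ref:       reference time to which all points are corrected
    """
    dt = (t_ref - np.asarray(timestamps))[:, None]   # (N, 1) time offsets
    return np.asarray(points) + dt * np.asarray(velocity)

# Toy example: an object moving at 10 m/s along x, scanned over 0.1 s.
# The same physical point, seen at t = 0.0 s and t = 0.05 s, appears
# smeared along x; after correction both samples coincide at t_ref.
pts = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0]])
ts = np.array([0.0, 0.05])
corrected = undistort_points(pts, ts, np.array([10.0, 0.0, 0.0]), t_ref=0.1)
```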

Comments
  • The result using your dataset

    Hi! I tested your code with your dataset, but the result is not as good as in the paper. Is there anything wrong with how I am using the code? As you can see, the distortion of the point cloud is not completely compensated.

    opened by Psyclonus2887 5
  • How about the distortion affect detection?

    Hi, nice work!

    But I have some questions; looking forward to your reply.

    First, a distorted point cloud may affect object detection, which I think is the main reason we need to undistort the point cloud in the first place. But in your paper, I understand you perform the undistortion after detection and MOT; doesn't that reverse cause and effect?

    Second, since we already have the detection and tracking results, why do we still need undistortion? Perhaps to refine the detection results in a boosting fashion? But I think that is costly. Is there any evaluation of the time and computing resources consumed?

    Third, I think the reason you need the camera may be to improve detection AP or tracking performance. But if the point cloud is distorted, won't the calibration precision be affected? Especially at long distances (maybe 50 m or 100 m away?), where calibration precision is already low, the distortion may make it even lower.

    Thanks again; the first question really confuses me and I would appreciate your help.

    opened by mjjdick 2
  • A question about the results in the paper

    Hello, I'm interested in your work! While reading your paper, a question came up that I hope you can answer. In the results of your system (Fig 9, especially the two pictures on the left), I noticed some noise at the tail of the green point cloud (the corrected point cloud). Is this noise caused by the motion distortion correction, or by inaccurate object detection/clustering results? Thanks!


    opened by Liuyaqi99 2
  • The maximum frames can be merged

    Thank you for your excellent work! From the paper, it seems the experiments merge 3 consecutive frames of point clouds from the Livox Horizon. How many frames can be merged at most? Does that limit depend on the velocity and acceleration of the moving objects in the scene?

    opened by Psyclonus2887 1
  • What if there are objects turning left or right?

    Thank you for your work! It seems the method only works when the object moves along its axis. If the object is turning left or right, can this method still fix the distortion?

    opened by Psyclonus2887 0
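On the turning-object question above: a constant-velocity model alone cannot remove rotational distortion, but the same correction idea extends to a rigid-motion model with a yaw rate. The sketch below is hypothetical and not part of this repository; the function, its parameters, and the planar-rotation assumption are all illustrative.

```python
import numpy as np

def undistort_with_rotation(points, timestamps, velocity, omega_z, center, t_ref):
    """Hypothetical extension of constant-velocity undistortion to a turning
    object: apply a planar rotation (yaw rate omega_z, rad/s) about the
    object's center, in addition to the translation by velocity (m/s)."""
    pts = np.asarray(points, dtype=float)
    dt = t_ref - np.asarray(timestamps, dtype=float)
    out = np.empty_like(pts)
    for i, (p, d) in enumerate(zip(pts, dt)):
        ang = omega_z * d                       # yaw accumulated over dt
        c, s = np.cos(ang), np.sin(ang)
        R = np.array([[c, -s, 0.0],
                      [s,  c, 0.0],
                      [0.0, 0.0, 1.0]])         # rotation about z-axis
        out[i] = center + R @ (p - center) + d * np.asarray(velocity)
    return out
```

With omega_z = 0 this reduces exactly to the constant-velocity case, which is one way to sanity-check such an extension.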
Owner: ISEE Research Group @ SUSTech