
Overview

Point Cloud Denoising

Figure: raw point cloud (input), per-point segmentation into valid/clear, fog, and rain, and the de-noised output.

Abstract

Lidar sensors are frequently used in environment perception for autonomous vehicles and mobile robotics to complement camera, radar, and ultrasonic sensors. Adverse weather conditions significantly impair the performance of lidar-based scene understanding by causing undesired measurement points, which in turn lead to missed detections and false positives. In heavy rain or dense fog, water drops can be misinterpreted as objects in front of the vehicle, bringing a mobile robot to a full stop. In this paper, we present the first CNN-based approach to understand and filter out such adverse weather effects in point cloud data. Using a large data set obtained in controlled weather environments, we demonstrate a significant performance improvement of our method over state-of-the-art approaches based on geometric filtering.

Download Dataset

Information: Click here for registration and download.

Dataset Information

  • each channel contains a matrix with 32x400 values, ordered in layers and columns
  • the coordinate system is based on the conventions for land vehicles, DIN ISO 8855 (Wikipedia)

hdf5 channels:
  • labels_1: ground-truth labels (0: no label, 100: valid/clear, 101: rain, 102: fog)
  • distance_m_1: distance in meters
  • intensity_1: raw intensity of the sensor
  • sensorX_1: x-coordinates in the projected 32x400 view
  • sensorY_1: y-coordinates in the projected 32x400 view
  • sensorZ_1: z-coordinates in the projected 32x400 view

hdf5 attributes:
  • dateStr: date of the recording (yyyy-mm-dd)
  • timeStr: timestamp of the recording (HH:MM:SS)
  • meteorologicalVisibility_m: ground-truth meteorological visibility in meters, provided by the climate chamber
  • rainfallRate_mmh: ground-truth rainfall rate in mm/h, provided by the climate chamber

# example for reading the hdf5 attributes (ground-truth weather information)
import h5py

with h5py.File(filename, "r", driver="core") as hdf5:
    # copy the file attributes (dateStr, timeStr, meteorologicalVisibility_m, rainfallRate_mmh) into a dict
    weather_data = dict(hdf5.attrs)
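
The per-frame channels can be read in the same way. Below is a minimal sketch (the file name is only a placeholder) that loads the projected 32x400 channels and stacks the coordinates into an N x 3 point cloud:

# example for reading the projected 32x400 channels (file name is a placeholder)
import h5py
import numpy as np

with h5py.File("LidarImage_000000001.hdf5", "r") as hdf5:
    labels = hdf5["labels_1"][()]        # 0: no label, 100: valid/clear, 101: rain, 102: fog
    distance = hdf5["distance_m_1"][()]  # distance in meters
    intensity = hdf5["intensity_1"][()]  # raw sensor intensity
    x = hdf5["sensorX_1"][()]
    y = hdf5["sensorY_1"][()]
    z = hdf5["sensorZ_1"][()]

# flatten the projected view into an N x 3 point cloud
points = np.stack([x, y, z], axis=-1).reshape(-1, 3)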

Getting Started

We provide documented tools for visualization in Python using ROS. To use them, you first need to install ROS and the rospy client API.

  • install rospy
apt install python-rospy  

Then start "roscore" and "rviz" in separate terminals.

Afterwards, you can use the visualization tool:

  • clone the repository:
cd ~/workspace
git clone https://github.com/rheinzler/PointCloudDeNoising.git
cd ~/workspace/PointCloudDeNoising
  • create a virtual environment:
mkdir -p ~/workspace/PointCloudDeNoising/venv
virtualenv --no-site-packages -p python3 ~/workspace/PointCloudDeNoising/venv
  • source virtual env and install dependencies:
source ~/workspace/PointCloudDeNoising/venv/bin/activate
pip install -r requirements.txt
  • start visualization:
cd src
python visu.py
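
If you just want to inspect a single frame without the repository's tooling, the point cloud can also be published to rviz directly. The following is an independent minimal sketch, not the repository's visu.py; the file name, topic, and frame id are placeholders:

# minimal sketch: publish one hdf5 frame as a PointCloud2 for rviz
# (independent of the repository's visu.py; file name, topic, and frame id are placeholders)
import h5py
import numpy as np
import rospy
from sensor_msgs import point_cloud2
from sensor_msgs.msg import PointCloud2
from std_msgs.msg import Header

rospy.init_node("point_cloud_denoising_viewer")
pub = rospy.Publisher("/lidar/points", PointCloud2, queue_size=1)

with h5py.File("LidarImage_000000001.hdf5", "r") as hdf5:
    xyz = np.stack([hdf5["sensorX_1"][()],
                    hdf5["sensorY_1"][()],
                    hdf5["sensorZ_1"][()]], axis=-1).reshape(-1, 3)

header = Header(frame_id="base_link", stamp=rospy.Time.now())
msg = point_cloud2.create_cloud_xyz32(header, xyz.astype(np.float32).tolist())

rate = rospy.Rate(1)  # republish at 1 Hz so rviz keeps showing the cloud
while not rospy.is_shutdown():
    pub.publish(msg)  # in rviz: set the fixed frame to "base_link" and add a PointCloud2 display
    rate.sleep()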

Notes:

  • We used the following label mapping for a single lidar point: 0: no label, 100: valid/clear, 101: rain, 102: fog (see the sketch after this list for remapping these values to contiguous class indices)
  • Before executing the script, change the input path to point to your local copy of the dataset
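
For training or evaluation it is often convenient to remap the raw label values to contiguous class indices. This is a minimal sketch; the chosen index order and the ignore value are our own assumptions, not part of the dataset definition:

# remap raw label values to contiguous class indices (index order and ignore value are assumptions)
import numpy as np

LABEL_MAP = {0: -1, 100: 0, 101: 1, 102: 2}  # -1: ignore/no label, 0: valid/clear, 1: rain, 2: fog

def remap_labels(labels_1):
    # map a 32x400 labels_1 matrix to class indices; unlabeled points become -1
    remapped = np.full(labels_1.shape, -1, dtype=np.int64)
    for raw, idx in LABEL_MAP.items():
        remapped[labels_1 == raw] = idx
    return remapped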

Reference

If you find our work on lidar point cloud de-noising in adverse weather useful for your research, please consider citing our work:

@article{PointCloudDeNoising2020, 
  author   = {Heinzler, Robin and Piewak, Florian and Schindler, Philipp and Stork, Wilhelm},
  journal  = {IEEE Robotics and Automation Letters}, 
  title    = {CNN-based Lidar Point Cloud De-Noising in Adverse Weather}, 
  year     = {2020}, 
  keywords = {Semantic Scene Understanding;Visual Learning;Computer Vision for Transportation}, 
  doi      = {10.1109/LRA.2020.2972865}, 
  ISSN     = {2377-3774}
}

Acknowledgements

This work has received funding from the European Union under the H2020 ECSEL Programme as part of the DENSE project, contract number 692449. We thank Velodyne Lidar, Inc. for permission to publish this dataset.

Feedback/Questions/Error reporting

Feedback? Questions? Any problems or errors? Please do not hesitate to contact us!

Comments
  • Seems like the dataset is not available?

    Hi all, I am wondering whether anyone has downloaded the dataset recently. I followed the tutorial to register on the DENSE dataset webpage and got the download link via email. However, the file behind the link seems to have been removed. Please help me find the correct way to get the dataset :) Thanks.

    opened by giorking 7
  • Sensor Data Question

    First of all, thank you for sharing a great dataset.

    I have a question. The size of the x, y, and z channels of the sensor is 32x400. Can we think of 32 as the number of ring channels of the LiDAR? And I don't understand where the 400 comes from; can you tell me?

    In addition, why is the intensity value of labeled points zero?

    I'll wait for your answer. Thank you.

    opened by Youngsplace0913 4
  • Dataset file pairs: Strongest or last?

    I noticed that some of the files in the dataset have the same name, except one has a suffix _2, as an example: LidarImage_000000001.hdf5 and LidarImage_000000001_2.hdf5. What does that mean? Are those two files describing the same point in time but one contains the strongest and the other the last returns of the lidar? If yes, which is which?

    Also, why do these pairs exist only for a part of the dataset and not all measurements?

    Thanks in advance!

    opened by ptoews 3
  • Dataset comparison and output for PixelAccurateDepthEstimation

    Hi

    I'm wondering if you've released the de-noised output for the chamber data used in the paper Pixel-Accurate Depth Evaluation in Realistic Driving Scenarios (https://ieeexplore.ieee.org/abstract/document/8885465/). The images in that paper look similar to those in the other paper. Please let me know whether there is a correspondence and which split I should refer to.

    Thanks!

    opened by anupamsobti 3
  • Release of the data!

    Hi, thanks for your amazing work! Adverse weather has been a challenging problem in the 3D field. The experiments in your paper give reasonable results for these scenes, which made me eager to try this task. Would you please release the dataset, or give us a schedule of planned releases? Any help would be greatly appreciated!

    opened by shuangjiexu 3
  • Question on data

    Hi, thanks for your great work!

    I recently downloaded your dataset, but I came across a problem when trying to re-implement your work.

    How do I project the point cloud data to the image plane? There is no calibration file.

    Thanks, best regards

    opened by russellyq 2
  • Dataset splits

    Hi,

    I'm trying to replicate experiment 2 from the paper where about 31k road samples were used. First of all, the uploaded dataset contains only 27462 road samples, so I assume the remaining ~4k are taken from the LiLaNet dataset.

    Since that one isn't public, is it possible to publish the chosen selection, so that I can either take it from the full dataset if I get access to it or request the subset from the LiLaNet authors?

    Furthermore, the uploaded dataset contains only road samples in the training directories. Which samples were used for evaluation?

    Lastly, where do the ~103k samples in experiment 3 come from? Is it based on the ~34k clear frames, where each was augmented with both fog and rain? If yes, where do the additional ~3k frames compared to experiment 2 come from?

    Thanks in advance!

    opened by ptoews 2
  • FogChamber static scenes labels

    In the paper it is mentioned that 4 realistic static setups are represented in the data from the climate chamber. I presume that the folder names containing Static1, Static2, Static3, Static4 refer to this; however, I noted that the train_01 folder contains only Static1 data, train_02 only Static4, the test_01 folder only Static2, and val_01 only Static3. Is there any reason for this? Shouldn't the training data contain data from all 4 static setups?

    opened by motapinto 1
  • Dataset splits labels

    What are the differences between train_01 and train_02? The train_road data is also split into two parts, despite being unlabeled.

    Could you explain why there is this separation and what the _01 and _02 splits correspond to?

    opened by motapinto 1
  • Data sources

    In DENSE datasets page it mentions the following:

    "For the challenging task of lidar point cloud de-noising, we rely on the Pixel Accurate Depth Benchmark and the Seeing Through Fog(STF) dataset recorded under adverse weather conditions like heavy rain or dense fog. In particular, we use the point clouds from a Velodyne VLP32c lidar sensor."

    Could you further explain please?

    In particular, my questions are the following:

    1. Is the data exclusively from those 2 sources (mentioned on the DENSE webpage), except for the train_road_01 and train_road_02 folders, which contain the road data you collected and are not labeled, or does it add further data? This is unclear, especially since the dataset from this repository contains 6.6k clear weather frames (train, test, val folders), but the STF (Seeing Through Fog) dataset mentions using only 5.5k clear weather frames for the real-world recording, which is not used. The climate chamber data it uses contains only 364 clear weather frames according to that paper (TABLE I). So, does this mean that the other 6.3k clear weather frames come from the Pixel-Accurate Depth Evaluation in Realistic Driving Scenarios dataset?
    2. What part of the data from those papers is being used? In #13 you suggest using the STF road data, but the DENSE datasets page mentions that the Seeing Through Fog dataset is being used, so I presume that only the climate chamber data of the STF dataset is included.
    3. Is the climate chamber data in STF and in "Pixel-Accurate Depth Evaluation in Realistic Driving Scenarios" different from each other, the exact same data, or does it contain some overlap?
    4. Of the data used from both papers/datasets, has the data from either paper been changed, or is it exactly the same in terms of quantity, diversity, and distribution?

    Essentially, I am confused by the different datasets available on the DENSE webpage, and to avoid repetition I am trying to understand the overlaps in the data being used. The DENSE webpage offers the following datasets for download:

    • cnn_denoising (the one referring to this GitHub repo)
    • FogChamberDataset
    • SeeingThroughFog
    • pixel_accurate_depth_benchmark

    Since all these datasets are referred to or mentioned in the paper, the DENSE webpage, and the GitHub issues, could you explain to me, or point me to where I should look, to clear up this confusion?

    Sorry for the long question xD

    opened by motapinto 1
  • There is some doubt about the final output of the network

    First of all, thank you for this excellent work. I have some questions about the final output of the network. Is the final output a 32x400 matrix, the same as the labels read from the dataset, with elements that are also 0, 100, 101, and 102? Or is it some label remapping, or a matrix whose channels represent the categories, like LiLaNet? Thank you.

    opened by Yueziyu 1
  • About the two dynamic scenes in README.md

    Hi @rheinzler, thanks for your great work! In the paper "CNN-based Lidar Point Cloud De-Noising in Adverse Weather", all of the data is static, but there are two dynamic scene inputs at the beginning of the README.md.

    My questions are the following: 1. Where can I download that input data, or similar dynamic continuous point cloud data with rain or fog? Could you point me to where I should look to download the data? 2. Is the input data of the two dynamic scenes also in the image plane with a dimension of 32x400?

    opened by Desmond-code 0
  • How to use the unlabeled data during training?

    Hello @rheinzler, I am trying to reproduce your results and I have some doubts about your training and testing data.

    In the training set we have the weather chamber data with labels [100: clear, 101: rain, 102: fog] and the unlabeled data from the road scenes [0: unlabeled]. During training, how do you treat the unlabeled data? Do you remap it to the clear class [100], or do you ignore it during the loss calculation?

    Similarly, in the evaluation section you mention that:

    "Note, all evaluations are based on the test data set from experiment 2, which contains autolabeled annotations and road data without fog, rain or augmentation."

    However, the classes taken into consideration during the evaluation are only [100: clear, 101: rain, 102: fog]. Did you only evaluate on the weather chamber data, or did you remap the unlabeled points to the clear class?

    opened by aldipiroli 0
  • Strongest and latest return labels

    Hello, first of all, thank you for publishing the data, which is extremely helpful for researchers.

    In #8 it was noted that the clear weather data is the only one that contains both the strongest and the latest return of the LiDAR; however, it was not specified which label is used for the strongest and which for the latest return. Since the other weather conditions use the strongest return only, I assume that in the clear weather data, the files with _2 represent the latest return and the normal files represent the strongest. Is that correct?

    opened by motapinto 0
  • Problems with loading dataset

    Hi everyone! My issue is similar to #4. I followed the tutorial to register on the DENSE dataset webpage and got the download link via email. I followed the link and was able to navigate through the directories, but when I try to download any file, I find that the file has a size of 0 bytes.

    It seems like all files were removed from the cloud. Please help me to get access to the dataset. Thanks!

    opened by tamerlan-b 1