Synthetic LiDAR sequential point cloud dataset with point-wise annotations

Overview


SynLiDAR dataset: Learning From Synthetic LiDAR Sequential Point Cloud

This is the official repository of the SynLiDAR dataset. For technical details, please refer to:

SynLiDAR: Learning From Synthetic LiDAR Sequential Point Cloud for Semantic Segmentation (Paper)

Aoran Xiao, Jiaxing Huang, Dayan Guan, Fangneng Zhan, Shijian Lu

News

[2021.Jul.28] SynLiDAR is available for download!

Dataset

SynLiDAR is a large-scale synthetic LiDAR sequential point cloud dataset with point-wise annotations. It contains 13 sequences of LiDAR point clouds with around 20k scans (over 19 billion points and 32 semantic classes), collected from virtual urban cities, suburban towns, neighborhoods, and a harbor.


Download (245.3 GB)

  1. You can download SynLiDAR via your browser → DR-NTU

  2. You can also download it with the provided Python script, which requires installing pyDataverse:

pip install pyDataverse
python download.py

Note: most sequences are compressed and split into multiple small files. Download all parts and concatenate them into one file before extraction, e.g. for sequence 01:

cat 01* > 01.tar.gz
tar -zxvf 01.tar.gz
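If cat is unavailable (e.g. on Windows), the parts can be joined with a short Python sketch. The join_parts helper and the file pattern below are illustrative, not part of the dataset tooling:

```python
import glob
import shutil

def join_parts(pattern: str, out_path: str) -> None:
    """Concatenate split archive parts (sorted by file name) into one archive."""
    parts = sorted(glob.glob(pattern))
    with open(out_path, "wb") as out:
        for part in parts:
            with open(part, "rb") as f:
                shutil.copyfileobj(f, out)

# e.g. join_parts("01.tar.gz.*", "01.tar.gz"), then extract with tar
```

Make sure the pattern matches only the split parts, not the output file itself, or a second run would concatenate the archive into itself.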

The data should be organized in the following format:

/SynLiDAR/
  ├── 00/
  │   ├── velodyne/
  │   │   ├── 000000.bin
  │   │   ├── 000001.bin
  │   │   └── ...
  │   └── labels/
  │       ├── 000000.label
  │       ├── 000001.label
  │       └── ...
  ├── ...
  ├── annotations.yaml
  └── read_data.py

We provide class annotations (in 'annotations.yaml') and example Python code for reading the data (in 'read_data.py').
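For reference, reading one scan can be sketched as below, assuming the SemanticKITTI-style binary layout (four float32 values x, y, z, intensity per point in each '.bin' file, and one uint32 label per point in each '.label' file); the authoritative loader is the provided 'read_data.py':

```python
import numpy as np

def read_scan(bin_path: str, label_path: str):
    """Load one scan: an (N, 4) float32 point array and an (N,) uint32 label array."""
    points = np.fromfile(bin_path, dtype=np.float32).reshape(-1, 4)
    labels = np.fromfile(label_path, dtype=np.uint32)
    assert points.shape[0] == labels.shape[0], "point/label count mismatch"
    return points, labels
```

Each label value is a class index that can be looked up in the class table given in 'annotations.yaml'.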

Citation

If you find our work useful in your research, please consider citing:

@article{xiao2021synlidar,  
  title={SynLiDAR: Learning From Synthetic LiDAR Sequential Point Cloud for Semantic Segmentation},  
  author={Xiao, Aoran and Huang, Jiaxing and Guan, Dayan and Zhan, Fangneng and Lu, Shijian},  
  journal={arXiv preprint arXiv:2107.05399},  
  year={2021}  
}  
Comments
  • Download issue

    Download issue

    Hello,

    Thanks for your awesome work. I am trying to download the dataset using the provided download.py. However, as shown in the attached figure, none of the downloaded subsets exceeds 5.9 GB, so I want to know if there is any constraint on the server or anything else I should check.

    Thanks

    opened by Solacex 7
  • LiDAR sensor specification

    LiDAR sensor specification

    Thank you very much for the interesting work and providing the dataset.

    I was wondering which LiDAR sensor you simulated (if you selected a specific one) when generating the dataset, or whether you have the specifications (such as angles, number of beams, number of points per beam) of the simulated sensor.

    Thank you very much again. Best,

    opened by BjoernMichele 2
  • About the dataset used in paper

    About the dataset used in paper

    Hi, Aoran Xiao. Thank you for your brilliant work.

    I see that "-- SubDataset: uniformlly downsampled dataset (about 24GB), this is the dataset that we used in Paper. You are recommend to use this smaller dataset for faster experiments." in the readme file.

    To compare fairly with your method, I would like to confirm: do all the experimental results in your paper "Transfer Learning from Synthetic to Real LiDAR Point Cloud for Semantic Segmentation" use the SubDataset?

    opened by dream-toy 2
  • Affine transformation matrices

    Affine transformation matrices

    Hi!

    Thanks for the dataset! Each sequence is provided in the sensor frame, so it is impossible to accurately register all the frames. Are you going to provide calibration files with affine transformation matrices for registering the frames?

    Thanks in advance!

    opened by saltoricristiano 2
  • Could you provide a Google cloud / Baidu cloud download?

    Could you provide a Google cloud / Baidu cloud download?

    Hello,

    Thanks for your awesome work. Will you consider releasing this dataset on Google/Baidu cloud? We found it difficult to download such a large dataset from the current server.

    Thanks.

    opened by Solacex 2
  • Could you provide more information about intensity rendering model?

    Could you provide more information about intensity rendering model?

    I noticed in your paper that "Detailed descriptions about the rendering model and experiments are presented in Appendix" but I can't find the appendix anywhere. Could you kindly provide some more information about that? Thanks!

    opened by Beastmaster 1
  • Question about the class mapping

    Question about the class mapping

    Hi, Aoran Xiao. First of all, thank you very much for sharing the mapping file earlier. But I have a question: why are the mapping categories of SemanticPOSS in the file inconsistent with the categories given in the paper?

    opened by kongxin123456 1
  • Question about the class mapping

    Question about the class mapping

    Hi, Aoran Xiao. Firstly, thank you for your brilliant work! I see that you have given an example of class mapping from SynLiDAR to SemanticKITTI. Could you share the YAML file for SemanticPOSS with me too?

    opened by kongxin123456 1
  • Class mapping from index to name

    Class mapping from index to name

    Dear authors,

    thank you very much for your very interesting work. In my current project I need to name each class index, but unfortunately the mapping between class indexes [0, 31] and class names [road, .., table] is missing. Could you please provide the index-to-name mapping?

    Thank you in advance!

    opened by saltoricristiano 1
  • Lidar Pose Missing

    Lidar Pose Missing

    Dear @xiaoaoran,

    I would like to start by expressing my appreciation for your work. It is interesting and inspires the research community working on UDA tasks.

    I have downloaded the dataset and found that the pose information (the transformation matrix from local to global coordinates) for the ego-vehicle/LiDAR sensor is missing. Can you please point me to where it is stored?

    Thank you. Looking forward to hearing from you soon.

    opened by awethaileslassie 2
  • Source Code for PCT

    Source Code for PCT

    Thank you for the interesting work.

    Would you be willing to share the code for PCT? In particular, the code for the appearance translation module and the sparsity translation module. Thanks!

    opened by ldkong1205 6