Semi-supervised Implicit Scene Completion from Sparse LiDAR


Paper

Created by Pengfei Li, Yongliang Shi, Tianyu Liu, Hao Zhao, Guyue Zhou and Ya-Qin Zhang from the Institute for AI Industry Research (AIR), Tsinghua University.

demo

For the complete video, click HERE.

Teaser and supplementary result figures: teaser, sup0–sup4.

Introduction

Recent advances show that semi-supervised implicit representation learning can be achieved through physical constraints like Eikonal equations. However, this scheme has not yet been successfully used for LiDAR point cloud data, due to its spatially varying sparsity.

In this repository, we develop a novel formulation that conditions the semi-supervised implicit function on localized shape embeddings. It exploits the strong representation learning power of sparse convolutional networks to generate shape-aware dense feature volumes, while still allowing semi-supervised signed distance function learning without knowing its exact values in free space. With extensive quantitative and qualitative results, we demonstrate the intrinsic properties of this new learning system and its usefulness in real-world road scenes. Notably, we improve IoU from 26.3% to 51.0% on SemanticKITTI. Moreover, we explore two paradigms to integrate semantic label predictions, achieving implicit semantic completion. Code and data are publicly available.
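The core mechanism can be sketched as follows (a minimal conceptual example, not the exact SISC architecture; the class name, tensor shapes, and layer sizes are illustrative): a sparse convolutional backbone produces a dense feature volume, a localized shape embedding is interpolated at each query point, and an MLP conditioned on that embedding predicts the signed distance.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ConditionedSDF(nn.Module):
        """Minimal sketch: predict a signed distance at a query point, conditioned on a
        local shape embedding interpolated from a dense feature volume (illustrative only)."""
        def __init__(self, feat_dim=32, hidden=256):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(3 + feat_dim, hidden), nn.ReLU(inplace=True),
                nn.Linear(hidden, hidden), nn.ReLU(inplace=True),
                nn.Linear(hidden, 1),
            )

        def forward(self, feat_volume, query_xyz):
            # feat_volume: (B, C, D, H, W) dense features, e.g. densified from a sparse-conv backbone
            # query_xyz:   (B, N, 3) query coordinates normalized to [-1, 1], ordered (x, y, z) = (W, H, D)
            grid = query_xyz.view(query_xyz.shape[0], 1, 1, -1, 3)
            local = F.grid_sample(feat_volume, grid, align_corners=True)      # (B, C, 1, 1, N)
            local = local.view(feat_volume.shape[0], feat_volume.shape[1], -1).transpose(1, 2)  # (B, N, C)
            return self.mlp(torch.cat([query_xyz, local], dim=-1)).squeeze(-1)  # (B, N) signed distances

Supervision on the observed LiDAR points plus Eikonal-style regularization then allows training without ground-truth signed distance values in free space, which is what makes the scheme semi-supervised.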

Citation

If you find our work useful in your research, please consider citing:

###to do###

Installation

Requirements

CUDA=11.1
python>=3.8
PyTorch>=1.8
numpy
ninja
MinkowskiEngine
tensorboard
pyyaml
configargparse
scipy
open3d
h5py
plyfile
scikit-image

Clone the repository:

git clone https://github.com/OPEN-AIR-SUN/SISC.git

Data preparation

Download the SemanticKITTI dataset from HERE. Unzip it into the same directory as SISC.
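After unzipping, the data should follow the standard SemanticKITTI layout (sequences/<seq>/velodyne/*.bin, with labels/*.label for the training sequences); the exact paths SISC reads are configured in opt.yaml. Below is a small sanity-check sketch, assuming the archives were extracted to the standard dataset/sequences root; adjust the path to match your setup.

    from pathlib import Path

    # Assumed location of the unzipped dataset (standard SemanticKITTI layout);
    # adjust to match opt.yaml and where you actually extracted the archives.
    root = Path("dataset/sequences")
    assert root.is_dir(), f"expected SemanticKITTI sequences under {root}"

    for seq in sorted(p for p in root.iterdir() if p.is_dir()):
        scans = sorted((seq / "velodyne").glob("*.bin"))
        labels = sorted((seq / "labels").glob("*.label"))
        print(f"sequence {seq.name}: {len(scans)} scans, {len(labels)} labels")

Test sequences have no labels directory, so a zero label count there is expected.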

Training and inference

The configuration for training/inference is stored in opt.yaml, which can be modified as needed.
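As a rough illustration of how a configargparse-based setup (configargparse is listed in the requirements) typically merges opt.yaml with command-line flags, here is a hedged sketch; the real parser in SISC may differ. Only --task and --experiment_name appear in the commands below, and --local_rank is injected by torch.distributed.launch; everything else here is an assumption.

    import configargparse

    # Sketch: read defaults from opt.yaml, let command-line flags override them.
    parser = configargparse.ArgumentParser(default_config_files=["opt.yaml"])
    parser.add_argument("--task", type=str, default="train")
    parser.add_argument("--experiment_name", type=str, required=True)
    parser.add_argument("--local_rank", type=int, default=0)  # injected by torch.distributed.launch
    opt = parser.parse_args()
    print(opt)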

Scene Completion

Run the following command for the desired task (train, valid, or visualize):

CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --nproc_per_node=1 main_sc.py --task=[task] --experiment_name=[experiment_name]

Semantic Scene Completion

SSC option A

Run the following command for the desired task (ssc_pretrain, ssc_valid, train, valid, or visualize):

CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --nproc_per_node=1 main_ssc_a.py --task=[task] --experiment_name=[experiment_name]

Here, --task=ssc_pretrain trains the SSC part and --task=ssc_valid validates it; the resulting pre-trained SSC model can then be used with --task=train to train the whole model.

SSC option B

Run the following command for the desired task (train, valid, or visualize):

CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --nproc_per_node=1 main_ssc_b.py --task=[task] --experiment_name=[experiment_name]

Model Zoo

Our pre-trained models can be downloaded here:

Ablation            | Pretrained Checkpoints
data augmentation   | no aug, rotate & flip
Dnet input          | radial distance, radial distance & height
Dnet structure      | last1 pruning, last2 pruning, last3 pruning, last4 pruning, Dnet relu, 4convs output
Gnet structure      | width128 depth4, width512 depth4, width256 depth3, width256 depth5, Gnet relu
point sample        | on:off=1:2, on:off=2:3
positional encoding | no encoding, incF level10, incT level5, incT level15
sample strategy     | nearest
scale size          | scale 2, scale 4, scale 8, scale 16, scale 32
shape size          | shape 128, shape 512
SSC                 | SSC option A, SSC option B

These models correspond to the ablation studies in our paper; scale 4 serves as our baseline.

Comments
  • Code for generating mesh

    Hello,

    Thank you for the nice work! I am trying to generate the mesh from the SDF, but I cannot get a clean mesh as in your visualizations. Could you provide the code for generating the mesh?

    Thank you very much, Anh-Quan Cao

    opened by anhquancao 1
  • Mapping weights to configs

    Nice work! I've tried to run your method with the provided pretrained models, but I got errors about a mismatch between the model and the loaded weights. Can you please provide sample config files for some of your pretrained models?

    Thanks!

    Traceback (most recent call last):
      File "main_sc.py", line 670, in <module>
        main()
      File "main_sc.py", line 666, in main
        visualize(opt, config, expr_path)
      File "main_sc.py", line 595, in visualize
        G_model.module.load_state_dict(checkpoint['G_model'])
      File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1482, in load_state_dict
        raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
    RuntimeError: Error(s) in loading state_dict for G_siren:
    	Missing key(s) in state_dict: "G_model.net.net.0.0.weight", "G_model.net.net.0.0.bias", "G_model.net.net.1.0.weight", "G_model.net.net.1.0.bias", "G_model.net.net.2.0.weight", "G_model.net.net.2.0.bias", "G_model.net.net.3.0.weight", "G_model.net.net.3.0.bias", "G_model.net.net.4.0.weight", "G_model.net.net.4.0.bias". 
    	Unexpected key(s) in state_dict: "net.net.0.0.weight", "net.net.0.0.bias", "net.net.1.0.weight", "net.net.1.0.bias", "net.net.2.0.weight", "net.net.2.0.bias", "net.net.3.0.weight", "net.net.3.0.bias", "net.net.4.0.weight", "net.net.4.0.bias". 
    ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 467) of binary: /usr/bin/python3
    
    
    
    
    opened by YoushaaMurhij 1
  • Where is the link of pretrained ckpt?

    Hi, @Philipflyg. Thanks for sharing your work, but there is no link in the pretrained checkpoint table. I tried to train the ssc_pretrain task, which needs more than 10 hours per epoch and about 1000 hours for the whole training pipeline to reproduce. Could you give me a link to a pretrained checkpoint that can be downloaded?

    I misunderstood the progress bar; the total training pipeline may take about 11 hours.

    opened by LeopoldACC 1
  • CVE-2007-4559 Patch

    Patching CVE-2007-4559

    Hi, we are security researchers from the Advanced Research Center at Trellix. We have begun a campaign to patch a widespread bug named CVE-2007-4559. CVE-2007-4559 is a 15-year-old bug in the Python tarfile package. By using extract() or extractall() on a tarfile object without sanitizing input, a maliciously crafted .tar file could perform a directory path traversal attack. We found at least one unsanitized extractall() in your codebase and are providing a patch for you via pull request. The patch essentially checks that all tarfile members will be extracted safely and throws an exception otherwise. We encourage you to use this patch or your own solution to secure against CVE-2007-4559. Further technical information about the vulnerability can be found in this blog post.

    If you have further questions, you may contact us through this project's lead researcher, Kasimir Schulz.

    opened by TrellixVulnTeam 0
  • ValueError: Found array with 0 sample(s) (shape=(0, 3)) while a minimum of 1 is required.

    Nice work! I've tried to run your method with the provided pretrained models, but I got errors. Do you know how to solve them?

    Traceback (most recent call last):
      File "main_ssc_a.py", line 1248, in <module>
        main()
      File "main_ssc_a.py", line 1244, in main
        visualize(opt, config, expr_path)
      File "main_ssc_a.py", line 1174, in visualize
        visualize_pipeline(D_Seg=D_Seg, D_SSC=D_SSC,
      File "main_ssc_a.py", line 1122, in visualize_pipeline
        iou_out = evals.scene_save_ssc_a(G_siren, shape_out[0], class_out, raw, label, mask, config, model_dir, indices[0])
      File "/home/SISC/evals.py", line 353, in scene_save_ssc_a
        convert_sdf_samples_to_ply(
      File "/home/SISC/evals.py", line 595, in convert_sdf_samples_to_ply
        indexes = k_neigh.kneighbors(mesh_points, return_distance=False).squeeze()
      File "/opt/conda/lib/python3.8/site-packages/sklearn/neighbors/_base.py", line 670, in kneighbors
        X = check_array(X, accept_sparse='csr')
      File "/opt/conda/lib/python3.8/site-packages/sklearn/utils/validation.py", line 63, in inner_f
        return f(*args, **kwargs)
      File "/opt/conda/lib/python3.8/site-packages/sklearn/utils/validation.py", line 669, in check_array
        raise ValueError("Found array with %d sample(s) (shape=%s) while a"
    ValueError: Found array with 0 sample(s) (shape=(0, 3)) while a minimum of 1 is required.
    Killing subprocess 46136
    Traceback (most recent call last):
      File "/opt/conda/lib/python3.8/runpy.py", line 194, in _run_module_as_main
        return _run_code(code, main_globals, None,
      File "/opt/conda/lib/python3.8/runpy.py", line 87, in _run_code
        exec(code, run_globals)
      File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launch.py", line 340, in <module>
        main()
      File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launch.py", line 326, in main
        sigkill_handler(signal.SIGTERM, None)  # not coming back
      File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launch.py", line 301, in sigkill_handler
        raise subprocess.CalledProcessError(returncode=last_return_code, cmd=cmd)
    subprocess.CalledProcessError: Command '['/opt/conda/bin/python', '-u', 'main_ssc_a.py', '--local_rank=0', '--task=visualize', '--experiment_name=test-vis']' returned non-zero exit status 1.

    opened by WkangLiu 2
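Notes on the issues above

Regarding the state_dict mismatch (missing "G_model.net...." keys vs. unexpected "net...." keys): the checkpoint stores the generator weights without the "G_model." prefix that the wrapped model expects. One possible workaround is to re-prefix the keys before loading, as sketched below; the function name and checkpoint path are illustrative, while the "G_model" checkpoint key and the G_model.module wrapper come from the traceback above.

    import torch

    def load_g_model(checkpoint_path, g_model):
        """Hedged workaround for the key mismatch shown in the traceback above:
        the checkpoint stores keys like 'net.net.0.0.weight' while the wrapped
        model expects 'G_model.net.net.0.0.weight', so the prefix is added back."""
        checkpoint = torch.load(checkpoint_path, map_location="cpu")
        state_dict = checkpoint["G_model"]  # checkpoint key as used in main_sc.py
        remapped = {("G_model." + k if not k.startswith("G_model.") else k): v
                    for k, v in state_dict.items()}
        g_model.module.load_state_dict(remapped)  # g_model wrapped by DistributedDataParallel
        return g_model

The zero-sample ValueError in the last comment means convert_sdf_samples_to_ply received mesh_points of shape (0, 3), which typically indicates that marching cubes found no surface crossing in that SDF volume; checking mesh_points.shape[0] before the k-nearest-neighbour lookup and skipping such samples avoids the crash.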