Implementation of CVPR 2022: Reconstructing Surfaces for Sparse Point Clouds with On-Surface Priors

Overview

Reconstructing Surfaces for Sparse Point Clouds with On-Surface Priors (CVPR 2022)

Personal Web Pages | Paper | Project Page

This repository contains the code to reproduce the results from the paper Reconstructing Surfaces for Sparse Point Clouds with On-Surface Priors.

You can find detailed usage instructions for training your own models and using pretrained models below.

If you find our code or paper useful, please consider citing

@inproceedings{On-SurfacePriors,
    title = {Reconstructing Surfaces for Sparse Point Clouds with On-Surface Priors},
    author = {Baorui, Ma and Yu-Shen, Liu and Zhizhong, Han},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    year = {2022}
}

Surface Reconstruction Demo

Installation

First, make sure that you have all dependencies in place. The simplest way to do so is to use anaconda.

You can create an anaconda environment called tf using

conda env create -f tf.yaml
conda activate tf
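
Before training, it can help to confirm that TensorFlow can actually see the GPU you select with '--CUDA'. A minimal sanity check (this codebase uses the TensorFlow 1.x session API, so tf.test.is_gpu_available() is the relevant call; this check is not part of the repository itself):

python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"

If this prints False, training will either fail or fall back to the CPU.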

Training

To train a new network from the pre-trained On-Surface Prior networks, run

python onSurPrior.py --data_dir ./data/ --out_dir ./train_net/ --CUDA 0 --INPUT_NUM 500 --epoch 30000 --input_ply_file input.ply --train

Put the point cloud file (--input_ply_file, PLY format only) into the '--out_dir' folder; '--INPUT_NUM' is the number of points in the '--input_ply_file'.
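
If you are unsure how many points your file contains, you can count them before setting '--INPUT_NUM'. A minimal sketch, assuming the trimesh package is installed (it is not part of this repository's environment):

import trimesh

cloud = trimesh.load("input.ply")   # path to your --input_ply_file
print(len(cloud.vertices))          # use this value for --INPUT_NUM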

Test

To extract the mesh from the trained network, run

python onSurPrior.py --data_dir ./data/ --out_dir ./train_net/ --CUDA 0 --INPUT_NUM 500 --epoch 30000 --input_ply_file input.ply --test
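
The test step evaluates the learned signed distance function on a dense voxel grid and extracts the surface with marching cubes from scikit-image (visible in the traceback in the issue below). A minimal sketch of that extraction step, with a random placeholder volume standing in for the predicted SDF grid:

import numpy as np
from skimage.measure import marching_cubes  # marching_cubes_lewiner in older scikit-image versions

vox = np.random.randn(64, 64, 64)   # placeholder; onSurPrior.py fills this with predicted SDF values
thresh = 0.0                        # surface level; it must lie within the value range of vox
vertices, triangles, _, _ = marching_cubes(vox, level=thresh)
print(vertices.shape, triangles.shape)
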
Issues

  • Results are not like the paper

    So, I am trying to convert my point cloud to a mesh using your code, and there seems to be some problem in the implementation, I think.

    Here's my point cloud: test_100k.ply.zip

    [Screenshot: the input point cloud]

    I've run your code with

    python onSurPrior.py --data_dir ./data/ --out_dir ./train_net/ --CUDA 0 --INPUT_NUM 102906 --epoch 30000 --input_ply_file test_100k.ply --train
    
    epoch: 25000 epoch loss: 6.665881e-05 loss_sdf: 2.4980092e-05 move loss: 0.0002083936
    epoch: 25500 epoch loss: 0.000111603404 loss_sdf: 2.5077257e-05 move loss: 0.00043263074
    epoch: 26000 epoch loss: 0.00020120309 loss_sdf: 2.490602e-05 move loss: 0.0008814853
    epoch: 26500 epoch loss: 0.00021180889 loss_sdf: 2.4966e-05 move loss: 0.00093421445
    epoch: 27000 epoch loss: 5.7760502e-05 loss_sdf: 2.4734798e-05 move loss: 0.00016512853
    epoch: 27500 epoch loss: 4.7512913e-05 loss_sdf: 2.4716563e-05 move loss: 0.00011398175
    epoch: 28000 epoch loss: 0.00021200601 loss_sdf: 2.4922432e-05 move loss: 0.00093541783
    epoch: 28500 epoch loss: 0.00020918738 loss_sdf: 2.4982723e-05 move loss: 0.0009210232
    epoch: 29000 epoch loss: 6.323241e-05 loss_sdf: 2.4784342e-05 move loss: 0.00019224035
    epoch: 29500 epoch loss: 4.3345903e-05 loss_sdf: 2.486095e-05 move loss: 9.242476e-05
    save model
    run_time: 4451.92893910408
    

    It seems to train fine, with the loss decreasing,

    but the mesh output is not good.

    I've run the test part with

    python onSurPrior.py --data_dir ./data/ --out_dir ./train_net/ --CUDA 0 --INPUT_NUM 102906 --epoch 30000 --input_ply_file test_100k.ply --test
    

    but there is an error:

    g_points_knn: Tensor("GatherV2_1:0", shape=(1, 4096, 50, 3), dtype=float32)
    g_points_knn: Tensor("Reshape_12:0", shape=(4096, 50, 3), dtype=float32)
    rotate_p: Tensor("Tile_1:0", shape=(4096, 50, 3), dtype=float32)
    feature_f: Tensor("pointnet_1/Relu:0", shape=(4096, 512), dtype=float32)
    pointnet: Tensor("pointnet_1/dense_1/BiasAdd:0", shape=(4096, 512), dtype=float32)
    feature_f: Tensor("pointnet_2/Relu:0", shape=(4096, 512), dtype=float32)
    pointnet: Tensor("pointnet_2/dense_1/BiasAdd:0", shape=(4096, 512), dtype=float32)
    feature_bs: (2000, 2000)
    test start
    256
    [1.05      1.4866935 0.773277 ]
    [-0.05 -0.05 -0.05]
    max_min: 0.00043043494 -0.0009069443 3.9753166e-05
    0 16777216
    Traceback (most recent call last):
      File "onSurPrior.py", line 832, in <module>
        vertices, triangles, _, _ = marching_cubes_lewiner(vox, thresh)
      File "/home/ubuntu/anaconda3/envs/tensorflow2_p38/lib/python3.8/site-packages/skimage/measure/_marching_cubes_lewiner.py", line 135, in marching_cubes_lewiner
        raise ValueError("Surface level must be within volume data range.")
    ValueError: Surface level must be within volume data range.
    

    I believe this is due to the output being larger than the voxel grid specified, so I made the following change:

    bd_max  = np.asarray(bd_max) + 0.05 * 20
    bd_min = np.asarray(bd_min) - 0.05 * 20
    

    This no longer gives the "Surface level must be within volume data range" error, but the output is wrong.

    [Screenshot: the incorrect reconstruction]

    So I removed the bd_max and bd_min change, set the voxel size to 128, and ran it on a different system. This was the output, and it's still not good.

    [Screenshot: reconstruction with voxel size 128]

    I also tried changing the voxel size from 128 to 256, but that doesn't help.

    I'm not sure what I am doing wrong. Could there be some issue with my GPU? I'm using a V100 (32 GB); I also tried an RTX 3090, but that had different issues:

    (/job:localhost/replica:0/task:0/device:GPU:0 with 22255 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 3090, pci bus id: 0000:06:00.0, compute capability: 8.6)
    feature_bs: (2000, 2000)
    train start
    2022-05-15 11:09:50.715829: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
    2022-05-15 11:11:18.078192: E tensorflow/stream_executor/cuda/cuda_blas.cc:428] failed to run cuBLAS routine: CUBLAS_STATUS_EXECUTION_FAILED
    Traceback (most recent call last):
      File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py", line 1365, in _do_call
        return fn(*args)
      File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py", line 1350, in _run_fn
        target_list, run_metadata)
      File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py", line 1443, in _call_tf_sessionrun
        run_metadata)
    tensorflow.python.framework.errors_impl.InternalError: Blas GEMM launch failed : a.shape=(4096, 3), b.shape=(3, 512), m=4096, n=512, k=3
    	 [[{{node global/dense_1/MatMul}}]]
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "onSurPrior.py", line 746, in <module>
        sess.run([loss_optim],feed_dict={input_points_3d:input_points_2d_bs,feature_object:feature_bs_t,points_target_sparse:knn_bs})
      File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py", line 956, in run
        run_metadata_ptr)
      File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py", line 1180, in _run
        feed_dict_tensor, options, run_metadata)
      File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py", line 1359, in _do_run
        run_metadata)
      File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py", line 1384, in _do_call
        raise type(e)(node_def, op, message)
    tensorflow.python.framework.errors_impl.InternalError: Blas GEMM launch failed : a.shape=(4096, 3), b.shape=(3, 512), m=4096, n=512, k=3
    	 [[node global/dense_1/MatMul (defined at /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py:1748) ]]
    

    NOTE: I'm using the same conda environment (tf.yaml).

    opened by satyajitghana 3