[CVPR 2021 Oral] Variational Relational Point Completion Network

Overview

VRCNet: Variational Relational Point Completion Network

This repository contains the PyTorch implementation of the paper:

Variational Relational Point Completion Network, CVPR 2021 (Oral)

[arxiv|video|webpage]

In CVPR 2021

Real-scanned point clouds are often incomplete due to viewpoint, occlusion, and noise. Existing point cloud completion methods tend to generate global shape skeletons and hence lack fine local details. Furthermore, they mostly learn a deterministic partial-to-complete mapping, but overlook structural relations in man-made objects. To tackle these challenges, this paper proposes a variational framework, the Variational Relational point Completion network (VRCNet), with two appealing properties: 1) Probabilistic Modeling. In particular, we propose a dual-path architecture to enable principled probabilistic modeling across partial and complete clouds. One path consumes complete point clouds for reconstruction by learning a point VAE. The other path generates complete shapes for partial point clouds, whose embedded distribution is guided by the distribution obtained from the reconstruction path during training. 2) Relational Enhancement. Specifically, we carefully design a point self-attention kernel and a point selective kernel module to exploit relational point features, which refine local shape details conditioned on the coarse completion. In addition, we contribute a multi-view partial point cloud dataset (MVP dataset) containing over 100,000 high-quality scans, which renders partial 3D shapes from 26 uniformly distributed camera poses for each 3D CAD model. Extensive experiments demonstrate that VRCNet outperforms state-of-the-art methods on all standard point cloud completion benchmarks. Notably, VRCNet shows great generalizability and robustness on real-world point cloud scans.
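To make the dual-path design above concrete, here is a minimal PyTorch sketch of the idea only; it is not the VRCNet implementation (see vrcnet.py in this repository for the real network). The encoders and decoder are placeholder linear layers and the dimensions are arbitrary assumptions; the sketch keeps just what the paragraph describes: a reconstruction path that encodes the complete cloud into a posterior, a completion path that predicts a prior from the partial cloud, and a KL term that guides the prior toward the posterior during training.

import torch
import torch.nn as nn


class DualPathSketch(nn.Module):
    """Illustrative dual-path VAE-style model (not the official VRCNet)."""

    def __init__(self, feat_dim=1024, latent_dim=128, out_points=1024):
        super().__init__()
        # Placeholder per-point encoders; VRCNet uses point feature extractors
        # plus the relational modules instead of single linear layers.
        self.enc_complete = nn.Sequential(nn.Linear(3, feat_dim), nn.ReLU())
        self.enc_partial = nn.Sequential(nn.Linear(3, feat_dim), nn.ReLU())
        self.to_posterior = nn.Linear(feat_dim, 2 * latent_dim)  # reconstruction path
        self.to_prior = nn.Linear(feat_dim, 2 * latent_dim)      # completion path
        self.decoder = nn.Linear(latent_dim, out_points * 3)     # coarse completion

    @staticmethod
    def reparameterize(mu, logvar):
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def forward(self, partial, complete=None):
        # Global shape features via max pooling over points.
        f_partial = self.enc_partial(partial).max(dim=1).values
        mu_p, logvar_p = self.to_prior(f_partial).chunk(2, dim=-1)

        if complete is not None:
            # Training: the reconstruction path sees the complete cloud and its
            # posterior guides the prior predicted from the partial cloud.
            f_complete = self.enc_complete(complete).max(dim=1).values
            mu_q, logvar_q = self.to_posterior(f_complete).chunk(2, dim=-1)
            z = self.reparameterize(mu_q, logvar_q)
            kl = 0.5 * (logvar_p - logvar_q
                        + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
                        - 1).sum(dim=-1).mean()
        else:
            # Inference: only the partial cloud is available, so sample the prior.
            z = self.reparameterize(mu_p, logvar_p)
            kl = torch.zeros((), device=partial.device)

        coarse = self.decoder(z).view(partial.size(0), -1, 3)
        return coarse, kl

At inference time only the partial cloud is passed, so completions are sampled from the learned prior; in the full VRCNet the coarse output is then refined by the relational enhancement modules.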

VRCNet architecture overview:

Our proposed point cloud learning modules:

Point Cloud Completion Benchmark

In addition to VRCNet, this repository provides an integrated Point Cloud Completion Benchmark implemented in Python 3.5, PyTorch 1.2, and CUDA 10.0. Supported algorithms: PCN, Topnet, MSN, Cascade, ECG, and our VRCNet.

Installation

  1. Install dependencies:
  • h5py 2.10.0
  • matplotlib 3.0.3
  • munch 2.5.0
  • open3d 0.9.0
  • PyTorch 1.2.0
  • PyYAML 5.3.1
  2. Download the corresponding dataset (e.g. the MVP dataset).

  3. Compile the PyTorch 3rd-party modules (ChamferDistancePytorch, emd, expansion_penalty, MDS, Pointnet2.PyTorch).

MVP Dataset

Please download our MVP Dataset to the data folder.
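Once downloaded, the files can be inspected with h5py (listed in the dependencies above). The snippet below is only a quick sanity-check sketch, not part of the benchmark code; the file name follows the mvp_test_gt_%dpts.h5 pattern (here assuming the 2048-point split), and the key names are the ones reported for that file ('complete_pcds', 'labels', 'normal', ...).

import h5py

# Quick look at one of the downloaded MVP ground-truth files (the file name
# and key names are assumptions based on the dataset description).
with h5py.File('data/mvp_test_gt_2048pts.h5', 'r') as f:
    print(list(f.keys()))               # e.g. ['complete_pcds', 'labels', 'normal', ...]
    complete = f['complete_pcds'][:]    # ground-truth point clouds, (x, y, z) per point
    labels = f['labels'][:]             # per-shape category labels
print(complete.shape, labels.shape)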

Usage

  • To train a model: run python train.py -c *.yaml, e.g. python train.py -c pcn.yaml
  • To test a model: run python test.py -c *.yaml, e.g. python test.py -c pcn.yaml
  • Config files for each algorithm can be found in cfgs/; a minimal sketch of how such a config is loaded follows this list.
  • run_train.sh and run_test.sh are provided for SLURM users.
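For reference, a cfgs/*.yaml file can be turned into an attribute-style config object with munch and PyYAML, both listed in the dependencies. The snippet below sketches that pattern under stated assumptions; the argument names are illustrative rather than copied from train.py.

import argparse

import munch
import yaml

# Parse a -c/--config argument in the same spirit as the training script
# (the exact argument names here are assumptions).
parser = argparse.ArgumentParser(description='Point cloud completion benchmark')
parser.add_argument('-c', '--config', default='cfgs/pcn.yaml', help='path to a cfgs/*.yaml file')
cli = parser.parse_args()

# Load the YAML file and wrap it so fields can be read as attributes,
# e.g. args.batch_size instead of args['batch_size'].
with open(cli.config) as f:
    args = munch.munchify(yaml.safe_load(f))

print(args)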

Citation

If you find our code useful, please cite our paper:

@article{pan2021vrcnet,
  title={Variational Relational Point Completion Network},
  author={Pan, Liang and Chen, Xinyi and Cai, Zhongang and Zhang, Junzhe and Zhao, Haiyu and Yi, Shuai and Liu, Ziwei},
  journal={arXiv preprint arXiv:2104.10154},
  year={2021}
}

License

Our code is released under the MIT License.

Acknowledgement

We include the following PyTorch 3rd-party libraries:
[1] ChamferDistancePytorch
[2] emd, expansion_penalty, MDS
[3] Pointnet2.PyTorch

We include the following algorithms:
[1] PCN
[2] MSN
[3] Topnet
[4] Cascade
[5] ECG
[6] VRCNet

Comments
  • some question about evaluate


    Hello, I ran the code several times; the result of PCN is only cd_t=0.000978, but the result in your paper is 6.02 (CD loss multiplied by 10^4). Is this result reasonable?

    opened by peng666 4
  • A problem about training.


    Hello, thanks for your great work. However, when I run your train.py file, I run into the problem below. Could you help me solve it? Thanks!

    Traceback (most recent call last):
      File "train.py", line 218, in <module>
        train()
      File "train.py", line 141, in train
        d_fake = generator_step(net_d, out2, net_loss, optimizer)
      File "/userhome/point-cloud/VRCNet/utils/train_utils.py", line 42, in generator_step
        total_gen_loss_batch.backward(torch.ones(torch.cuda.device_count()).cuda(), retain_graph=True, )
      File "/opt/conda/lib/python3.6/site-packages/torch/tensor.py", line 120, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph)
      File "/opt/conda/lib/python3.6/site-packages/torch/autograd/__init__.py", line 93, in backward
        grad_tensors = _make_grads(tensors, grad_tensors)
      File "/opt/conda/lib/python3.6/site-packages/torch/autograd/__init__.py", line 29, in _make_grads
        + str(out.shape) + ".")
    RuntimeError: Mismatch in shape: grad_output[0] has a shape of torch.Size([1]) and output[0] has a shape of torch.Size([]).

    opened by zxy110 4
  • Pretrained


    Thank you for your open source code! How can I use the pretrained data? My major is vehicle engineering, not computer vision, so this question really confuses me a lot.

    opened by hy-cn 3
  • pre-trained models


    Hi, I found that the loss did not drop during training, so I would like to ask if you can make the pre-trained model publicly available? Thank you very much!

    opened by zxy110 3
  • Training outputs in vrcnet


    Hi, I have a question about the output dimensions during training. Is it true that the fine and coarse point clouds have dimension [2*batch_size, numberOfPoints, 3], since the decoder during training takes twice the features and twice the partial cloud as input? I'm trying to implement it on another dataset and I do not understand this. Thank you in advance.

    opened by stefano-mazz 2
  • Questions about the dataset


    Hi, thank you for the amazing work. Just some questions about the dataset: I notice there are six groups in mvp_test_gt_%dpts.h5: 'complete_pcds', 'labels', 'normal', 'novel_complete_pcds', 'novel_labels', 'novel_normal'. I want to use the 'normal' data for some other tasks, so can you tell me how you obtained 'normal'? Is it the same as the normals of the ShapeNet dataset? Also, what does 'novel' mean, and what is the difference between 'novel_complete_pcds' and 'complete_pcds'? Thanks in advance.

    opened by Marigod98 2
  • I have a question about GPU capabilities.


    I'm trying to train VRCNet, but it runs out of memory when the batch size is more than 16. The GPU I am using is a GTX 1080 with 8GB. What kind of GPU are you using, and what batch size do you train with?

    opened by KATSU-HIGH5 2
  • Question about vrcnet result


    Hi there. I am confused about the result of VRCNet. I trained it using the parameters in the cfgs file (2048 points, 100 epochs, CD loss), but the result (cd_p: 0.174726, cd_t: 0.105811, emd: 0.277059, f1: 0.000139) seems quite different from what you reported in the paper. Also, the visualized result is quite weird...

    So I was wondering what the problem is. Is the pretrained model necessary, or are the parameters in the cfgs file not exactly what you used in the experiments?

    Still, thanks for sharing the code. It is definitely incredible work.

    Looking forward to your reply~

    opened by yolanehe 1
  • VRCNet test my data


    I want to do some tests with my point cloud data, but I don't know the structure of the .h5 file. What is the structure of the file and how was it created? Please let me know. In addition, how much point cloud data do I need to prepare if I want to train with my point cloud data?

    opened by KATSU-HIGH5 1
  • VRCNet training


    Thanks very much for sharing the code. I am a little confused about VRCNet. Why is the code in vrc.py the same as in vrcnet.py? According to the paper, three steps (PMNet trained with ground truth, PMNet trained with incomplete point clouds, RENet) are needed to reproduce it. However, only one file (vrc.py or vrcnet.py) is provided. Should I revise it? Thanks again.

    opened by lkhazyf 1
  • Questions about Poisson disk sampling


    Hi, thank you very much for providing the model and dataset.

    I notice in the article that the MVP dataset samples point clouds of different resolutions from the same 3D model. I wonder what tools or libraries you use for Poisson disk sampling, so that the number of points in the MVP dataset can be controlled to be the same.

    I tried to use MeshLab, but I cannot guarantee that the number of points sampled from different 3D models is the same. Could you provide the code used to sample the dataset?

    opened by walsvid 1
  • Questions about the model


    Sorry to bother you. I would like to ask why batch normalization, which is popular in most models, is not used in your VRCNet framework. Does leaving it out affect the results?

    opened by Unknownnnnnnn 0
  • some questions about the model architecture


    Thanks for your wonderful work! I am interested in the model architecture: two branches, optimizing the posterior and prior distributions with a KL divergence, and using other losses to optimize the decoder. I tried to use this architecture in some other work, but when I train the model the KL divergence cannot be optimized well and I always get the error: NaN or Inf found in input tensor. Did you ever meet this error? Could you share some experience on how to optimize the KL divergence? Thank you very much and sorry for the inconvenience.

    opened by Marigod98 0
  • labels in mvp_test_input.h5


    Hello! Thank you for the amazing work. I found that the labels in mvp_test_input.h5 range from 8 to 15, but the point clouds in mvp_test_input.h5 are all chairs. What is the role of the labels? How can I complete only the point clouds that contain cars? Looking forward to your reply.

    opened by hy-cn 1
  • Training time and pre-trained models


    Hello Liang, thanks for sharing the code of your interesting work. I have some questions.

     1. I trained your model for 16384 points on an RTX 3090 with a batch size of 18, using the CD loss instead of the EMD loss. It took around 1 hour to train one epoch. How long does it take for you to train the model?
     2. If possible, could you provide the pre-trained model for 16384 points? Thanks in advance.
    opened by ljj621 1
  • Train VRC on single categories produces bad results


    Hi! I have retrained your model on another dataset (derived from ShapeNet) on single categories, i.e. for each class I train a separate model. I found that VRCNet has the same performance as ECG for some categories and performs even worse on others. There is no overall improvement like the one demonstrated by training on all categories. What do you think could be the reason behind this behaviour? I trained with the point cloud resolution set to 2048 and the same settings specified in your config file.

    opened by Emanuele97x 3
  • Using this code on my own dataset


    When I finished the training process and came to the test, this error occurred:


    File "test.py", line 127, in test() File "test.py", line 41, in test net.module.load_state_dict(torch.load(args.load_model)['net_state_dict'], strict = False) #got the net parameters of checkpoint File "/home/**/anaconda3/envs/VRC/lib/python3.7/site-packages/torch/nn/modules/module.py", line 845, in load_state_dict self.class.name, "\n\t".join(error_msgs))) RuntimeError: Error(s) in loading state_dict for Model: size mismatch for decoder.encoder.sam_res1.conv1.weight: copying a param with shape torch.Size([64, 4, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 3, 1, 1]). size mismatch for decoder.encoder.sam_res1.conv_res.weight: copying a param with shape torch.Size([64, 4, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 3, 1, 1]).


    Could anyone help me with this?

    opened by zhou-SHU164 0