
NMC

PyTorch implementation of the paper Neural Marching Cubes, by Zhiqin Chen and Hao Zhang.

Paper | Supplementary Material (to be updated)

Citation

If you find our work useful in your research, please consider citing:

@article{chen2021nmc,
  title={Neural Marching Cubes},
  author={Zhiqin Chen and Hao Zhang},
  journal={arXiv preprint arXiv:2106.11272},
  year={2021}
}

Notice

We have implemented Neural Dual Contouring (NDC). NDC is based on Dual Contouring and is therefore much easier to implement than NMC. It produces fewer triangles and vertices (1/8 of NMC, 1/4 of NMC-lite, roughly on par with MC33), with better triangle quality. It also runs faster than NMC because it has significantly fewer values to predict per cube (1 bool + 3 floats for NDC vs. 5 bools + 51 floats for NMC), so the network can be made significantly smaller. However, NDC cannot reconstruct some cube cases and may introduce non-manifold edges.

Requirements

  • Python 3 with numpy, h5py, scipy and Cython
  • PyTorch 1.8 (other versions may also work)

Build Cython module:

python setup.py build_ext --inplace
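For reference, below is a minimal sketch of the kind of build script this command invokes. The extension name cutils matches the module imported by main.py, but the source file name and the numpy dependency are assumptions, not a copy of the repo's actual setup.py:

# setup.py -- hypothetical sketch of a Cython build script
from setuptools import setup
from setuptools.extension import Extension
from Cython.Build import cythonize
import numpy

setup(
    ext_modules=cythonize(
        # "cutils" matches the module imported by main.py;
        # the .pyx file name is an assumption.
        Extension("cutils", ["cutils.pyx"], include_dirs=[numpy.get_include()])
    )
)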

Datasets and pre-trained weights

For data preparation, please see data_preprocessing.

We provide the ready-to-use datasets here.

Backup links:

We also provide the pre-trained network weights.

Backup links:

Note that the weights are divided into six folders:

Folder                         Method     Input
1_NMC_sdf_unit_scale           NMC        SDF grid; each grid cell must have unit length
2_NMC_lite_sdf_unit_scale      NMC-lite   SDF grid; each grid cell must have unit length
3_NMC_voxel                    NMC        Voxel grid; 1 = occupied, 0 = otherwise
4_NMC_lite_voxel               NMC-lite   Voxel grid; 1 = occupied, 0 = otherwise
5_NMC_sdf_scale_0.001-2        NMC        SDF grid; each grid cell may have length from 0.001 to 2.0
6_NMC_lite_sdf_scale_0.001-2   NMC-lite   SDF grid; each grid cell may have length from 0.001 to 2.0

The NMC weights used by this GitHub repo are those from 5_NMC_sdf_scale_0.001-2.

Training and Testing

Before training, please replace LUT_tess.npz (the Look-Up Table for cube tessellations) in the main directory with the version that corresponds to your training target (NMC or NMC-lite). Both versions of LUT_tess.npz can be found in tessellation.
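To sanity-check which LUT is in place, you can list the contents of the npz archive with numpy; this is a generic inspection snippet and makes no assumption about the array names inside the file:

import numpy as np

# List the arrays stored in the tessellation Look-Up Table.
lut = np.load("LUT_tess.npz")
for name in lut.files:
    print(name, lut[name].shape, lut[name].dtype)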

To train/test NMC with SDF input:

python main.py --train_bool --epoch 400 --data_dir groundtruth/gt_NMC --input_type sdf
python main.py --train_float --epoch 400 --data_dir groundtruth/gt_NMC --input_type sdf
python main.py --test_bool_float --data_dir groundtruth/gt_NMC --input_type sdf

To train/test NMC-lite with SDF input:

python main.py --train_bool --epoch 400 --data_dir groundtruth/gt_simplified --input_type sdf
python main.py --train_float --epoch 400 --data_dir groundtruth/gt_simplified --input_type sdf
python main.py --test_bool_float --data_dir groundtruth/gt_simplified --input_type sdf

To train/test NMC with voxel input:

python main.py --train_bool --epoch 200 --data_dir groundtruth/gt_NMC --input_type voxel
python main.py --train_float --epoch 100 --data_dir groundtruth/gt_NMC --input_type voxel
python main.py --test_bool_float --data_dir groundtruth/gt_NMC --input_type voxel

To train/test NMC-lite with voxel input:

python main.py --train_bool --epoch 200 --data_dir groundtruth/gt_simplified --input_type voxel
python main.py --train_float --epoch 100 --data_dir groundtruth/gt_simplified --input_type voxel
python main.py --test_bool_float --data_dir groundtruth/gt_simplified --input_type voxel

To evaluate Chamfer Distance, Normal Consistency, F-score, Edge Chamfer Distance, and Edge F-score, you need the ground-truth normalized obj files ready in a folder named objs. See data_preprocessing for how to prepare the obj files. Then run:

python eval_cd_nc_f1_ecd_ef1.py
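For reference, the symmetric Chamfer Distance between two sampled point sets can be computed with k-d tree nearest-neighbor queries. This is a generic sketch of one common formulation, not the exact code in eval_cd_nc_f1_ecd_ef1.py:

import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(points_a, points_b):
    # Nearest-neighbor distance from each point to the other set,
    # averaged in both directions (squared-distance variant).
    dist_a, _ = cKDTree(points_b).query(points_a)
    dist_b, _ = cKDTree(points_a).query(points_b)
    return np.mean(dist_a ** 2) + np.mean(dist_b ** 2)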

To count the number of triangles and vertices, run:

python eval_v_t_count.py
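As a rough illustration of what is being counted, the vertex and triangle counts of a single obj file can be read off its v and f records; this simplified sketch assumes triangulated faces and is not the logic of eval_v_t_count.py:

def count_v_t(obj_path):
    # Count vertex ("v ") and face ("f ") records in a Wavefront obj file.
    num_v = num_t = 0
    with open(obj_path) as fin:
        for line in fin:
            if line.startswith("v "):
                num_v += 1
            elif line.startswith("f "):
                num_t += 1  # assumes every face is a triangle
    return num_v, num_t

print(count_v_t("objs/example.obj"))  # hypothetical path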

If you want to test on your own dataset, please refer to data_preprocessing for how to convert obj files into SDF grids and voxel grids. If your data are not meshes (say, your data are already voxel grids), you can modify the code in utils.py to read your own data format; see the function read_data_input_only in utils.py for an example.
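As an illustration, the sketch below reads a voxel grid stored as a .npy file and binarizes it into the 1 = occupied, 0 = otherwise convention from the table above; the file path, array shape, and threshold are all assumptions, so match the shapes and dtypes that read_data_input_only actually returns:

import numpy as np

def read_my_voxels(path):
    # Hypothetical reader for a dense 3D occupancy/scalar grid.
    grid = np.load(path)                      # assumed shape (X, Y, Z)
    voxels = (grid > 0.5).astype(np.float32)  # threshold is an assumption
    return voxels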

