Learning Continuous Signed Distance Functions for Shape Representation

Overview

DeepSDF

This is an implementation of the CVPR '19 paper "DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation" by Park et al. (arXiv: https://arxiv.org/abs/1901.05103). In brief, a signed distance function maps a 3D point to its distance from a shape's surface, negative inside and positive outside, so the surface is recovered as the zero-level set SDF(x) = 0; DeepSDF trains a feed-forward decoder, conditioned on a per-shape latent code, to approximate this function across a class of shapes.

DeepSDF Video

Citing DeepSDF

If you use DeepSDF in your research, please cite the paper:

@InProceedings{Park_2019_CVPR,
author = {Park, Jeong Joon and Florence, Peter and Straub, Julian and Newcombe, Richard and Lovegrove, Steven},
title = {DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2019}
}

File Organization

The various Python scripts assume a shared organizational structure so that the output from one script can easily be used as input to another. This holds both for preprocessed data and for experiments that make use of the datasets.

Data Layout

The DeepSDF code allows for pre-processing of meshes from multiple datasets and stores them in a unified data source. It also allows for separation of meshes according to class at the dataset level. The structure is as follows:

<data_source_name>/
    .datasources.json
    SdfSamples/
        <dataset_name>/
            <class_name>/
                <instance_name>.npz
    SurfaceSamples/
        <dataset_name>/
            <class_name>/
                <instance_name>.ply

Subsets of the unified data source can be referenced using split files, which are stored in a simple JSON format. For examples, see examples/splits/.
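
A minimal sketch of the format (04256520 is the ShapeNet synset ID for sofas; the instance names below are placeholders): the top level maps a dataset name to its classes, and each class to a list of instances.

{
    "ShapeNetV2": {
        "04256520": [
            "<instance_name_1>",
            "<instance_name_2>"
        ]
    }
}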

The file .datasources.json stores a mapping from named datasets to paths indicating where the data came from. This file is referenced again during evaluation to compare against ground truth meshes (see below), so if the source data is moved, this file will need to be updated accordingly.
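
For illustration, a .datasources.json might map each dataset name to its original source path (a sketch; the exact schema may differ):

{
    "ShapeNetV2": "[...]/ShapeNetCore.v2/"
}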

Experiment Layout

Each DeepSDF experiment is organized in an "experiment directory", which collects all of the data relevant to a particular experiment. The structure is as follows:

<experiment_name>/
    specs.json
    Logs.pth
    LatentCodes/
        <Epoch>.pth
    ModelParameters/
        <Epoch>.pth
    OptimizerParameters/
        <Epoch>.pth
    Reconstructions/
        <Epoch>/
            Codes/
                <MeshId>.pth
            Meshes/
                <MeshId>.ply
    Evaluations/
        Chamfer/
            <Epoch>.json
        EarthMoversDistance/
            <Epoch>.json

The only file that is required to begin an experiment is 'specs.json', which sets the parameters, network architecture, and data to be used for the experiment.
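
A minimal sketch of a specs.json, with illustrative values (the parameter names follow those referenced elsewhere in this README and in the provided examples; see examples/sofas/specs.json for a complete, authoritative file):

{
    "Description": "sofa experiment",
    "DataSource": "data",
    "TrainSplit": "examples/splits/sv2_sofas_train.json",
    "TestSplit": "examples/splits/sv2_sofas_test.json",
    "NetworkArch": "deep_sdf_decoder",
    "CodeLength": 256,
    "NumEpochs": 2000,
    "ScenesPerBatch": 64,
    "ClampingDistance": 0.1,
    "CodeRegularizationLambda": 0.0001
}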

How to Use DeepSDF

Pre-processing the Data

In order to use mesh data for training a DeepSDF model, the mesh will need to be pre-processed. This can be done with the preprocess_data.py executable. The preprocessing code is in C++ and has the following requirements:

• CMake
• CLI11
• Pangolin
• nanoflann
• Eigen3

With these dependencies, the build process follows the standard CMake procedure:

mkdir build
cd build
cmake ..
make -j

Once this is done there should be two executables in the DeepSDF/bin directory, one for surface sampling and one for SDF sampling. With the binaries, the dataset can be preprocessed using preprocess_data.py.
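
Each resulting .npz stores the sampled points together with their SDF values. A quick sanity check in Python (a sketch, assuming the archive holds "pos" and "neg" arrays of shape N x 4, i.e. x, y, z, sdf, as the training data loader expects):

import numpy as np

# The path is a placeholder; substitute a real preprocessed instance.
npz = np.load("data/SdfSamples/ShapeNetV2/04256520/<instance_name>.npz")
pos, neg = npz["pos"], npz["neg"]  # samples with positive / negative SDF
print(pos.shape, neg.shape)        # each row: (x, y, z, sdf)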

Preprocessing with Headless Rendering

The preprocessing script requires an OpenGL context, and to acquire one it will open a (small) window for each shape using Pangolin. If Pangolin has been compiled with EGL support, you can use the "headless" rendering mode to avoid the windows stealing focus. Pangolin's headless mode can be enabled by setting the PANGOLIN_WINDOW_URI environment variable as follows:

export PANGOLIN_WINDOW_URI=headless://
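
For example, a full headless preprocessing invocation might look like this (reusing the command from the Examples section below):

export PANGOLIN_WINDOW_URI=headless://
python preprocess_data.py --data_dir data --source [...]/ShapeNetCore.v2/ --name ShapeNetV2 --split examples/splits/sv2_sofas_train.json --skip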

Training a Model

Once data has been preprocessed, models can be trained using:

python train_deep_sdf.py -e <experiment_directory>

Parameters of training are stored in a "specification file" in the experiment directory, which (1) avoids proliferation of command line arguments and (2) allows for easy reproducibility. This specification file includes a reference to the data directory and a split file specifying which subset of the data to use for training.

Visualizing Progress

All intermediate results from training are stored in the experiment directory. To visualize the progress of a model during training, run:

python plot_log.py -e <experiment_directory>

By default, this will plot the loss, but other values can be shown using the --type flag.
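
For example (the flag value here is illustrative; see plot_log.py for the supported types):

python plot_log.py -e <experiment_directory> --type learning_rate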

Continuing from a Saved Optimization State

If training is interrupted, pass the --continue flag along with an epoch index to train_deep_sdf.py to continue from the saved state at that epoch. Note that the saved state needs to be present; to check which checkpoints are available for a given experiment, look in the ModelParameters, OptimizerParameters, and LatentCodes directories (all three are needed).
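
For example, to resume from the state saved at epoch 2000 (the epoch value is illustrative; the corresponding files must exist in all three directories):

python train_deep_sdf.py -e <experiment_directory> --continue 2000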

Reconstructing Meshes

To use a trained model to reconstruct explicit mesh representations of shapes from the test set, run:

python reconstruct.py -e <experiment_directory>

This will use the latest model parameters to reconstruct all the meshes in the split. To specify a particular checkpoint to use for reconstruction, use the --checkpoint flag followed by the epoch number. In general, the test-time SDF sampling strategy and regularization can affect the quality of the test reconstructions. For example, sampling aggressively near the surface can capture accurate surface detail but may leave under-sampled space unconstrained, and a high L2 regularization coefficient can yield perceptually better but quantitatively worse test reconstructions.
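
For example, to reconstruct using the checkpoint saved at epoch 2000:

python reconstruct.py -e <experiment_directory> --checkpoint 2000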

Shape Completion

The current release does not include code for shape completion. Please check back later!

Evaluating Reconstructions

Before evaluating a DeepSDF model, a second mesh preprocessing step is required to produce a set of points sampled from the surfaces of the test meshes. This is done as with the SDF samples, but with the --surface flag passed to the pre-processing script. Once this is done, evaluations are run using:

python evaluate.py -e <experiment_directory> -d <data_directory> --split <split_filename>

Note on Table 3 from the CVPR '19 Paper

Given the stochastic nature of shape reconstruction (shapes are reconstructed via gradient descent with a random initialization), reconstruction accuracy will vary across multiple reruns of the same shape. The metrics listed in Table 3 for the "chair" and "plane" classes are the result of performing two reconstructions of each shape and keeping the one with the lower Chamfer distance. The code as released does not support this evaluation, and thus the reproduced results will likely differ from those produced in the paper. For example, our test run with the provided code produced Chamfer distance (multiplied by 10³) mean and median of 0.157 and 0.062 respectively for the "chair" class and 0.101 and 0.044 for the "plane" class (compared to 0.204, 0.072 for chairs and 0.143, 0.036 for planes reported in the paper).

Examples

Here's a list of commands for a typical use case of training and evaluating a DeepSDF model using the "sofa" class of the ShapeNet version 2 dataset.

# navigate to the DeepSdf root directory
cd [...]/DeepSdf

# create a home for the data
mkdir data

# pre-process the sofas training set (SDF samples)
python preprocess_data.py --data_dir data --source [...]/ShapeNetCore.v2/ --name ShapeNetV2 --split examples/splits/sv2_sofas_train.json --skip

# train the model
python train_deep_sdf.py -e examples/sofas

# pre-process the sofa test set (SDF samples)
python preprocess_data.py --data_dir data --source [...]/ShapeNetCore.v2/ --name ShapeNetV2 --split examples/splits/sv2_sofas_test.json --test --skip

# pre-process the sofa test set (surface samples)
python preprocess_data.py --data_dir data --source [...]/ShapeNetCore.v2/ --name ShapeNetV2 --split examples/splits/sv2_sofas_test.json --surface --skip

# reconstruct meshes from the sofa test split (after 2000 epochs)
python reconstruct.py -e examples/sofas -c 2000 --split examples/splits/sv2_sofas_test.json -d data --skip

# evaluate the reconstructions
python evaluate.py -e examples/sofas -c 2000 -d data -s examples/splits/sv2_sofas_test.json 

Team

Jeong Joon Park, Peter Florence, Julian Straub, Richard Newcombe, Steven Lovegrove

Acknowledgements

We want to acknowledge the help of Tanner Schmidt with releasing the code.

License

DeepSDF is released under the MIT License. See the LICENSE file for more details.

Comments
  • Bugfix: Fixed batch splitting.

    The inner for loop (_subbatch) had no effect: it always used the same subbatch. This can be verified by printing sdf_data or loss in every iteration of _subbatch. The outer for loop generates a subbatch (sdf_data), and this subbatch is then re-used batch_split many times in the inner for loop.

    I'm new to PyTorch, so maybe I'm using it wrong.

    opened by edgar-tr 11
  • OpenGL Error: XX (500) and what(): Interlace not yet supported

    When running preprocess_data.py, two errors occurred:

    DeepSdf - INFO - /home/mpl/ShapeNetCore.v2/03001627/df7fc0b3b796714fd00dd29272c1070b/models/model_normalized.obj --> /home/mpl/DeepSDF/data/SurfaceSamples/ShapeNetV2/03001627/df7fc0b3b796714fd00dd29272c1070b.ply
    terminate called after throwing an instance of 'std::runtime_error'
      what():  Interlace not yet supported
    DeepSdf - INFO - /home/mpl/ShapeNetCore.v2/03001627/df8311076b838c7ea5f9d52c12457194/models/model_normalized.obj --> /home/mpl/DeepSDF/data/SurfaceSamples/ShapeNetV2/03001627/df8311076b838c7ea5f9d52c12457194.ply
    OpenGL Error: XX (500)
    In: /usr/local/include/pangolin/gl/gl.hpp, line 203
    DeepSdf - INFO - /home/mpl/ShapeNetCore.v2/03001627/df8374d8f3563be8f1783a44a88d6274/models/model_normalized.obj --> /home/mpl/DeepSDF/data/SurfaceSamples/ShapeNetV2/03001627/df8374d8f3563be8f1783a44a88d6274.ply

    opened by lilizaimumu 10
  • Preprocessing error

    Preprocess part gives me quite a headache. I have the following error:

    terminate called after throwing an instance of 'std::runtime_error'
      what():  Pangolin X11: Unable to retrieve framebuffer options
    DeepSdf - INFO - ShapeNetCore.v2/04256520/45d3384ab8d5b6295637fc0f4b98e88b/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/45d3384ab8d5b6295637fc0f4b98e88b.npz
    Unable to read texture 'texture_0'
    Unable to read texture 'texture_2'
    terminate called after throwing an instance of 'std::runtime_error'
      what():  Pangolin X11: Unable to retrieve framebuffer options
    Unable to read texture 'texture_2'
    Unable to read texture 'texture_4'
    Unable to read texture 'texture_5'
    terminate called after throwing an instance of 'std::runtime_error'
      what():  Pangolin X11: Unable to retrieve framebuffer options
    DeepSdf - INFO - ShapeNetCore.v2/04256520/45d96e52f535907d40c4baf1afd9784/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/45d96e52f535907d40c4baf1afd9784.npz

    I run this on a remote GPU server. Previously, I resolved an "X11: can't open display" error by enabling X11 forwarding on the client side. Now it gives me this kind of message and still generates zero output. I have no idea how to fix this.

    • Ubuntu 16.04.5 LTS
    • Pangolin cloned and built with the master branch.
    opened by WordBearerYI 8
  • Questions regarding the "Shape Completion" experiments

    Hello @jjparkcv and @tschmidt23, thanks for sharing this great work. I've finished training the model on the "chairs" class and have a few questions about the shape completion experiments in the paper:

    1. Are the models in the shape completion experiments trained separately, using only partial (single-view) point cloud input? Or can I just reuse the "complete sampling" version of the training data (as produced by the preprocessing code published in this repo)?
    2. Do you also use sdf_gt during inference for shape completion (even for noisy depth input)? Is it possible to use zeros as sdf_gt for a point cloud sampled only from the object surface?

    For the second question I experimented a little bit, but the result is not quite as expected. This is the input point cloud: [image] and this is the reconstructed mesh: [image] [image]

    If this is possible, any ideas on what I did wrong?

    Thanks a lot!

    opened by JiamingSuen 8
  • Account for details

    Hi, thank you for sharing your great work. I did some quick experiments with ShapeNetCoreV1, but I cannot reconstruct "high frequency details" from the learned latent vector, as shown in the image below (GT vs. reconstruction). Are there any parameters I can change to get a more detailed reconstruction?

    [image]

    opened by nicolasugrinovic 8
  • mpark/variant Error when running make

    All the requirements are already installed, and running cmake .. succeeds; however, when running make -j I get the following error:

    
    DeepSDF/src/SampleVisibleMeshSurface.cpp:11:
    /usr/local/include/pangolin/compat/variant.h:10:13: fatal error: mpark/variant.hpp: No such file or directory
     #   include <mpark/variant.hpp>
                 ^~~~~~~~~~~~~~~~~~~
    compilation terminated.
    CMakeFiles/SampleVisibleMeshSurface.dir/build.make:62: recipe for target 'CMakeFiles/SampleVisibleMeshSurface.dir/src/SampleVisibleMeshSurface.cpp.o' failed
    make[2]: *** [CMakeFiles/SampleVisibleMeshSurface.dir/src/SampleVisibleMeshSurface.cpp.o] Error 1
    make[2]: *** Waiting for unfinished jobs.
    
    opened by HM102 7
  • 【Error】preprocess_data.py

    When I execute preprocess_data.py, I get the following error:

    python3 preprocess_data.py --data_dir data -s ../Dataset/ShapeNetCore.v2/ --name ShapeNetV2 --split examples/splits/sv2_sofas_train.json --skip

    ../Dataset/ShapeNetCore.v2/04256520/152161d238fbc55d41cf86c757faf4f9/../Dataset/ShapeNetCore.v2/04256520/152161d238fbc55d41cf86c757faf4f9/models/model_normalized.obj
    DeepSdf - INFO - ../Dataset/ShapeNetCore.v2/04256520/152161d238fbc55d41cf86c757faf4f9/../Dataset/ShapeNetCore.v2/04256520/152161d238fbc55d41cf86c757faf4f9/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/152161d238fbc55d41cf86c757faf4f9.npz
    terminate called after throwing an instance of 'std::runtime_error'
      what():  Unable to load OBJ file '../Dataset/ShapeNetCore.v2/04256520/14aa542942c9ef1264dd558a50c0650d/../Dataset/ShapeNetCore.v2/04256520/14aa542942c9ef1264dd558a50c0650d/models/model_normalized.obj'. Error: 'Cannot open file [../Dataset/ShapeNetCore.v2/04256520/14aa542942c9ef1264dd558a50c0650d/../Dataset/ShapeNetCore.v2/04256520/14aa542942c9ef1264dd558a50c0650d/models/model_normalized.obj]

    opened by JUN-TSUZUKI 6
  • Visualize pre-processed SDF values as mesh

    Hi, thank you for your code and support with the issues. Is there a way to visualize the mesh that results from the pre-processing stage? The output of preprocess_data.py is only SDF values. Is there a way to get a mesh from these "GT" values?

    Thanks

    opened by nicolasugrinovic 4
  • Supplementary materials missing

    Hi Authors,

    I'm reading the DeepSDF paper and I found a lot of details are presented in the supplementary materials. However when I tried to download the supplementary, I found that the link is broken:

    http://openaccess.thecvf.com/content_CVPR_2019/supplemental/Park_DeepSDF_Learning_Continuous_CVPR_2019_supplemental.pdf

    Could you upload the supplementary online as well?

    Thanks so much for your help!

    opened by yxw9636 4
  • Improving reconstruction quality

    Hi there, I had a few questions about using this repo. I'm trying to overfit the network to a single mesh, training and testing on the same .npz file, but I'm getting poor reconstruction quality. The network is learning something, but only very low frequency information. I'm not sure if the issue is my input data, training strategy, or reconstruction.

    Are there any tips for fitting this network on a single mesh?

    Things I've done:

    • Tried various preprocessing strategies, including higher and lower surface variance, more points, etc, with little to no improvement.
    • I'm basing my specs.json file on the example folders, changing to the appropriate splits files and the ScenesPerBatch variable to 1.
    • I've tried changing ClampingDistance, CodeRegularizationLambda, CodeLength, and various LearningRateSchedule params, etc, with slight improvements here and there, but still poor reconstruction.
    • Tried increasing epochs up to 50,000
    • Tried changing various reconstruction params (reconstruction.py lines 254--262), little help

    Questions:

    • When using the plot_log.py script, what magnitude of loss typically indicates a decent fit? My losses typically plateau somewhere < 0.005, sometimes ~= 0.0025, but not sure if this is "good enough". Beyond this point, losses don't seem to significantly decrease with longer training.
    • During reconstruction, the 0-contour is sometimes outside the range of the data (especially as I train longer), but not usually during the first few hundred epochs

    I've attached a plot of loss, and screenshot of the training data (sampled using standard params) and the zero surface (at epoch 500, 1000, 2000). Note, learning rates are an order of magnitude smaller than examples, but it gets to the same place with standard learning rates.


    opened by demacdo 2
  • Question about the number of categories related to the network.

    Thanks for sharing good work!

    I have a simple question about your work. Q1: Does DeepSDF cover all the categories (e.g., chair, plane, table, lamp, sofa) with one network? Or does it require one network per class for training and inference?

    opened by taeyeop-lee 2
  • SampleVisibleMeshSurface.cpp.o: undefined reference to symbol 'glGetTexImage'

    /usr/bin/ld: CMakeFiles/SampleVisibleMeshSurface.dir/src/SampleVisibleMeshSurface.cpp.o: undefined reference to symbol 'glGetTexImage'
    /usr/bin/ld: /lib/x86_64-linux-gnu/libOpenGL.so.0: error adding symbols: DSO missing from command line

    Hello! I get this error when compiling, but I did not find a solution. Can you help me?

    opened by shuzhangshu 0
  • Segmentation fault when trying to obtain surfaces (preprocessing)

    Hi, I am trying to preprocess personal data in order to reproduce the results of the paper. I installed the dependencies (Pangolin v0.6, compiled with -DCMAKE_CXX_STANDARD=17, line 97 of src/ShaderProgram.cpp commented out, and both $MESA_GL_VERSION_OVERRIDE=3.3 and $PANGOLIN_WINDOW_URI=headless:// set). Obtaining SDF values works well in the preprocessing script, but when the --surface flag is set we do not get any result, and it seems that the compiled binary segfaults when used. Is there a step missing when compiling, or a known workaround? Thanks for your help!

    opened by LogarithmeNeper 0
  • Build Issue

    I am following the build process, but I am facing an error while building the DeepSDF functionalities. A possible cause is Pangolin; I tried building and installing various versions of it, but none seems to work. I am running the build on Google Colab and sharing a link to the gist. Can someone please look into it and help solve the error?

    Code : Colab

    opened by around-star 0
  • Is it possible to provide the processed data directly?

    I'm trying to generate data following the data processing steps. However, I found the environment hard to configure, and I can't get any output after compiling the executables on my server. So I'm wondering: is it possible to provide the processed training and test data directly?

    opened by bluestyle97 6