Official implementation of "MetaSDF: Meta-learning Signed Distance Functions"

Overview

MetaSDF: Meta-learning Signed Distance Functions

Project Page | Paper | Data

Vincent Sitzmann*, Eric Ryan Chan*, Richard Tucker, Noah Snavely, Gordon Wetzstein
*denotes equal contribution

This is the official implementation of the paper "MetaSDF: Meta-Learning Signed Distance Functions".

In this paper, we show how we may effectively learn a prior over implicit neural representations using gradient-based meta-learning.
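
At a high level, the learned prior is a shared network initialization that is specialized to each shape with a handful of gradient steps, while the initialization itself is trained so that this specialization succeeds. The snippet below is a minimal MAML-style sketch of the inner loop only, written with torch.func from recent PyTorch for illustration; it is not the repo's code, and the network and hyperparameters are placeholders.

    import torch

    # Placeholder SDF network f(x) -> signed distance; the real models live in this repo.
    sdf_net = torch.nn.Sequential(
        torch.nn.Linear(3, 256), torch.nn.ReLU(),
        torch.nn.Linear(256, 256), torch.nn.ReLU(),
        torch.nn.Linear(256, 1))

    def adapt_to_shape(coords, gt_sdf, num_steps=5, inner_lr=1e-2):
        """Specialize the shared initialization to one shape with a few
        inner-loop gradient steps on that shape's SDF samples."""
        params = dict(sdf_net.named_parameters())
        for _ in range(num_steps):
            pred = torch.func.functional_call(sdf_net, params, (coords,))
            loss = torch.nn.functional.l1_loss(pred, gt_sdf)
            grads = torch.autograd.grad(loss, list(params.values()), create_graph=True)
            params = {name: p - inner_lr * g
                      for (name, p), g in zip(params.items(), grads)}
        return params  # the outer loop backpropagates through these steps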

While the paper demonstrates this for the special case of SDFs with ReLU nonlinearities, the approach also works formidably well with other types of neural implicit representations - such as our work "SIREN"!

We show you how in our Colab notebook:

Explore MetaSDF in Colab

DeepSDF

A large part of this codebase (directory "3D") is based on the code from the terrific paper "DeepSDF" - check them out!

Get started

If you only want to experiment with MetaSDF, we have written a Colab notebook that doesn't require installing anything and walks through a few other interesting properties of MetaSDF as well - for instance, it turns out you can train SIREN to fit any image in just three gradient descent steps!
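
For context, a SIREN layer simply replaces the ReLU with a scaled sine nonlinearity. Below is a minimal sketch of a single such layer following the SIREN paper's initialization; the Colab contains the complete image-fitting and meta-learning setup.

    import math
    import torch

    class SineLayer(torch.nn.Module):
        """One SIREN layer: sin(omega_0 * (W x + b)), with the SIREN paper's init."""
        def __init__(self, in_features, out_features, omega_0=30.0, is_first=False):
            super().__init__()
            self.omega_0 = omega_0
            self.linear = torch.nn.Linear(in_features, out_features)
            with torch.no_grad():
                bound = 1.0 / in_features if is_first else math.sqrt(6 / in_features) / omega_0
                self.linear.weight.uniform_(-bound, bound)

        def forward(self, x):
            return torch.sin(self.omega_0 * self.linear(x))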

If you want to reproduce all the experiments from the paper, set up a conda environment with all dependencies like so:

conda env create -f environment.yml
conda activate metasdf

3D Experiments

Dataset Preprocessing

Before training a model, you'll first need to preprocess the training meshes. Please follow the preprocessing steps used by DeepSDF if using ShapeNet.

Define an Experiment

Next, you'll need to define the model and hyperparameters for your experiment. Examples are given in 3D/curriculums.py, but feel free to make modifications. Although not present in the original paper, we've included some curriculums with positional encodings and smaller models. These generally perform on par with the original models but require much less memory.
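
As a rough illustration, a curriculum is essentially a named collection of model and training hyperparameters. The keys below are hypothetical placeholders; consult 3D/curriculums.py for the actual names and values.

    # Hypothetical curriculum entry -- the real keys are defined in 3D/curriculums.py.
    example_curriculum = {
        'output_dir': 'logs/example',        # where checkpoints and Tensorboard summaries go
        'train_split': 'splits/train.json',  # preprocessed training shapes
        'test_split': 'splits/test.json',    # shapes reconstructed by run_reconstruct.py
        'hidden_features': 256,              # width of the SDF network
        'num_hidden_layers': 8,
        'positional_encoding': False,        # the smaller PE variants set this to True
        'inner_lr': 1e-2,                    # step size of the meta-learning inner loop
        'num_inner_steps': 5,
        'batch_size': 16,
        'epochs': 100,
    }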

Train a Model

After you've preprocessed your data and have defined your curriculum, you're ready to start training! Navigate to the 3D/scripts directory and run

python run_train.py <curriculum name>

If training is interrupted, pass the --load flag to continue training from where you left off.

You should begin seeing printouts of loss, with a summary at every epoch. Checkpoints and Tensorboard summaries are saved to the 'output_dir' directory, as defined in your curriculum. We log raw loss, which is either the composite loss or L1 loss, depending on your experiment definition, as well as a 'Misclassified Percentage'. The 'Misclassified Percentage' is the percentage of samples that the model incorrectly classified as inside or outside the mesh.
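
For reference, the 'Misclassified Percentage' boils down to checking sign agreement between predicted and ground-truth SDF values at the sampled points. A small sketch of that idea (not the repo's exact code):

    import torch

    def misclassified_percentage(pred_sdf, gt_sdf):
        """Percentage of sample points whose predicted inside/outside sign
        disagrees with the ground-truth SDF sign."""
        wrong = (torch.sign(pred_sdf) != torch.sign(gt_sdf)).float()
        return 100.0 * wrong.mean().item()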

Reconstructing Meshes

After training a model, reconstruct some meshes using

python run_reconstruct.py <curriculum name> --checkpoint <checkpoint file name>

The script will use the 'test_split' as defined in the curriculum.

Evaluating Reconstructions

After reconstructing meshes, calculate Chamfer Distances between reconstructions and ground-truth meshes by running

python run_eval.py <reconstruction dir>
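
For reference, the symmetric Chamfer distance between two point clouds averages each point's distance to its nearest neighbor in the other cloud. A brute-force sketch of that definition (the evaluation script samples points from the meshes and may use a squared-distance convention):

    import torch

    def chamfer_distance(points_a, points_b):
        """Symmetric Chamfer distance between (N, 3) and (M, 3) point clouds."""
        dists = torch.cdist(points_a, points_b)  # (N, M) pairwise Euclidean distances
        return dists.min(dim=1).values.mean() + dists.min(dim=0).values.mean()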

Torchmeta

We're using the excellent torchmeta to implement hypernetworks.
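
As a minimal illustration of what torchmeta provides, a MetaModule can run its forward pass with an explicitly supplied set of parameters instead of its own - exactly what is needed when parameters come from a hypernetwork or an inner-loop update. The example below is a sketch under that assumption, not the repo's model.

    import torch
    from torchmeta.modules import MetaModule, MetaSequential, MetaLinear
    from torchmeta.utils.gradient_based import gradient_update_parameters

    class TinySDF(MetaModule):
        def __init__(self):
            super().__init__()
            self.net = MetaSequential(
                MetaLinear(3, 64), torch.nn.ReLU(),
                MetaLinear(64, 1))

        def forward(self, x, params=None):
            # Forward with externally supplied parameters (falls back to the module's own).
            return self.net(x, params=self.get_subdict(params, 'net'))

    model = TinySDF()
    coords, gt_sdf = torch.randn(128, 3), torch.randn(128, 1)
    loss = torch.nn.functional.l1_loss(model(coords), gt_sdf)
    # One gradient step that returns updated parameters without mutating the model:
    adapted = gradient_update_parameters(model, loss, step_size=1e-2)
    adapted_pred = model(coords, params=adapted)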

Citation

If you find our work useful in your research, please cite:

    @inproceedings{sitzmann2019metasdf,
        author    = {Sitzmann, Vincent and Chan, Eric R. and Tucker, Richard
                     and Snavely, Noah and Wetzstein, Gordon},
        title     = {MetaSDF: Meta-Learning Signed Distance Functions},
        booktitle = {Proc. NeurIPS},
        year      = {2020}
    }

Contact

If you have any questions, please feel free to email the authors.
