[NeurIPS 2021] Code for Learning Signal-Agnostic Manifolds of Neural Fields

Overview

Learning Signal-Agnostic Manifolds of Neural Fields

This is the uncleaned code for the paper Learning Signal-Agnostic Manifolds of Neural Fields. A cleaned version will be released shortly.

Downloading Data

Use the following link to download the models and data used in the paper, and extract the archive in the root directory. Please download the 3D shape dataset from here.
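If you prefer to unpack the archive from Python rather than the command line, a minimal sketch is below; the archive name gem_release.tar.gz is hypothetical, so substitute the actual name of the file you downloaded.

    import tarfile

    # Hypothetical archive name; replace with the file you actually downloaded.
    with tarfile.open("gem_release.tar.gz") as archive:
        archive.extractall(".")  # unpack into the repository root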

Demo

The underlying audiovisual manifold illustrated in the paper can be constructed with the following command:

python experiment_scripts/audiovisual_manifold_interpolate.py --experiment_name=audiovis_demo --checkpoint_path log_root/audiovis_demo/checkpoints/model_70000.pth
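Before running the demo, you can sanity-check the downloaded checkpoint with a few lines of PyTorch. This is a generic sketch that assumes only that the file is a standard torch-serialized object:

    import torch

    # Load on CPU so no GPU is needed just to inspect the file.
    ckpt = torch.load("log_root/audiovis_demo/checkpoints/model_70000.pth", map_location="cpu")
    if isinstance(ckpt, dict):
        print(list(ckpt.keys())[:10])  # e.g. top-level keys or parameter names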

Training Different Signal Manifolds

To train an image manifold, run:

python experiment_scripts/train_autodecoder_multiscale.py --experiment_name=celeba 

To train a 3D shape manifold, run:

python experiment_scripts/train_imnet_autodecoder.py --experiment_name=imnet 

To train an audio manifold, run:

python experiment_scripts/train_audio_autodecoder.py --experiment_name=audio 

To train an audiovisual manifold, run:

python experiment_scripts/train_audiovisual_autodecoder.py --experiment_name=audiovisual
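Based on the checkpoint path in the demo command above (log_root/<experiment_name>/checkpoints/model_<step>.pth), a small helper can locate the newest checkpoint a training run has produced. The layout is inferred from that single example, so adjust the sketch if your runs log elsewhere:

    from pathlib import Path

    def latest_checkpoint(experiment_name, log_root="log_root"):
        # Checkpoint layout inferred from the demo command above.
        ckpts = sorted(Path(log_root, experiment_name, "checkpoints").glob("model_*.pth"),
                       key=lambda p: int(p.stem.split("_")[1]))
        return ckpts[-1] if ckpts else None

    print(latest_checkpoint("celeba"))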

Citing our Paper

If you find our code useful for your research, please consider citing:

@inproceedings{du2021gem,
  title={Learning Signal-Agnostic Manifolds of Neural Fields},
  author={Du, Yilun and Collins, Katherine M. and Tenenbaum, Joshua B. and Sitzmann, Vincent},
  booktitle={Advances in Neural Information Processing Systems},
  year={2021}
}

Comments
  • Pretrained models

    Hi @yilundu,

    Congratulations on the very interesting work, and thanks for sharing the code. Would it be possible to get the pretrained models for the 3D shape manifold on ShapeNet?

    Many thanks in advance, Luca

    opened by lykius 7
  • Potential error in the code?

    Hi, thanks for the great work and for open-sourcing the code! However, when I looked into the code, there seemed to be an error (or maybe I didn't understand it correctly). This part seems to duplicate this part? I don't see any reason for running the model twice here, so maybe it's a mistake?

    I am also a bit confused about the code. I'm looking at the experiment where you train an IMNet for 3D shapes (train_imnet_autodecoder.py). Following the arguments you pass to the model, I think the forward pass will enter this if-branch, right? Under this branch, you are basically preparing things like the linear combination weights out_dict['weights'] and the distance between the data and the latent z, latents_dist, etc. for the LLE and Iso loss calculations. Is that correct?
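    For readers unfamiliar with the LLE loss mentioned above: classic locally linear embedding solves for weights that best reconstruct a point from its neighbors. The sketch below shows that generic computation (in the style of Roweis & Saul), not necessarily the exact formulation used in this repository.

        import torch

        def lle_weights(x, neighbors, reg=1e-3):
            # x: (d,) query point; neighbors: (k, d) nearest neighbors of x.
            z = neighbors - x                  # center the neighbors on the query
            gram = z @ z.t()                   # local Gram matrix, shape (k, k)
            gram = gram + reg * gram.trace() * torch.eye(len(neighbors))
            w = torch.linalg.solve(gram, torch.ones(len(neighbors)))
            return w / w.sum()                 # reconstruction weights, summing to one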

    One last thing: your model for 3D shapes uses SIREN instead of Fourier positional encoding (the one from NeRF), while your model for images uses Fourier features instead of SIREN. Is that because of the difference between 2D image space and 3D space? Did you find a large difference between the two modules, e.g. did Fourier encoding not work for the 3D shape experiment?
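    For context, the two input parameterizations compared above look roughly like this generic sketch (standard definitions from the SIREN and NeRF papers, not code from this repository):

        import math
        import torch

        class SineLayer(torch.nn.Module):
            # SIREN building block: a linear layer followed by sin(w0 * x).
            def __init__(self, in_features, out_features, w0=30.0):
                super().__init__()
                self.linear = torch.nn.Linear(in_features, out_features)
                self.w0 = w0

            def forward(self, x):
                return torch.sin(self.w0 * self.linear(x))

        def fourier_encode(coords, n_freqs=6):
            # NeRF-style positional encoding: sin/cos at geometrically spaced frequencies.
            freqs = (2.0 ** torch.arange(n_freqs)) * math.pi
            proj = coords[..., None] * freqs   # (..., d, n_freqs)
            return torch.cat([proj.sin(), proj.cos()], dim=-1).flatten(-2)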

    opened by Wuziyi616 3
  • Please fix the path in dataio.py

    Hi Yilun,

    Thanks for sharing the implementation of this awesome work! I found a path issue in dataio.py, lines 578-581:

        if split == "train":
            self.im_paths = sorted(glob('/data/vision/billf/scratch/yilundu/dataset/celebahq/celebahq_train/data128x128_small/*.jpg'))
        else:
            self.im_paths = sorted(glob('/data/vision/billf/scratch/yilundu/dataset/celebahq/celebahq_test/*.jpg'))
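    A possible local workaround until the paths are made configurable, assuming you have the CelebA-HQ images in your own directory (data_root below is a hypothetical argument, not part of the current code):

        import os
        from glob import glob

        # Hypothetical, configurable replacement for the hardcoded globs above.
        def celebahq_paths(data_root, split):
            sub = "celebahq_train/data128x128_small" if split == "train" else "celebahq_test"
            return sorted(glob(os.path.join(data_root, sub, "*.jpg")))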

    Could you please fix it and share the dataset link, so that we can work from the same dataset?

    Thanks for your help.

    opened by GH-HOME 1
  • Error in train_audio_autodecoder.py

    Hey, while running the code to train audio manifolds, I am getting the following error at line 81 of train_audio_autodecoder.py:

        train_generalization_dataset = dataio.AudioGeneralizationWrapper(train_audio_dataset, sampling=4096, do_pad=True)
        TypeError: __init__() got an unexpected keyword argument 'sampling'

    The AudioGeneralizationWrapper defined here doesn't have a sampling argument.
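    A quick way to confirm the mismatch locally is to inspect the wrapper's actual signature; this generic check assumes only that dataio is importable from the repository root:

        import inspect
        import dataio

        # Print which keyword arguments the installed AudioGeneralizationWrapper accepts.
        print(inspect.signature(dataio.AudioGeneralizationWrapper.__init__))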

    opened by Aaryan369 3