PyNIF3D

Overview
PyNIF3D is an open-source PyTorch-based library for research on neural implicit functions (NIF)-based 3D geometry representation. It aims to accelerate research by providing a modular design that allows for easy extension and combination of NIF-related components, as well as readily available paper implementations and dataset loaders.

As of August 2021, the following implementations are supported:

  • Convolutional Occupancy Networks (CON)
  • Neural Radiance Fields (NeRF)
  • Implicit Differentiable Renderer (IDR)
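
A minimal inference sketch of how one of these pipelines might be called, based on the call signature in examples/con/train.py quoted in the comments section below. The class name ConvolutionalOccupancyNetworks and its constructor arguments are assumptions, not the confirmed API:

import torch

# The module path pynif3d.pipeline.con appears in the issue traceback below;
# the class name used here is hypothetical.
from pynif3d.pipeline.con import ConvolutionalOccupancyNetworks

model = ConvolutionalOccupancyNetworks()

# (batch, num_points, xyz): an input point cloud and occupancy query points.
input_points = torch.rand(1, 3000, 3)
query_points = torch.rand(1, 2048, 3)

# examples/con/train.py runs: prediction = model(input_points, query_points)
prediction = model(input_points, query_points)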

Installation

To get started with PyNIF3D, either install it directly from this repository using pip or build the provided Dockerfile.

Local Installation

pip install --user "git+https://github.com/pfnet/pynif3d.git"

The following packages must also be installed to ensure that all PyNIF3D features function properly:

  • torch_scatter>=1.3.0
  • torchsearchsorted>=1.0

A script is provided to take care of these installation steps for you. Download it to a directory of your choice and run:

bash post_install.bash
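
Once the script finishes, a quick sanity check that both extensions are importable; a minimal sketch, assuming the packages expose the import names torch_scatter and torchsearchsorted:

import torch
import torch_scatter
from torchsearchsorted import searchsorted  # import check only

# Smoke test: scatter-add four ones into two buckets.
out = torch_scatter.scatter_add(torch.ones(4), torch.tensor([0, 0, 1, 1]))
print(out)  # tensor([2., 2.])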

Docker Build

Enabling CUDA Support

Please make sure the following dependencies are installed in order to build the Docker image with CUDA support:

  • nvidia-docker
  • nvidia-container-runtime

Then register the nvidia runtime by adding the following to /etc/docker/daemon.json:

{
    "runtimes": {
        "nvidia": {
            [...]
        }
    },
    "default-runtime": "nvidia"
}

Restart the Docker daemon:

sudo systemctl restart docker

You should now be able to build a Docker image with CUDA support.

Building Dockerfile

git clone https://github.com/pfnet/pynif3d.git
cd pynif3d && nvidia-docker build -t pynif3d .

Running the Container

nvidia-docker run -it pynif3d bash

Tutorials

Get started with PyNIF3D using the examples provided below:

  • NeRF Tutorial
  • CON Tutorial
  • IDR Tutorial

In addition to the tutorials, pretrained models are also provided and ready to be used. Please consult this page for more information.

License

PyNIF3D is released under the MIT license. Please refer to this document for more information.

Contributing

We welcome any new contributions to PyNIF3D. Please make sure to read the contributing guidelines before submitting a pull request.

Documentation

Learn more about PyNIF3D by reading the API documentation.

Comments
  • [Question] The default train-run of CON caused Out-Of-Memory

    (Not an urgent question.)

    I ran the training script in the CON example with the default args (= grid mode) on ShapeNet (downloaded with the occupancy_networks repo's script), using a 32GB GPU. However, it caused an OOM error. When setting -bs 24, it works (memory usage: 30622MiB / 32510MiB). Is this intended behavior?

    $ python -u examples/con/train.py -dd /mnt/nfs-mnj-hot-02/tmp/sosk/pynif3dcon/occupancy_networks/data/ShapeNet -sd saved_models_grid
    Traceback (most recent call last):
      File "examples/con/train.py", line 218, in <module>
        main()
      File "examples/con/train.py", line 214, in main
        train(dataset, model, optimizer, args)
      File "examples/con/train.py", line 103, in train
        prediction = model(input_points, query_points)
      File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/mnt/nfs-mnj-hot-02/tmp/sosk/pynif3dcon/pynif3d/pynif3d/pipeline/con.py", line 99, in forward
        features = self.feature_encoder(input_points)
      File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/mnt/nfs-mnj-hot-02/tmp/sosk/pynif3dcon/pynif3d/pynif3d/models/con/local_pool_pointnet.py", line 275, in forward
        input_points, c, feature_grid=grid_id
      File "/mnt/nfs-mnj-hot-02/tmp/sosk/pynif3dcon/pynif3d/pynif3d/models/con/local_pool_pointnet.py", line 191, in generate_coordinate_features
        fea_grid = self.feature_processing_fn(fea_grid)
      File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/mnt/nfs-mnj-hot-02/tmp/sosk/pynif3dcon/pynif3d/pynif3d/models/con/unet3d.py", line 289, in forward
        x = layer(encoders_features[idx + 1], x)
      File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/mnt/nfs-mnj-hot-02/tmp/sosk/pynif3dcon/pynif3d/pynif3d/models/con/unet3d.py", line 172, in forward
        x = self.layer(x)
      File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/mnt/nfs-mnj-hot-02/tmp/sosk/pynif3dcon/pynif3d/pynif3d/models/con/unet3d.py", line 82, in forward
        x = self.relu(self.convolution1(self.group_norm1(x)))
      File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/normalization.py", line 246, in forward
        input, self.num_groups, self.weight, self.bias, self.eps)
      File "/opt/conda/lib/python3.7/site-packages/torch/nn/functional.py", line 2112, in group_norm
        torch.backends.cudnn.enabled)
    RuntimeError: CUDA out of memory. Tried to allocate 3.00 GiB (GPU 0; 31.75 GiB total capacity; 27.60 GiB already allocated; 2.92 GiB free; 27.72 GiB reserved in total by PyTorch)
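
    For reference, one generic way to reduce peak memory while keeping the effective batch size is gradient accumulation. A minimal PyTorch sketch with dummy stand-ins, not code from examples/con/train.py:

    import torch
    from torch import nn

    # Dummy stand-ins so the sketch runs; substitute the CON pipeline and loader.
    model = nn.Linear(3, 1)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.MSELoss()
    loader = [(torch.rand(8, 3), torch.rand(8, 1)) for _ in range(8)]

    accum_steps = 4  # effective batch size = sub-batch size * accum_steps
    optimizer.zero_grad()
    for step, (inputs, targets) in enumerate(loader):
        loss = criterion(model(inputs), targets) / accum_steps
        loss.backward()                  # gradients accumulate across sub-batches
        if (step + 1) % accum_steps == 0:
            optimizer.step()             # one update per accum_steps sub-batches
            optimizer.zero_grad()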
    

    The environment (at mnj) is below (collected by running https://github.com/pytorch/pytorch/blob/master/torch/utils/collect_env.py):

    PyTorch version: 1.7.1
    Is debug build: False
    CUDA used to build PyTorch: 10.2
    ROCM used to build PyTorch: N/A
    
    OS: Ubuntu 18.04.5 LTS (x86_64)
    GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
    Clang version: Could not collect
    CMake version: version 3.10.2
    Libc version: glibc-2.10
    
    Python version: 3.7.4 (default, Aug 13 2019, 20:35:49)  [GCC 7.3.0] (64-bit runtime)
    Python platform: Linux-5.4.0-58-generic-x86_64-with-debian-buster-sid
    Is CUDA available: True
    CUDA runtime version: 10.2.89
    GPU models and configuration:
    GPU 0: Tesla V100-SXM2-32GB
    GPU 1: Tesla V100-SXM2-32GB
    
    Nvidia driver version: 460.91.03
    cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
    HIP runtime version: N/A
    MIOpen runtime version: N/A
    
    Versions of relevant libraries:
    [pip3] numpy==1.20.1
    [pip3] pytorch-pfn-extras==0.3.2
    [pip3] torch==1.7.1
    [pip3] torchtext==0.8.1
    [pip3] torchvision==0.8.2
    [conda] blas                      1.0                         mkl
    [conda] cudatoolkit               10.2.89              hfd86e86_1
    [conda] mkl                       2020.2                      256
    [conda] mkl-service               2.3.0            py37he8ac12f_0
    [conda] mkl_fft                   1.3.0            py37h54f3939_0
    [conda] mkl_random                1.1.1            py37h0573a6f_0
    [conda] numpy                     1.19.2           py37h54aff64_0
    [conda] numpy-base                1.19.2           py37hfa32c7d_0
    [conda] pytorch                   1.7.1           py3.7_cuda10.2.89_cudnn7.6.5_0    pytorch
    [conda] pytorch3d                 0.4.0           py37_cu102_pyt171    pytorch3d
    [conda] torchvision               0.8.2                py37_cu102    pytorch
    
    question high priority 
    opened by soskek 3
  • Add badge for readthedocs.org

    Add a badge for displaying the status of the API documentation build.

    Tasks to be completed

    • [ ] Update README.md

    Definition of Done: The badge correctly shows up on README.md.

    normal priority size-XS 
    opened by mihaimorariu 0
  • Add .readthedocs.yaml

    The API documentation successfully builds locally, but not when the project is imported into readthedocs.org.

    Tasks to be completed

    • [ ] Add .readthedocs.yaml

    Definition of Done: The documentation builds successfully.

    normal priority size-XS 
    opened by mihaimorariu 0
  • Remove post_install.bash

    The installation procedure currently requires running the post_install.bash script in order to install torchsearchsorted and torch_scatter. These dependencies should instead be declared in setup.py, allowing users to install PyNIF3D simply via pip install -e . The only reason the post-installation script exists is that PyNIF3D has not yet been tested with newer versions of the two dependencies.
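
    A sketch of what this could look like; a hypothetical setup.py fragment, assuming pip can resolve both packages (which, per the note above, is untested):

    from setuptools import find_packages, setup

    # Declare the two extensions as regular dependencies instead of
    # installing them in post_install.bash.
    setup(
        name="pynif3d",
        packages=find_packages(),
        install_requires=[
            "torch_scatter>=1.3.0",
            "torchsearchsorted>=1.0",
        ],
    )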

    Tasks to be completed

    • [ ] TODO

    Definition of Done: A clear and concise description of the conditions for marking the issue as completed.

    normal priority size-XS 
    opened by mihaimorariu 0
  • Add color jitter and on-the-fly loading to the DTU dataset loader (pixelNeRF)

    Implement the DTU dataset loader for the pixelNeRF paper.

    Tasks to be completed

    • [x] Implement the color jitter
    • [x] Implement the on-the-fly loading
    • [x] Review

    Definition of Done: All unit tests are passing.

    normal priority size-XS 
    opened by mihaimorariu 0
  • Add pipeline for PixelNeRF

    Integrate all the components of PixelNeRF into the pipeline.

    Tasks to be completed

    • [ ] Implement PixelNeRF pipeline
    • [ ] Add unit tests
    • [ ] Review

    Definition of Done: All unit tests are passing.

    feature normal priority size-M 
    opened by mihaimorariu 0
  • Pixel to camera conversion

    Add a helper function for pixel-to-camera conversion.
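
    For reference, the standard pinhole back-projection such a helper would implement: given intrinsics (fx, fy, cx, cy) and depth z, a pixel (u, v) maps to camera coordinates x = (u - cx) * z / fx, y = (v - cy) * z / fy. A generic sketch; the function name and signature are assumptions, not the final API:

    import torch

    def pixel_to_camera(uv, depth, fx, fy, cx, cy):
        # uv: (N, 2) pixel coordinates; depth: (N,) depth along the optical axis.
        x = (uv[:, 0] - cx) * depth / fx
        y = (uv[:, 1] - cy) * depth / fy
        return torch.stack([x, y, depth], dim=-1)  # (N, 3) camera-space points

    uv = torch.tensor([[320.0, 240.0]])
    print(pixel_to_camera(uv, torch.tensor([2.0]), fx=500.0, fy=500.0, cx=320.0, cy=240.0))
    # tensor([[0., 0., 2.]])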

    Tasks to be completed

    • [ ] Implement the helper function
    • [ ] Add unit tests
    • [ ] Review

    Definition of Done: All the unit tests are passing.

    feature normal priority size-XS 
    opened by mihaimorariu 0
  • Add pixelNeRF to the repository

    The pixelNeRF paper will be added to the repository: https://arxiv.org/abs/2012.02190

    Tasks to be completed

    • [x] Implement DTU dataset loader
    • [x] Implement the encoder
    • [x] Implement the NIF model
    • [x] Implement the renderer
    • [x] Implement the pipeline
    • [x] Implement the losses
    • [x] Write tutorial on how to use the code
    • [ ] Review

    Definition of Done

    • [x] The results are reproduced
    • [x] Training, evaluation scripts are provided
    • [x] Tutorial is provided
    feature normal priority size-L 
    opened by mihaimorariu 0
  • Support for multi-batch processing in torchsearchsorted

    The implementation of torchsearchsorted that is currently being used does not support multi-batch processing. A for loop is currently used in NeRF training to handle batch sizes larger than one, but it significantly slows down the training process. This needs to be fixed.

    Tasks to be completed

    • [ ] TODO

    Definition of Done: Training NeRF with batch size > 1 yields similar PSNR on the evaluation set after removing the for loop and replacing it with a multi-batch torchsearchsorted.
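
    For reference, torch.searchsorted (available since PyTorch 1.6) natively supports a batch dimension, which is one way the per-sample loop could be removed. A generic sketch, not the actual NeRF sampling code:

    import torch

    # Each row of `values` is searched within the corresponding row of `bins`,
    # so no Python-level loop over the batch is needed.
    bins = torch.sort(torch.rand(4, 64), dim=-1).values  # (batch, num_bins)
    values = torch.rand(4, 128)                          # (batch, num_queries)
    indices = torch.searchsorted(bins, values)           # (batch, num_queries)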

    feature low priority size-XS 
    opened by mihaimorariu 0
Releases
  • 0.1 (Aug 18, 2021)

    Initial version of PyNIF3D.

    Changelog:

    • Added a decoupled structure for NIF-based inference and training
      • Sampling functionalities (ray/pixel/feature)
      • NIF model rendering with generic chunking
      • Aggregation functionalities to generate the final pixel/occupancy values
    • Added dataset loaders:
      • LLFF
      • NeRF Blender
      • Deep Voxels
      • Shapes3D
      • DTU MVS
    • Added algorithm pipelines:
      • Convolutional Occupancy Networks (CON)
      • Neural Radiance Fields (NeRF)
      • Implicit Differentiable Renderer (IDR)
    • Added encoders:
      • Positional encoding (see the sketch at the end of this changelog)
      • Fourier encoding
    • Added pre-trained models
    • Added a function to generate rays given a camera matrix
    • Added generic layer generation with bias and weight initializers
    • Added detailed logging structure through decorators
      • If the logging level is set to DEBUG, function inputs/outputs can be logged, which is expected to reduce debugging time
    • Added explanatory exceptions and exception messages
    • Added tutorials and sample scripts
    • Added unit tests
    • Added linter
    • Added Sphinx configuration support
    • Added Dockerfile and pip installation support
    • Added comprehensible documentation to each function
    • Added CI support
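
    For reference, the positional encoding used by NeRF-style NIF models maps each input coordinate p to (sin(2^k * pi * p), cos(2^k * pi * p)) for k = 0, ..., L-1, lifting low-dimensional coordinates into a higher-frequency space. A generic sketch of the idea, not the exact PyNIF3D encoder API:

    import math

    import torch

    def positional_encoding(p, num_frequencies=10):
        # p: (..., dim) coordinates; returns (..., dim * 2 * num_frequencies).
        freqs = (2.0 ** torch.arange(num_frequencies)) * math.pi  # 2^k * pi
        angles = p.unsqueeze(-1) * freqs           # (..., dim, num_frequencies)
        encoded = torch.cat([angles.sin(), angles.cos()], dim=-1)
        return encoded.flatten(start_dim=-2)       # sin/cos features per dimension

    points = torch.rand(1024, 3) * 2 - 1           # 3D points in [-1, 1]
    features = positional_encoding(points)         # shape: (1024, 60)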
Owner
Preferred Networks, Inc.