Instant neural graphics primitives: lightning fast NeRF and more

Overview

Instant Neural Graphics Primitives

Ever wanted to train a NeRF model of a fox in under 5 seconds? Or fly around a scene captured from photos of a factory robot? Of course you have!

Here you will find an implementation of four neural graphics primitives: neural radiance fields (NeRF), signed distance functions (SDFs), neural images, and neural volumes. In each case, we train and render an MLP with a multiresolution hash input encoding using the tiny-cuda-nn framework.
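The heart of the multiresolution hash encoding is a spatial hash that maps each integer grid vertex to a slot in a fixed-size table. Below is a minimal Python sketch of that hash (using the per-dimension primes suggested in the paper); the real implementation lives in tiny-cuda-nn and runs fused on the GPU, so treat this purely as an illustration:

```python
# Sketch of the spatial hash behind the multiresolution hash encoding.
# PRIMES are the per-dimension constants from the paper; log2_T=19 matches
# the default NeRF config's hash table size of T=2^19.

PRIMES = (1, 2654435761, 805459861)  # one prime per input dimension

def hash_grid_index(vertex, log2_T=19):
    """Map an integer grid vertex, e.g. (x, y, z), to a hash table slot."""
    h = 0
    for coord, prime in zip(vertex, PRIMES):
        h ^= coord * prime          # XOR-combine the scaled coordinates
    return h % (1 << log2_T)        # wrap into the table of size 2^log2_T
```

At coarse resolutions the grid has fewer vertices than table entries and the mapping is injective; only the fine levels actually rely on hashing, with collisions resolved implicitly by the training gradients.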

Instant Neural Graphics Primitives with a Multiresolution Hash Encoding
Thomas Müller, Alex Evans, Christoph Schied, Alexander Keller
arXiv [cs.GR], Jan 2022
[ Project page ] [ Paper ] [ Video ]

For business inquiries, please visit our website and submit the form: NVIDIA Research Licensing

Requirements

  • Both Windows and Linux are supported.
  • An NVIDIA GPU; tensor cores increase performance when available. All shown results come from an RTX 3090.
  • CUDA v10.2 or higher, a C++14 capable compiler, and CMake v3.19 or higher.
  • (optional) Python 3.7 or higher for interactive bindings. Also, run pip install -r requirements.txt.
    • On some machines, pyexr refuses to install via pip. This can be resolved by installing OpenEXR from here.
  • (optional) OptiX 7.3 or higher for faster mesh SDF training. Set the environment variable OptiX_INSTALL_DIR to the installation directory if it is not discovered automatically.
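For example, if CMake does not find OptiX on its own, point the environment variable at your installation before configuring (the path below is hypothetical; use wherever you actually unpacked the SDK):

```shell
export OptiX_INSTALL_DIR="/usr/local/optix"
```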

If you are using Linux, install the following packages:

sudo apt-get install build-essential git python3-dev python3-pip libopenexr-dev libxi-dev \
                     libglfw3-dev libglew-dev libomp-dev libxinerama-dev libxcursor-dev

We also recommend installing CUDA and OptiX in /usr/local/ and adding the CUDA installation to your PATH. For example, if you have CUDA 11.4, add the following to your ~/.bashrc:

export PATH="/usr/local/cuda-11.4/bin:$PATH"
export LD_LIBRARY_PATH="/usr/local/cuda-11.4/lib64:$LD_LIBRARY_PATH"
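After reloading your shell, you can confirm that the intended toolkit is the one on your PATH before configuring the build:

```shell
nvcc --version   # should report the CUDA release you just added to PATH
```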

Compilation (Windows & Linux)

Begin by cloning this repository and all its submodules using the following command:

$ git clone --recursive https://github.com/nvlabs/instant-ngp
$ cd instant-ngp

Then, use CMake to build the project:

instant-ngp$ cmake . -B build
instant-ngp$ cmake --build build --config RelWithDebInfo -j 16

If the build succeeded, you can now run the code via the build/testbed executable or the scripts/run.py script described below.

If automatic GPU architecture detection fails (as can happen if you have multiple GPUs installed), set the TCNN_CUDA_ARCHITECTURES environment variable to match the GPU you would like to use: 86 for RTX 3000-series cards, 80 for A100 cards, and 75 for RTX 2000-series cards.
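For example, to target an RTX 3000-series card explicitly, you can set the variable for the configure step (values per the list above):

```shell
instant-ngp$ TCNN_CUDA_ARCHITECTURES=86 cmake . -B build
instant-ngp$ cmake --build build --config RelWithDebInfo -j 16
```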

Interactive training and rendering

This codebase comes with an interactive testbed that includes many features beyond our academic publication:

  • Additional training features, such as extrinsics and intrinsics optimization.
  • Marching cubes for NeRF->Mesh and SDF->Mesh conversion.
  • A spline-based camera path editor to create videos.
  • Debug visualizations of the activations of every neuron's inputs and outputs.
  • And many more task-specific settings.
  • See also our one minute demonstration video of the tool.

NeRF fox

One test scene is provided in this repository, using a small number of frames from a casually captured phone video:

instant-ngp$ ./build/testbed --scene data/nerf/fox

Alternatively, download any NeRF-compatible scene (e.g. from the NeRF authors' drive). Now you can run:

instant-ngp$ ./build/testbed --scene data/nerf_synthetic/lego

For more information about preparing datasets for use with our NeRF implementation, please see this document.

SDF armadillo

instant-ngp$ ./build/testbed --scene data/sdf/armadillo.obj

Image of Einstein

instant-ngp$ ./build/testbed --scene data/image/albert.exr

To reproduce the gigapixel results, download, for example, the Tokyo image and convert it to .bin using the scripts/image2bin.py script. This custom format improves compatibility and loading speed when resolution is high. Now you can run:

instant-ngp$ ./build/testbed --scene data/image/tokyo.bin

Volume Renderer

Download the nanovdb volume for the Disney cloud, which is derived from here (CC BY-SA 3.0).

instant-ngp$ ./build/testbed --mode volume --scene data/volume/wdas_cloud_quarter.nvdb

Python bindings

To conduct controlled experiments in an automated fashion, all features from the interactive testbed (and more!) have Python bindings that can be easily instrumented. For an example of how the ./build/testbed application can be implemented and extended from within Python, see ./scripts/run.py, which supports a superset of the command line arguments that ./build/testbed does.
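For example, the fox scene from above can be trained headlessly through the bindings (assuming, as stated, that run.py accepts the same --mode and --scene arguments as the testbed):

```shell
instant-ngp$ python scripts/run.py --mode nerf --scene data/nerf/fox
```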

Happy hacking!

Thanks

Many thanks to Jonathan Tremblay and Andrew Tao for testing early versions of this codebase and to Arman Toornias and Saurabh Jain for the factory robot dataset.

This project makes use of a number of awesome open source libraries, including:

  • tiny-cuda-nn for fast CUDA MLP networks
  • tinyexr for EXR format support
  • tinyobjloader for OBJ format support
  • stb_image for PNG and JPEG support
  • Dear ImGui, an excellent immediate-mode GUI library
  • Eigen, a C++ template library for linear algebra
  • pybind11 for seamless C++ / Python interop
  • and others! See the dependencies folder.

Many thanks to the authors of these brilliant projects!

License

Copyright © 2022, NVIDIA Corporation. All rights reserved.

This work is made available under the Nvidia Source Code License-NC. Click here to view a copy of this license.
