Open-source repository for the code accompanying the paper 'Non-Rigid Neural Radiance Fields: Reconstruction and Novel View Synthesis of a Deforming Scene from Monocular Video'.

Overview

Non-Rigid Neural Radiance Fields

This is the official repository for the project "Non-Rigid Neural Radiance Fields: Reconstruction and Novel View Synthesis of a Deforming Scene from Monocular Video" (NR-NeRF). We extend NeRF, a state-of-the-art method for photorealistic appearance and geometry reconstruction of a static scene, to deforming/non-rigid scenes. For details, we refer to the preprint and the project page, which also includes supplemental videos.

Pipeline figure

Getting Started

Installation

  • Clone this repository.
  • Set up the conda environment nrnerf (or install the requirements using pip):
conda env create -f environment.yml
  • (Optional) For data loading and camera parameter estimation, we have included a dummy implementation that only works on the included example sequence. If you do not want to write your own implementation as specified at the end of this README, you can instead use the following programs and files:
    • Install COLMAP.
    • From nerf-pytorch, use load_llff.py to replace the example version included in this repo.
      • In load_llff_data(), replace sc = 1. if bd_factor is None else 1./(bds.min() * bd_factor) with sc = 1./(bds.max() - bds.min())
    • From LLFF, copy from llff/poses/ the three files colmap_read_model.py, colmap_wrapper.py, and pose_utils.py directly into ./llff_preprocessing (replacing existing files).
      • In pose_utils.py fix the imports by:
        • Commenting out import skimage.transform,
        • Replacing from llff.poses.colmap_wrapper import run_colmap with from .colmap_wrapper import run_colmap,
        • Replacing import llff.poses.colmap_read_model as read_model with from . import colmap_read_model as read_model.
  • (Optional) An installation of FFMPEG enables automatic video generation from images and frame extraction from video input.
conda install -c conda-forge ffmpeg

Walkthrough With an Example Sequence

Having set up the environment, we now show an example that starts with a folder of just images and ends up with a fixed viewpoint re-rendering of the sequence. Please read the sections after this one for details on each step and how to adapt the pipeline to other sequences.

We first navigate into the parent folder (where train.py etc. lie) and activate the conda environment:

conda activate nrnerf

(Preprocess) We then determine the camera parameters:

python preprocess.py --input data/example_sequence/

(Training) Next, we train the model with the scene-specific config:

python train.py --config configs/example_sequence.txt

(Free Viewpoint Rendering) Finally, we synthesize a novel camera path:

python free_viewpoint_rendering.py --input experiments/experiment_1/ --deformations train --camera_path fixed --fixed_view 10

All results will be in the same folder, experiments/experiment_1/output/train_fixed_10/.

Overall, the input video (left) is re-rendered into a fixed novel view (right):

Novel view synthesis result on example sequence

Convenience Features

  • Works with video file input,
  • Script for lens distortion estimation and undistortion of input files,
  • Automatic multi-GPU support (torch.nn.DataParallel),
  • Automatically continues training if previous training detected,
  • Some modifications to lessen GPU memory requirements and to speed up loading at the start of training.

Practical Tips for Recording Scenes

As this is a research project, it is not sufficiently robust to work on arbitrary scenes. Here are some tips to consider when recording new scenes:

  • Sequences should have lengths of about 100-300 frames. More frames require longer training.
  • Avoid blur (e.g., motion blur or out-of-focus blur).
  • Keep camera settings like color temperature and focal length fixed.
  • Avoid lens distortions or estimate distortion parameters for undistortion.
  • Stick to front-facing camera paths that capture most of the scene in all images.
  • Use sufficient lighting and avoid changing it while recording.
  • Avoid self-shadowing.
  • Only record Lambertian surfaces, avoid view-dependent effects like specularities (view-dependent effects can be activated by setting use_viewdirs=True).
  • The background needs to be static and dominant enough for SfM to estimate extrinsics.
  • Limited scene size: Ensure that the background is not more than an order of magnitude further from the camera compared to the non-rigid foreground.

Using the Code

Preprocess

Determining Camera Parameters

Before we can train a network on a newly recorded sequence, we need to estimate its camera parameters (extrinsics and intrinsics).

The preprocessing code assumes the folder structure PARENT_FOLDER/images/IMAGE_NAME1.png. To determine the camera parameters for such a sequence, please run

python preprocess.py --input PARENT_FOLDER

The --output OUTPUT_FOLDER option lets you set a custom output folder; otherwise, PARENT_FOLDER is used by default.

(Optional) Lens Distortion Estimation and Image Undistortion

While not necessary for decent results with most camera lenses, the preprocessing code can estimate lens distortion from a checkerboard/chessboard sequence and then use the estimated distortion parameters to undistort input sequences recorded with the same camera.

First, record a checkerboard sequence and run the following command to estimate lens distortion parameters from it:

python preprocess.py --calibrate_lens_distortion --input PARENT_FOLDER --checkerboard_width WIDTH --checkerboard_height HEIGHT

The calibration code uses OpenCV. HEIGHT and WIDTH refer to the number of squares, not to physical lengths. The optional flags --visualize_detections and --undistort_calibration_images can help diagnose issues with the calibration process; see preprocess.py for details.

Then, in order to undistort an input sequence using the computed parameters, simply add --undistort_with_calibration_file PATH_TO_LENS_DISTORTION_JSON when preprocessing the sequence using preprocess.py as described under Determining Camera Parameters.
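
For intuition, the calibration and undistortion steps correspond to standard OpenCV routines. The following standalone sketch is not the repository's implementation; the file paths, board size, and output file name are made-up placeholders:

import glob
import json

import cv2
import numpy as np

# Number of inner corners the OpenCV detector expects per row/column (placeholder values).
CORNERS_W, CORNERS_H = 9, 6

# 3D reference points of the checkerboard corners in the board plane (z = 0).
object_points = np.zeros((CORNERS_W * CORNERS_H, 3), np.float32)
object_points[:, :2] = np.mgrid[0:CORNERS_W, 0:CORNERS_H].T.reshape(-1, 2)

obj_points, img_points = [], []
for image_file in sorted(glob.glob("checkerboard_sequence/images/*.png")):  # placeholder path
    gray = cv2.cvtColor(cv2.imread(image_file), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, (CORNERS_W, CORNERS_H))
    if found:
        obj_points.append(object_points)
        img_points.append(corners)

# Estimate intrinsics and distortion coefficients from all detections.
_, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None
)
with open("lens_distortion.json", "w") as f:  # placeholder output file
    json.dump({"camera_matrix": camera_matrix.tolist(), "dist_coeffs": dist_coeffs.tolist()}, f)

# Undistort one frame of an actual input sequence with the estimated parameters.
frame = cv2.imread("data/my_sequence/images/0000.png")  # placeholder path
cv2.imwrite("0000_undistorted.png", cv2.undistort(frame, camera_matrix, dist_coeffs))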

(Optional) Video Input

In addition to image files, the preprocessing code in preprocess.py also supports video input. Simply set --input to the video file.

This requires an installation of ffmpeg. The --ffmpeg_path PATH_TO_EXECUTABLE option lets you set a custom path to an ffmpeg executable.

The --fps 10 option can be used to modify the framerate at which images are extracted from the video. The default is 5.

Training

The config file default.txt needs to be modified as follows:

  • rootdir: An output folder that collects all experiments (i.e. multiple trainings)
  • datadir: Recorded input sequence. Set to PARENT_FOLDER from the Preprocess section above
  • expname: Name of this experiment. Output will be written to rootdir/expname/

Other relevant parameters are (see the example config sketch after this list):

  • offsets_loss_weight, divergence_loss_weight, rigidity_loss_weight: Weights for loss terms. Need to be tuned for each scene, see the preprint for details.
  • factor: Downsamples the input sequence by factor before training on it.
  • use_viewdirs: Set to True to activate view-dependent effects. Note that this slows down training by about 20% (approximate) or 35% (exact) on a V100 GPU.
  • approx_nonrigid_viewdirs: True uses a fast finite difference approximation of the view direction, False computes the exact direction.
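
Putting these together, a scene-specific config might look like the sketch below. It follows the key = value style of configs/example_sequence.txt; the loss weights shown are placeholders rather than tuned values, and the exact set of supported keys should be checked against default.txt:

expname = my_sequence
rootdir = ./experiments/
datadir = ./data/my_sequence/

factor = 2

offsets_loss_weight = 0.01
divergence_loss_weight = 0.1
rigidity_loss_weight = 1.0

use_viewdirs = True
approx_nonrigid_viewdirs = True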

Finally, start the training by running:

python train.py

A custom config file can optionally be passed via --config CONFIG_FILE.

The train_block_size and test_block_size options allow splitting the images into training and test blocks. The scheme is AAAAABBAAAAABBAAA for train_block_size=5 and test_block_size=2. Note that optimizing for the latent codes of test images slows down training by about 30% (relative to only using training images) due to an additional backward pass.
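
To illustrate the scheme (this is a standalone sketch, not the splitting code used by train.py):

def split_into_blocks(num_frames, train_block_size=5, test_block_size=2):
    # Alternate between a block of training frames and a block of test frames.
    train_indices, test_indices = [], []
    i = 0
    while i < num_frames:
        train_indices += list(range(i, min(i + train_block_size, num_frames)))
        i += train_block_size
        test_indices += list(range(i, min(i + test_block_size, num_frames)))
        i += test_block_size
    return train_indices, test_indices

# For 17 frames this reproduces the AAAAABBAAAAABBAAA pattern from above:
train, test = split_into_blocks(17)
print(train)  # [0, 1, 2, 3, 4, 7, 8, 9, 10, 11, 14, 15, 16]
print(test)   # [5, 6, 12, 13]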

If a previous version of the experiment exists, train.py will automatically continue training from it. To prevent that, pass the --no_reload flag.

Free Viewpoint Rendering

Once we've trained a network, we can render it into novel views.

The following arguments are mandatory:

  • input: Set to the folder of the trained network, i.e. rootdir/expname/
  • deformations: Set to the subset of the deformations/images that are to be used. Can be train, test, or all
  • camera_path: Possible camera paths are input_reconstruction, fixed, and spiral.

Then, we can synthesize novel views by running:

python free_viewpoint_rendering.py --input INPUT --deformations train --camera_path fixed

The fixed camera view uses the first input view by default. This can be set to another index (e.g. 5) with --fixed_view 5.

Furthermore, the forced background stabilization described in the preprint can be used by passing a threshold via the --forced_background_stabilization 0.01 option. The canonical model (without any ray bending applied) can be rendered by setting the --render_canonical flag. Finally, the framerate of the generated output videos can be set with --output_video_fps 5.
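
For example, to re-render the test deformations along a spiral path with forced background stabilization and a 10 fps output video (the threshold of 0.01 is only a placeholder and needs to be tuned per scene):

python free_viewpoint_rendering.py --input experiments/experiment_1/ --deformations test --camera_path spiral --forced_background_stabilization 0.01 --output_video_fps 10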

For automatic video generation, please install ffmpeg.

(Optional) Adaptive Spiral Camera Path

It is also possible to use a spiral camera path that adapts to the length of the video. If you do not want to implement such a path yourself, you can copy and modify the else branch of load_llff_data in load_llff.py. A recommended wrapper is _spiral_poses in free_viewpoint_rendering. Set N_views to num_poses, and we recommend multiplying rads by 0.5 in render_path_spiral, right before the for loop.
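
If you prefer to implement the path yourself, the following self-contained sketch shows one way to generate an adaptive spiral of camera-to-world poses, one pose per input frame. It is only an illustration under the coordinate conventions listed at the end of this README, not the repository's _spiral_poses wrapper; all radii and depths are placeholders:

import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def look_at(position, target, up=np.array([0.0, 1.0, 0.0])):
    # Camera-to-world matrix with x right, y up, and z pointing back (away from the target).
    backward = normalize(position - target)
    right = normalize(np.cross(up, backward))
    true_up = np.cross(backward, right)
    return np.stack([right, true_up, backward, position], axis=1)  # 3 x 4

def adaptive_spiral_poses(num_poses, radius=0.25, focus_depth=1.0, rotations=2):
    # One pose per input frame, so the path length adapts to the length of the video.
    target = np.array([0.0, 0.0, -focus_depth])  # point in front of the cameras to converge on
    thetas = np.linspace(0.0, 2.0 * np.pi * rotations, num_poses, endpoint=False)
    return np.stack([
        look_at(np.array([radius * np.cos(t),
                          -radius * np.sin(t),
                          0.1 * radius * np.sin(0.5 * t)]), target)
        for t in thetas
    ])  # num_poses x 3 x 4

render_poses = adaptive_spiral_poses(num_poses=100)  # e.g., the number of input frames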

Cite

When using this code, please cite our preprint Tretschk et al.: Non-Rigid Neural Radiance Fields as well as the following works on which it builds:

@misc{tretschk2020nonrigid,
      title={Non-Rigid Neural Radiance Fields: Reconstruction and Novel View Synthesis of a Deforming Scene from Monocular Video},
      author={Edgar Tretschk and Ayush Tewari and Vladislav Golyanik and Michael Zollhöfer and Christoph Lassner and Christian Theobalt},
      year={2020},
      eprint={2012.12247},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
@misc{lin2020nerfpytorch,
  title={NeRF-pytorch},
  author={Yen-Chen, Lin},
  howpublished={\url{https://github.com/yenchenlin/nerf-pytorch/}},
  year={2020}
}
@inproceedings{mildenhall2020nerf,
 title={NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis},
 author={Ben Mildenhall and Pratul P. Srinivasan and Matthew Tancik and Jonathan T. Barron and Ravi Ramamoorthi and Ren Ng},
 year={2020},
 booktitle={ECCV},
}

Specification of Missing Functions

load_llff_data from load_llff.py needs to return the following values:

  • images: a numpy array of shape N x H x W x 3 with RGB values scaled to lie between 0 and 1.
  • poses: a numpy array of shape N x 3 x 5, where poses[:,:,:3] are the camera extrinsic rotations, poses[:,:,3] are the camera extrinsic translations in world units, and poses[:,:,4] are the height, width, and focal length in pixels at every frame (the same for all N frames).
  • bds: a numpy array containing the depth values of the near and far planes in world units (only the minimum and maximum entries of bds matter).
  • render_poses: a numpy array of shape N x 3 x 4 with rotation and translation encoded as in poses.
  • i_test: an image index.

The first argument specifies the directory from which the images should be loaded, and the second argument specifies a downsampling factor that should be applied to the images. The remaining arguments can be ignored.
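
A minimal skeleton that matches this interface might look as follows. The pose, focal length, and bounds values are placeholders that a real implementation needs to fill in, and the downsampling factor is ignored for brevity:

import os

import imageio
import numpy as np

def load_llff_data(basedir, factor=8, *args, **kwargs):
    # N x H x W x 3 images with RGB values in [0, 1] (downsampling by `factor` omitted here).
    image_dir = os.path.join(basedir, "images")
    files = sorted(os.listdir(image_dir))
    images = np.stack([imageio.imread(os.path.join(image_dir, f))[..., :3] for f in files])
    images = images.astype(np.float32) / 255.0
    N, H, W = images.shape[:3]

    focal = 500.0  # placeholder: focal length in pixels
    rotations = np.repeat(np.eye(3)[None], N, axis=0)  # camera-to-world rotations, N x 3 x 3
    translations = np.zeros((N, 3, 1))                 # camera positions in world units, N x 3 x 1
    hwf = np.broadcast_to(np.array([H, W, focal], dtype=np.float32).reshape(1, 3, 1), (N, 3, 1))
    poses = np.concatenate([rotations, translations, hwf], axis=2)  # N x 3 x 5

    bds = np.array([0.1, 2.0])             # near/far depths in world units (only min/max matter)
    render_poses = poses[:, :, :4].copy()  # N x 3 x 4, same rotation/translation encoding as poses
    i_test = 0                             # index of a held-out image
    return images, poses, bds, render_poses, i_test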

gen_poses from llff_preprocessing/pose_utils.py should compute and store the camera parameters of the images given by the first argument in a format compatible with load_llff_data. The second argument can be ignored.

The camera extrinsic translation is in world space. The translations should be scaled such that the overall scene roughly lies in the unit cube. The camera extrinsic rotation is camera-to-world, R * c = w. The camera coordinate system has the x-axis pointing to the right, y up, and z back.
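
If your structure-from-motion tool outputs world-to-camera extrinsics in the OpenCV/COLMAP convention (x right, y down, z forward), a hedged sketch of converting a single pose into the convention above could look like this (function names are hypothetical):

import numpy as np

# Re-label the camera axes from (right, down, forward) to (right, up, back).
FLIP_YZ = np.diag([1.0, -1.0, -1.0])

def to_camera_to_world(R_world_to_cam, t_world_to_cam):
    # Invert the world-to-camera extrinsic [R | t]; the resulting rotation satisfies R * c = w.
    R_cam_to_world = R_world_to_cam.T @ FLIP_YZ
    camera_center = -R_world_to_cam.T @ t_world_to_cam  # camera position in world units
    return R_cam_to_world, camera_center

def unit_cube_scale(camera_centers):
    # Crude global scale so that the camera centers (and roughly the scene) fit the unit cube.
    return 1.0 / np.max(np.abs(camera_centers))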

License

This code builds on the PyTorch port by Yen-Chen Lin of the original NeRF code. Both are released under an MIT license. Several functions in run_nerf_helpers.py are modified versions from the FFJORD code, which is released under an MIT license. We thank all of them for releasing their code.

We release this code under an MIT license as well. You can find all licenses in the file LICENSE.

Comments
  • What's the render_poses meaning?

    Thanks for your great work! When I ran train.py using the example_sequence data, I found that the loaded data comes from precomputed.json, which you provide. I'd like to train on other datasets, so could you tell me how to compute precomputed.json? By the way, what is the meaning of render_poses in precomputed.json? And what is the difference between poses and render_poses? Thank you so much!

    opened by wtishere 3
  • fix divergence_exact's diagonal sum

    In run_nerf_helpers.py, L71 (divergence_loss), the original code that calculates the diagonal of the Jacobian matrix is wrong.

    Since the divergence_approx function is used by default, I'm guessing this error does not affect the default training behavior. It will, however, affect the behavior when setting exact=true for training.

    example:

    jac[0,...]=
            [[  1.5425,   8.1350,   5.7477],
            [ -1.1500, -14.2937,  -9.5665],
            [ -0.5321,  11.8239,   5.3827]]
    
    jac.view(jac.shape[0], -1)[0,...]=
             [  1.5425, 8.1350, 5.7477, -1.1500, -14.2937, -9.5665, -0.5321, 11.8239, 5.3827]
    
    # wrong way
    jac.view(jac.shape[0], -1)[0, :: jac.shape[1]]=
             [ 1.5425, -1.1500, -0.5321]
    
    # correct way
    jac.view(jac.shape[0], -1)[0, :: (jac.shape[1]+1)]=
             [  1.5425, -14.2937,   5.3827]
    
    opened by ventusff 3
  • Canonical Volume

    Hi. Thanks a lot for providing the code for this work! I'm a little confused about the canonical volume: is there only one canonical volume modeled for a sequence of images? NR-NeRF doesn't seem to be able to model large non-rigid motions, such as a scene where a person is dancing.

    opened by caiyongqi 2
  • Why there is an additional channel when N_importance > 0?

    Hi, thanks for your excellent work.

    What is the 5th channel used for when N_importance > 0?

    https://github.com/facebookresearch/nonrigid_nerf/blob/5187abfe9b9e30e45cb87087f94ff1e00fb4450c/train.py#L593

    opened by FreemanG 2
  • A typo in free_viewpoint_rendering.py

    File "free_viewpoint_rendering.py", line 98, in load_llff_dataset extras = _get_multi_view_helper_mappings(images.shape[0], datatdir) NameError: name 'datatdir' is not defined

    I guess the 'datatdir' should be datadir.

    opened by FreemanG 0
  • AssertionError: Torch not compiled with CUDA enabled

    I tried to run train.py. However, the following error is encountered: AssertionError: Torch not compiled with CUDA enabled. Can anyone help me? Thanks a lot.

    opened by cjj0000 1
  • COLMAP running error in remote server while running preprocess.py

    While running the preprocess script, which returns poses from images via SfM using COLMAP, I got the following error on a remote server. Can anyone please help me solve this?

    python preprocess.py --input data/example_sequence1/

    Need to run COLMAP
    qt.qpa.xcb: could not connect to display 
    qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.
    This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
    
    Available platform plugins are: eglfs, minimal, minimalegl, offscreen, vnc, webgl, xcb.
    
    *** Aborted at 1660905461 (unix time) try "date -d @1660905461" if you are using GNU date ***
    PC: @                0x0 (unknown)
    *** SIGABRT (@0x3e900138a9f) received by PID 1280671 (TID 0x7f5740d49000) from PID 1280671; stack trace: ***
        @     0x7f57463a2197 google::(anonymous namespace)::FailureSignalHandler()
        @     0x7f574421f420 (unknown)
        @     0x7f5743bf300b gsignal
        @     0x7f5743bd2859 abort
        @     0x7f57442be35b QMessageLogger::fatal()
        @     0x7f574477c799 QGuiApplicationPrivate::createPlatformIntegration()
        @     0x7f574477cb6f QGuiApplicationPrivate::createEventDispatcher()
        @     0x7f57443dbb62 QCoreApplicationPrivate::init()
        @     0x7f574477d1e1 QGuiApplicationPrivate::init()
        @     0x7f5744c03bc5 QApplicationPrivate::init()
        @     0x562bbb634975 colmap::RunFeatureExtractor()
        @     0x562bbb61d1a0 main
        @     0x7f5743bd4083 __libc_start_main
        @     0x562bbb620e39 (unknown)
    Traceback (most recent call last):
      File "imgs2poses.py", line 18, in <module>
        gen_poses(args.scenedir, args.match_type)
      File "/data1/user_data/ashish/NeRF/LLFF/llff/poses/pose_utils.py", line 268, in gen_poses
        run_colmap(basedir, match_type)
      File "/data1/user_data/ashish/NeRF/LLFF/llff/poses/colmap_wrapper.py", line 35, in run_colmap
        feat_output = ( subprocess.check_output(feature_extractor_args, universal_newlines=True) )
      File "/home/ashish/anaconda3/envs/nrnerf/lib/python3.6/subprocess.py", line 356, in check_output
        **kwargs).stdout
      File "/home/ashish/anaconda3/envs/nrnerf/lib/python3.6/subprocess.py", line 438, in run
        output=stdout, stderr=stderr)
    subprocess.CalledProcessError: Command '['colmap', 'feature_extractor', '--database_path', 'scenedir/database.db', '--image_path', 'scenedir/images', '--ImageReader.single_camera', '1']' died with <Signals.SIGABRT: 6>.
    
    opened by suneelkumarpentela 0
  • Missing precomputed.json output

    Hi @edgar-tr,

    Thanks a lot for providing the code for this work! I'd like to run it on sequential data that has been calibrated using the provided preprocessing tool.

    Unfortunately, the precomputed.json file that is required for the poses and everything else is not generated. Do you know what could be the problem? I somehow cannot find the code that produces that file in preprocess.py.

    Thank you very much!

    opened by weders 1
  • Question about data release

    Hi, excellent work and thank you for sharing your code! I am wondering whether you have any plans to release all the data used in your paper? It seems you have only released one example sequence. Do you plan to release the complete dataset? Thank you very much! Best

    opened by Jia-Wei-Liu 1
  • bounds and downsampling factor for load_llff_data_multi_view

    First of all, thank you for releasing your impactful work! I'm trying to train NRNeRF on multi-view data from 8 synchronized cameras with known intrinsics and extrinsics, and I ran into a couple questions regarding the bounds and the downsampling factor.

    1. Are the parameters min_bound and max_bound defined as the minimum and maximum across all cameras?

    I noticed that in the README.md, there is a single min_bound and max_bound that is shared between all cameras when specifying calibration.json, as opposed to there being one for each camera.

    2. When using load_llff_data_multi_view, if our training images are downsampled from their original resolution by a certain factor, are there any parts of the calibration.json (i.e. camera intrinsics / extrinsics) we have to accordingly adjust to account for the downsampling factor?

    I'm asking because downsampling images by a factor is not implemented in load_llff_data_multi_view, while load_llff_data appears to use factor in a couple of places (https://github.com/yenchenlin/nerf-pytorch/blob/a15fd7cb363e93f933012fd1f1ad5395302f63a4/load_llff.py#L76, https://github.com/yenchenlin/nerf-pytorch/blob/a15fd7cb363e93f933012fd1f1ad5395302f63a4/load_llff.py#L103).

    Thank you in advance for reading this long question. I look forward to reading your response.

    opened by andrewsonga 5