pixelNeRF: Neural Radiance Fields from One or Few Images

Overview

pixelNeRF: Neural Radiance Fields from One or Few Images

Alex Yu, Vickie Ye, Matthew Tancik, Angjoo Kanazawa
UC Berkeley

Teaser

arXiv: http://arxiv.org/abs/2012.02190

This is the official repository for our paper, pixelNeRF, pending final release. The two-object experiment is still missing, and several features may be added later.

Environment setup

To start, we recommend creating the environment using conda:

conda env create -f environment.yml
conda activate pixelnerf

Please make sure you have up-to-date NVIDIA drivers supporting CUDA 10.2 at least.

Alternatively, use pip install -r requirements.txt.
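
As a quick sanity check after activating the environment (not part of the repo), you can confirm that PyTorch sees your GPU:

# Minimal environment check -- not part of the repository.
import torch

print(torch.__version__)           # PyTorch version installed in the environment
print(torch.version.cuda)          # CUDA version PyTorch was built against
print(torch.cuda.is_available())   # should print True with working NVIDIA drivers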

Getting the data

While we could have used a common data format, we chose to keep DTU and ShapeNet (NMR) datasets in DVR's format and SRN data in the original SRN format. Our own two-object data is in NeRF's format. Data adapters are built into the code.

Running the model (video generation)

The main implementation is in the src/ directory, while evaluation scripts are in eval/.

First, download all pretrained weight files from https://drive.google.com/file/d/1UO_rL201guN6euoWkCOn-XpqR2e8o6ju/view?usp=sharing. Extract this to <project dir>/checkpoints/, so that <project dir>/checkpoints/dtu/pixel_nerf_latest exists.

ShapeNet Multiple Categories (NMR)

  1. Download NMR ShapeNet renderings (see Datasets section, 1st link)
  2. Run using
    • python eval/gen_video.py -n sn64 --gpu_id=<GPU(s)> --split test -P '2' -D <data dir>/NMR_Dataset -S 0
    • For unseen category generalization: python eval/gen_video.py -n sn64_unseen --gpu_id=<GPU(s)> --split test -P '2' -D <data dir>/NMR_Dataset -S 0

Replace <GPU(s)> with the desired GPU id(s), space separated for multiple. Replace -S 0 with -S <object id> to run on a different ShapeNet object id. Replace -P '2' with -P '<view>' to use a different input view. Replace --split test with --split train or --split val to use a different data split. Append -R=20000 if running out of memory.

The result will be at visuals/sn64/videot<object id>.mp4 or visuals/sn64_unseen/videot<object id>.mp4. The script will also print the path.

Pre-generated results for all ShapeNet objects with comparison may be found at https://www.ocf.berkeley.edu/~sxyu/ZG9yaWF0aA/pixelnerf/cross_v2/

ShapeNet Single-Category (SRN)

  1. Download the SRN car (or chair) dataset from the Google drive folder in the Datasets section. Extract to <srn data dir>/cars_<train | test | val>
  2. python eval/gen_video.py -n srn_car --gpu_id=<GPU(s)> --split test -P '64 104' -D <srn data dir>/cars -S 1

Use -P 64 for 1-view (view numbers are from SRN). The chair case is analogous (replace car with chair). Our models are trained with a random choice of 1 or 2 views per batch, which seems to degrade performance, especially in the 1-view case; it may be preferable to use a fixed number of views instead.

DTU

Make sure you have downloaded the pretrained weights above.

  1. Download the DTU dataset from the Google drive folder in the Datasets section. Extract to some directory, to get <data dir>/rs_dtu_4
  2. Run using python eval/gen_video.py -n dtu --gpu_id=<GPU(s)> --split val -P '22 25 28' -D <data dir>/rs_dtu_4 -S 3 --scale 0.25

Replace <GPU(s)> with the desired GPU id(s). Replace -S 3 with -S <scene id> to run on a different scene; this is not the DTU scene number but an index 0-14 within the val set. Remove --scale 0.25 to render at full resolution (quite slow).

The result will be at visuals/dtu/videov<scene id>.mp4. The script will also print the path.

Note that for DTU, we only use the train/val sets, where val is treated as the test set. This is due to the very small size of the dataset; the model overfits significantly to the train set during training.

Real Car Images

Note: requires PointRend from detectron2. Install detectron2 by following https://github.com/facebookresearch/detectron2/blob/master/INSTALL.md.

Make sure you have downloaded the pretrained weights above.

  1. Download any car image and place it in <project dir>/input. Some example images are shipped with the repo. The car should be fully visible.
  2. Run the preprocessor script: python scripts/preproc.py. This saves input/*_normalize.png. If the result is not reasonable, PointRend didn't work; please try another image.
  3. Run python eval/eval_real.py. Outputs will be in <project dir>/output

The Stanford Cars dataset contains many example car images: https://ai.stanford.edu/~jkrause/cars/car_dataset.html. Note that the normalization heuristic has been slightly modified compared to the paper, so there may be some minor differences. You can pass -e -20 to eval_real.py to set the elevation higher in the generated video.
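
For reference, the general idea behind the preprocessor's normalization is to crop a padded square around the PointRend foreground mask and resize it. The sketch below is only an illustration of that idea, not the exact heuristic in scripts/preproc.py:

# Illustrative sketch only -- not the exact heuristic in scripts/preproc.py.
import numpy as np
from PIL import Image

def normalize_crop(image, mask, out_size=128, pad_frac=0.1):
    """image: HxWx3 uint8 array; mask: HxW boolean foreground mask."""
    ys, xs = np.nonzero(mask)
    y0, y1, x0, x1 = ys.min(), ys.max(), xs.min(), xs.max()
    side = int(max(y1 - y0, x1 - x0) * (1.0 + 2.0 * pad_frac))
    cy, cx = (y0 + y1) // 2, (x0 + x1) // 2
    # Clamp the square crop window to the image bounds
    top, left = max(cy - side // 2, 0), max(cx - side // 2, 0)
    bottom = min(top + side, image.shape[0])
    right = min(left + side, image.shape[1])
    crop = Image.fromarray(image[top:bottom, left:right])
    return np.array(crop.resize((out_size, out_size), Image.LANCZOS))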

Overview of flags

Generally, all scripts in the project take the following flags

  • -n : experiment name, matching checkpoint directory name
  • -D : dataset directory. To save typing, you can set a default data directory for each expname in expconf.conf under datadir. For SRN/multi_obj datasets with separate directories e.g. path/cars_train, path/cars_val, put -D path/cars.
  • --split : dataset split
  • -S : scene or object id to render
  • --gpu_id : GPU id(s) to use, space delimited. All scripts except calc_metrics.py are parallelized. If not specified, uses GPU 0. Examples: --gpu_id=0 or --gpu_id='0 1 3'.
  • -R : Batch size of rendered rays per object. Default is 50000 (eval) and 128 (train); make it smaller if you run out of memory. On large-memory GPUs, you can set it to 100000 for eval.
  • -c : config file. Automatically inferred for the provided experiments from the expname; the flag is thus only required when working with your own expnames. You can associate a config file with any additional expnames in the config section of <project dir>/expconf.conf.

Please refer to the following table for a list of provided experiments with associated config and data files:

Name                       | expname (-n) | config (-c, automatic from expconf.conf) | Data file                               | data dir (-D)
ShapeNet category-agnostic | sn64         | conf/exp/sn64.conf                       | NMR_Dataset.zip (from AWS)              | path/NMR_Dataset
ShapeNet unseen category   | sn64_unseen  | conf/exp/sn64_unseen.conf                | NMR_Dataset.zip (from AWS) + genlist.py | path/NMR_Dataset
SRN chairs                 | srn_chair    | conf/exp/srn.conf                        | srn_chairs.zip                          | path/chairs
SRN cars                   | srn_car      | conf/exp/srn.conf                        | srn_cars.zip                            | path/cars
DTU                        | dtu          | conf/exp/dtu.conf                        | dtu_dataset.zip                         | path/rs_dtu_4
Two chairs                 | mult_obj     | conf/exp/mult_obj.conf                   | multi_chair_{train/val/test}.zip        | path

Quantitative evaluation instructions

All evaluation code is in the eval/ directory. The full, parallelized evaluation code is in eval/eval.py.

Approximate Evaluation

The full evaluation can be extremely slow (taking many days), especially for the SRN dataset. Therefore we also provide eval_approx.py for approximate evaluation.

  • Example: python eval/eval_approx.py -D <srn data dir>/cars -n srn_car

Add --seed <random seed> to try a different random seed.

Full Evaluation

Here we provide commands for full evaluation with eval/eval.py. After running this you should also use eval/calc_metrics.py, described in the section below, to obtain final metrics.

Append --gpu_id=<GPU(s)> to specify GPUs, for example --gpu_id=0 or --gpu_id='0 1 3'. It is highly recommended to use multiple GPUs if possible, to finish in a reasonable time. We use 4-10 GPUs per evaluation, as available. Resume capability is built in: simply run the command again to resume if the process is terminated.

In all cases, a source-view specification is required. This can be either -P or -L. -P 'view1 view2..' specifies a set of fixed input views. In contrast, -L should point to a viewlist file (viewlist/src_*.txt) which specifies views to use for each object.

Renderings and progress will be saved to the output directory, specified by -O <output dir>.

ShapeNet Multiple Categories (NMR)

  • Category-agnostic eval: python eval/eval.py -D <data dir>/NMR_Dataset -n sn64 -L viewlist/src_dvr.txt --multicat -O eval_out/sn64
  • Unseen-category eval: python eval/eval.py -D <data dir>/NMR_Dataset -n sn64_unseen -L viewlist/src_gen.txt --multicat -O eval_out/sn64_unseen

ShapeNet Single-Category (SRN)

  • SRN car 1-view eval: python eval/eval.py -D <srn data dir>/cars -n srn_car -P '64' -O eval_out/srn_car_1v
  • SRN car 2-view eval: python eval/eval.py -D <srn data dir>/cars -n srn_car -P '64 104' -O eval_out/srn_car_2v

The command for chair is analogous (replace car with chair). The input views 64, 104 are taken from SRN. Our method is by no means restricted to using such views.

DTU

  • 1-view: python eval/eval.py -D <data dir>/rs_dtu_4 --split val -n dtu -P '25' -O eval_out/dtu_1v
  • 3-view: python eval/eval.py -D <data dir>/rs_dtu_4 --split val -n dtu -P '22 25 28' -O eval_out/dtu_3v
  • 6-view: python eval/eval.py -D <data dir>/rs_dtu_4 --split val -n dtu -P '22 25 28 40 44 48' -O eval_out/dtu_6v
  • 9-view: python eval/eval.py -D <data dir>/rs_dtu_4 --split val -n dtu -P '22 25 28 40 44 48 0 8 13' -O eval_out/dtu_9v

In training, we always provide 3 views, so the improvement with more views is limited.

Final Metric Computation

The above computes PSNR and SSIM without quantization. The final metrics we report in the paper are computed from the rendered images saved to disk and also include LPIPS and a per-category breakdown. To obtain them, run eval/calc_metrics.py, as in the following examples:

  • NMR ShapeNet experiment: python eval/calc_metrics.py -D <data dir>/NMR_Dataset -O eval_out/sn64 -F dvr --list_name 'softras_test' --multicat --gpu_id=<GPU>
  • SRN car 2-view: python eval/calc_metrics.py -D <srn data dir>/cars -O eval_out/srn_car_2v -F srn --gpu_id=<GPU> (warning: untested after changes)
  • DTU: python eval/calc_metrics.py -D <data dir>/rs_dtu_4/DTU -O eval_out/dtu_3v -F dvr --list_name 'new_val' --exclude_dtu_bad --dtu_sort

Adjust -O according to the -O flag of the eval.py command. (Note: currently this script has an ugly standalone argument parser.) This should print a metric summary like the following:

psnr 26.799268696042386
ssim 0.9102204550379002
lpips 0.10784384977842876
WROTE eval_sn64/all_metrics.txt
airplane     psnr: 29.756697 ssim: 0.946906 lpips: 0.084329 n_inst: 809
bench        psnr: 26.351427 ssim: 0.911226 lpips: 0.116299 n_inst: 364
cabinet      psnr: 27.720198 ssim: 0.910426 lpips: 0.104584 n_inst: 315
car          psnr: 27.579590 ssim: 0.942079 lpips: 0.094841 n_inst: 1500
chair        psnr: 23.835303 ssim: 0.857738 lpips: 0.145518 n_inst: 1356
display      psnr: 24.217023 ssim: 0.867284 lpips: 0.129138 n_inst: 219
lamp         psnr: 28.579184 ssim: 0.912794 lpips: 0.113561 n_inst: 464
loudspeaker  psnr: 24.435302 ssim: 0.855195 lpips: 0.140653 n_inst: 324
rifle        psnr: 30.597488 ssim: 0.968040 lpips: 0.065629 n_inst: 475
sofa         psnr: 26.944224 ssim: 0.907861 lpips: 0.116114 n_inst: 635
table        psnr: 25.591960 ssim: 0.898314 lpips: 0.098103 n_inst: 1702
telephone    psnr: 27.128039 ssim: 0.921897 lpips: 0.097074 n_inst: 211
vessel       psnr: 29.180307 ssim: 0.938936 lpips: 0.110670 n_inst: 388
---
total        psnr: 26.799269 ssim: 0.910220 lpips: 0.107844
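
For reference, a minimal sketch of how PSNR/SSIM/LPIPS can be computed for a single image pair, using skimage and the lpips package (this is not eval/calc_metrics.py, and the LPIPS backbone choice here is an assumption):

# Per-image-pair metric sketch; assumes skimage >= 0.19 and the `lpips` package.
import numpy as np
import torch
import lpips
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

lpips_fn = lpips.LPIPS(net="vgg")  # backbone choice is an assumption

def image_metrics(pred, gt):
    """pred, gt: float arrays in [0, 1] of shape (H, W, 3)."""
    psnr = peak_signal_noise_ratio(gt, pred, data_range=1.0)
    ssim = structural_similarity(gt, pred, channel_axis=-1, data_range=1.0)
    # LPIPS expects NCHW tensors scaled to [-1, 1]
    to_tensor = lambda x: torch.from_numpy(x).permute(2, 0, 1)[None].float() * 2 - 1
    lp = lpips_fn(to_tensor(pred), to_tensor(gt)).item()
    return psnr, ssim, lp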

Training instructions

Training code is in train/ directory, specifically train/train.py.

  • Example for training to DTU: python train/train.py -n dtu_exp -c conf/exp/dtu.conf -D <data dir>/rs_dtu_4 -V 3 --gpu_id=<GPU(s)> --resume
  • Example for training to SRN cars, 1 view: python train/train.py -n srn_car_exp -c conf/exp/srn.conf -D <srn data dir>/cars --gpu_id=<GPU(s)> --resume
  • Example for training to ShapeNet multi-object, 2 view: python train/train.py -n multi_obj -c conf/exp/multi_obj.conf -D <multi_obj data dir> --gpu_id=<GPU(s)> --resume

Additional flags

  • --resume to resume from checkpoint, if available. Usually just pass this to be safe.
  • -V <number of views> to specify the number of input views to train with. Default is 1.
    • -V 'numbers separated by space' to use a random number of views per batch. This does not work so well in our experience, but we use it for the SRN experiment.
  • -B batch size of objects, default 4
  • --lr , --epochs
  • --no_bbox_step <step> to specify the iteration after which to stop using bounding-box sampling. Set to 0 to disable.
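
To illustrate what --no_bbox_step controls, here is a hedged sketch of bounding-box ray sampling (not the repository's implementation): for the first no_bbox_step iterations, pixels are sampled only inside the object's 2D bounding box, and uniformly over the whole image afterwards.

# Hedged illustration of bounding-box pixel sampling; not the repo's code.
import torch

def sample_pixels(H, W, n_rays, step, no_bbox_step, bbox=None):
    """bbox = (x0, y0, x1, y1) in pixels; returns integer (x, y) coords of shape (n_rays, 2)."""
    if bbox is not None and step < no_bbox_step:
        x0, y0, x1, y1 = bbox            # restrict sampling to the object's bounding box
    else:
        x0, y0, x1, y1 = 0, 0, W, H      # afterwards, sample over the whole image
    xs = torch.randint(x0, x1, (n_rays,))
    ys = torch.randint(y0, y1, (n_rays,))
    return torch.stack([xs, ys], dim=-1)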

If the checkpoint becomes corrupted for some reason (e.g. if the process crashes while saving), a backup is saved to checkpoints/<expname>/pixel_nerf_backup. To avoid having to specify -c and -D each time, edit <project dir>/expconf.conf and add rows for your expname in the config and datadir sections.
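
As a sketch of how to check which config and data directory an expname resolves to (assuming expconf.conf is standard HOCON, readable with pyhocon; key names follow the table above):

# Hedged sketch: inspect expconf.conf with pyhocon.
from pyhocon import ConfigFactory

expconf = ConfigFactory.parse_file("expconf.conf")
print(expconf.get_string("config.srn_car"))                      # e.g. conf/exp/srn.conf
print(expconf.get_string("datadir.srn_car", default="<unset>"))  # default data dir, if set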

Log files and visualizations

View logfiles with tensorboard --logdir <project dir>/logs/<expname>. Visualizations are written to <project dir>/visuals/<expname>/<epoch>_<batch>_vis.png. They are of the form:

  • Top coarse, bottom fine (1 row if fine sample disabled)
  • Left-to-right: input-views, depth, output, alpha.

BibTeX

@misc{yu2020pixelnerf,
      title={pixelNeRF: Neural Radiance Fields from One or Few Images},
      author={Alex Yu and Vickie Ye and Matthew Tancik and Angjoo Kanazawa},
      year={2020},
      eprint={2012.02190},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Acknowledgements

Parts of the code were based on kwea123's NeRF implementation: https://github.com/kwea123/nerf_pl. Some functions are borrowed from DVR (https://github.com/autonomousvision/differentiable_volumetric_rendering) and PIFu (https://github.com/shunsukesaito/PIFu).

Comments
  • Question about training category-specific

    Hello,

    Great work and thanks for releasing the code! I'm a bit confused about the training process, in the paper it is mentioned as "A single model is trained for each object class with 50 random views per object instance, randomly sampling either one or two of the training views to encode", I understand this as you firstly use 50 views per object instance to train the general categorical representation, and for the actual training scene, you use only 1-2 views? I think I didn't quite get this process, some further elaboration will be very helpful.

    Many thanks.

    opened by primecai 5
  • One question about cv2.decomposeProjectionMatrix

    Hi Alex,

    First of all, congrats on the great work!

    I was playing around with your code on DTU dataset a bit, and encountered one question, which might be a bit dumb:

    In this line, you use cv2.decomposeProjectionMatrix(P) to obtain K, R and t from the projection matrix P. However, when I tried to compose these components back via K @ np.concatenate((R, t[:3]/t[3]), axis=1), I could not obtain P back. After playing around with it a bit, I realized that the translation part is not supposed to be t[:3]/t[3] but rather -R @ (t[:3]/t[3]), with which we can get the projection matrix back. However, in your implementation you use t[:3]/t[3] everywhere else. I am wondering how this does not lead to issues, since the gen_rays function takes camera poses as input.
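
    For reference, the relation the comment describes can be checked directly: cv2.decomposeProjectionMatrix returns the homogeneous camera center c, so the extrinsic translation is -R @ c and K [R | -R c] recovers P. A quick standalone numpy check (not repository code; the intrinsics/pose values below are arbitrary examples):

    # Standalone check of P = K [R | t] vs. the camera center returned by OpenCV.
    import cv2
    import numpy as np

    K_true = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
    R_true, _ = cv2.Rodrigues(np.array([0.1, -0.2, 0.3]))
    t_true = np.array([[0.5], [-0.1], [2.0]])
    P = K_true @ np.hstack([R_true, t_true])

    K, R, c_h = cv2.decomposeProjectionMatrix(P)[:3]
    c = (c_h[:3] / c_h[3]).reshape(3, 1)               # homogeneous camera center -> 3D
    print(np.allclose(t_true, -R @ c))                 # True: translation is -R @ c
    print(np.allclose(P, K @ np.hstack([R, -R @ c])))  # True: P is recovered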

    Many thanks for your answers in advance!

    Best, Songyou

    opened by pengsongyou 4
  • Results for `srn64_unseen` is weird

    https://user-images.githubusercontent.com/49523965/104848234-5d5e7480-5927-11eb-8d8a-fb8af3829acf.mp4

    when run with: python eval/gen_video.py --name sn64_unseen -D /data/private/pixel_nerf_data/NMR_Dataset --gpu_id 0 --split test -P 2 -S 0

    opened by ThisIsIsaac 4
  • Error reading DTU dataset in train/train.py

    On running python train/train.py -n dtu_exp -c conf/exp/dtu.conf -D data/rs_dtu_4 -V 3 --gpu_id=0 --resume on the DTU dataset, I get the following error.

    EXPERIMENT NAME: dtu_exp
    CONTINUE? yes
    * Config file: conf/exp/dtu.conf
    * Dataset format: dvr_dtu
    * Dataset location: data/rs_dtu_4
    Loading DVR dataset data/rs_dtu_4 stage train 0 objs type: dtu
    Loading DVR dataset data/rs_dtu_4 stage val 0 objs type: dtu
    Loading DVR dataset data/rs_dtu_4 stage test 0 objs type: dtu
    dset z_near 0.1, z_far 5.0, lindisp False
    Using torchvision resnet34 encoder
    train dir data <data.data_util.ColorJitterDataset object at 0x1471c5566e80>
    Traceback (most recent call last):
      File "train/train.py", line 344, in <module>
        trainer = PixelNeRFTrainer()
      File "train/train.py", line 83, in __init__
        super().__init__(net, dset, val_dset, args, conf["train"], device=device)
      File "/ssd_scratch/cvit/avani.gupta/pixel-nerf/train/trainlib/trainer.py", line 17, in __init__
        self.train_data_loader = torch.utils.data.DataLoader(
      File "/home/avani.gupta/anaconda3/envs/pixelnerf/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 224, in __init__
        sampler = RandomSampler(dataset, generator=generator)
      File "/home/avani.gupta/anaconda3/envs/pixelnerf/lib/python3.8/site-packages/torch/utils/data/sampler.py", line 95, in __init__
        raise ValueError("num_samples should be a positive integer "
    ValueError: num_samples should be a positive integer value, but got num_samples=0

    Can you clarify the structure of the rs_dtu_4 folder? Mine is rs_dtu_4/DTU/, which has .lst files and scan folders. If this is correct, can you point out what is possibly causing this error?

    opened by avani17101 2
  • Question about coordinate system of SRNDataset, DVRDataset

    Hi, @sxyu, congrats on your great work in CVPR 2021!

    When I am reading your code, I notice that in the SRNDataset and DVRDataset, there is a coordinate transform step. Specifically, for DTU in DVRDataset, you do:

    if sub_format == "dtu":
        self._coord_trans_world = torch.tensor(
            [[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, 1]],
            dtype=torch.float32,
        )
        self._coord_trans_cam = torch.tensor(
            [[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, 1]],
            dtype=torch.float32,
        )
    
    pose = (
                    self._coord_trans_world
                    @ torch.tensor(pose, dtype=torch.float32)
                    @ self._coord_trans_cam
                )
    

    for NMR in DVRDataset, you do:

    else:
        self._coord_trans_world = torch.tensor(
            [[1, 0, 0, 0], [0, 0, -1, 0], [0, 1, 0, 0], [0, 0, 0, 1]],
            dtype=torch.float32,
        )
        self._coord_trans_cam = torch.tensor(
            [[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, 1]],
            dtype=torch.float32,
        )
    pose = (
                    self._coord_trans_world
                    @ torch.tensor(pose, dtype=torch.float32)
                    @ self._coord_trans_cam
                )
    

    However, for SRN dataset in SRNDataset, you do:

    self._coord_trans = torch.diag(
                torch.tensor([1, -1, -1, 1], dtype=torch.float32)
            )
    pose = pose @ self._coord_trans
    

    First, I understand that your camera coordinate system is x right, y up, z out (to my knowledge, this is the OpenGL style coordinate system). Then for DVR, they use the OpenCV system, where x right, y down, z in. Also, I have checked SRN, they use the OpenCV style camera coordinate system as well.

    Second, after seeing #2, I also understand the reason why there is a coordinate transform step. Because you want to utilize the Rt matrix, where the OpenCV coordinate system is used, to transform cam_unproj_map to world space, you have to first transform cam_unproj_map from OpenGL style into OpenCV style by multiplying by self._coord_trans_cam, then apply the old camera-to-world transform pose, and finally multiply by self._coord_trans_world to transform back to the OpenGL-style system. This is something that I can understand. (Also refer to the stackoverflow link.)

    However, here are some questions:

    1. Why do you do pose = pose @ self._coord_trans in SRNDataset, while self._coord_trans_world @ torch.tensor(pose, dtype=torch.float32) @ self._coord_trans_cam in DVRDataset?
    2. Why do you use different self._coord_trans_world and self._coord_trans_cam in DVRDataset?
    3. Is there any mistake in the above? Please point it out, thanks~

    Best, Xingyi Li

    opened by xingyi-li 1
  • Question about pixel feature

    Thank you for releasing the source code. While reading the code, I wondered where the target view is, because I don't see an RGB loss between the predicted target view and the ground-truth target view.

    opened by tau-yihouxiang 1
  • Tool for drawing architecture and video

    Dear @sxyu ,

    Maybe it is not directly related to pixelNeRF, but I am attracted by the figure and the video. What tools did you use to make them?

    Thanks,

    opened by tmquan 1
  • Question related to input 3D coordinates

    Hi guys, I have a quick question. In the DTU dataloader, after the transformation step, is the output pose in camera or world space (cam-to-world or world-to-cam)?

    I am also a bit confused here. It seems like the sampled 3D points are transformed to the camera space of the input views, and the xyz coordinates of these points are then fed to the MLP. I thought the input 3D coordinates to the MLP should be points in world space, just like in the NeRF paper. Can you clarify this?

    opened by phongnhhn92 1
  • Question about ResidualBlock

    https://github.com/sxyu/pixel-nerf/blob/ddddb6b8f9dd29972305597c2d8ce31ededa82eb/src/model/resnetfc.py#L55

    It seems like you are doing relu -> conv -> relu -> conv + residual for the residual block, which is different from the original one and more like the identity-mapping variant. Could you provide some insight into why (does the original resblock not work well)? Also, why no batchnorm?
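
    For context, the pattern being described is a pre-activation ("identity mapping") residual block, applied here to fully-connected layers. A hedged sketch of that pattern (not copied from resnetfc.py):

    # Hedged sketch of a pre-activation residual FC block; not the repo's exact module.
    import torch
    from torch import nn

    class PreActResBlockFC(nn.Module):
        def __init__(self, d_hidden):
            super().__init__()
            self.fc0 = nn.Linear(d_hidden, d_hidden)
            self.fc1 = nn.Linear(d_hidden, d_hidden)
            self.act = nn.ReLU(inplace=True)

        def forward(self, x):
            # activation -> linear -> activation -> linear, then the skip connection
            h = self.fc0(self.act(x))
            h = self.fc1(self.act(h))
            return x + h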

    opened by versatran01 1
  • Neural Volumes TV regularizer

    The following comment is unclear to me: https://github.com/sxyu/pixel-nerf/blob/ddddb6b8f9dd29972305597c2d8ce31ededa82eb/src/render/nerf.py#L228

    From my understanding of the code, I don't see the TV regularizer from the neural volumes paper around this comment. Is the regularizer used in pixelNeRF? Where is it implemented? Thanks!

    opened by sagniklp 1
  • AttributeError: module 'detectron2' has no attribute 'utils'

    when running the real example

    Traceback (most recent call last):
      File "scripts/preproc.py", line 58, in <module>
        from detectron2 import model_zoo
      File "/usr/local/lib/python3.6/dist-packages/detectron2/model_zoo/__init__.py", line 7, in <module>
        from .model_zoo import get, get_config_file, get_checkpoint_url
      File "/usr/local/lib/python3.6/dist-packages/detectron2/model_zoo/model_zoo.py", line 6, in <module>
        from detectron2.checkpoint import DetectionCheckpointer
      File "/usr/local/lib/python3.6/dist-packages/detectron2/checkpoint/__init__.py", line 7, in <module>
        from .detection_checkpoint import DetectionCheckpointer
      File "/usr/local/lib/python3.6/dist-packages/detectron2/checkpoint/detection_checkpoint.py", line 5, in <module>
        import detectron2.utils.comm as comm
    AttributeError: module 'detectron2' has no attribute 'utils'

    opened by ak9250 1
  • No such file or directory: 'eval_out/srn_car_1v/1079efee042629d4ce28f0f1b509eda/metrics.txt'

    Hello, author. First, I used python eval/eval.py -D <srn data dir>/cars -n srn_car -P '64' -O eval_out/srn_car_1v to evaluate on the SRN dataset. As you said, running this script takes a lot of time, so I stopped it partway through the evaluation, after which some files had appeared in the output folder. I then tried to run python eval/calc_metrics.py -D <srn data dir>/cars -O eval_out/srn_car_1v -F srn --gpu_id=<GPU>, and an error was reported: No such file or directory: 'eval_out/srn_car_1v/1079efee042629d4ce28f0f1b509eda/metrics.txt'. Indeed, metrics.txt does not exist in the eval_out folder. I want to know what the problem is?

    (Screenshots attached.)

    opened by WesternTrail 0
  • Running time increase of PixelNeRFNet forward()?

    During rendering, in composite() in nerf.py, the sample points are split into multiple batches, and PixelNeRFNet's forward() is then run multiple times.

    However, during tests, I found that the running time of PixelNeRFNet's forward() increases across the loop. For example, in the first ~10 iterations, forward() costs only about 0.0015 s, while in the following iterations the running time of forward() increases abruptly to about 0.18 s. This makes rendering very slow. How can we avoid this undesirable increase?

    Looking forward to your reply, thanks!

    opened by ty625911724 0
  • How do I evaluate multiple input images of the same car?

    Hi, what excellent work! I note that eval_real.py uses only one input image, and I am wondering how I can evaluate multiple input images of the same object, as your paper did. Looking forward to your reply, thanks!

    opened by zhengky6 0
  • Why use normalize_z?

    Thanks for your wonderful work. I have some doubts about the coordinate conversion in src/model/models.py: why do you only rotate, without translation, when transforming to the reference coordinate system? (Screenshots attached.)

    opened by Anqw 2
  • Bad image error when performing full eval on SRN cars.

    Hi,

    I encountered the bad image error when running the full evaluation on SRN cars: the instance 876d92ce6a0e4bf399588eee976baae consists of white images with no objects. I wonder how you processed it? Should I just skip it?

    Thx

    opened by pansanity666 1