# PyTorch NeRF and pixelNeRF
This repository contains minimal PyTorch implementations of the NeRF model described in "NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis" and the pixelNeRF model described in "pixelNeRF: Neural Radiance Fields from One or Few Images". While there are other PyTorch implementations out there (e.g., this one and this one for NeRF, and the authors' official implementation for pixelNeRF), I personally found them somewhat difficult to follow, so I decided to do a complete rewrite of NeRF myself. I tried to stay as close to the authors' text as possible, and I added comments in the code referring back to the relevant sections/equations in the paper. The final result is a tight 357 lines of heavily commented code (303 sloc—"source lines of code"—on GitHub) all contained in a single file. For comparison, this PyTorch implementation has approximately 970 sloc spread across several files, while this PyTorch implementation has approximately 905 sloc.
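As an example of how the comments map code back to the paper, below is a minimal sketch of the positional encoding from Eq. 4 of the NeRF paper (the function name and tensor shapes here are mine and won't necessarily match the repository's):

```python
import torch

def positional_encoding(x, num_freqs=10):
    # gamma(p) = (sin(2^0 * pi * p), cos(2^0 * pi * p), ...,
    #             sin(2^(L-1) * pi * p), cos(2^(L-1) * pi * p))  (Eq. 4).
    # The paper uses L = 10 for positions and L = 4 for viewing directions.
    freqs = (2.0 ** torch.arange(num_freqs)) * torch.pi  # 2^0 pi, ..., 2^(L-1) pi
    scaled = x[..., None] * freqs                        # (..., 3, L)
    enc = torch.cat([scaled.sin(), scaled.cos()], dim=-1)  # (..., 3, 2L)
    return enc.flatten(start_dim=-2)                     # (..., 6L)
```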
`run_tiny_nerf.py` trains a simplified NeRF model inspired by the "Tiny NeRF" example provided by the NeRF authors. This NeRF model does not use fine sampling and its MLP is smaller, but the code is otherwise identical to the full model code. At only 155 sloc, it might be a good place to start for people who are completely new to NeRF. If you prefer your code more object-oriented, check out `run_nerf_alt.py` and `run_tiny_nerf_alt.py`.
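The fine pass that the tiny model drops is NeRF's hierarchical sampling; the coarse pass that remains is the stratified sampling of Eq. 2 in the paper. A minimal sketch (names, shapes, and the near/far defaults are mine):

```python
import torch

def stratified_sample(rays_o, rays_d, near=2.0, far=6.0, num_samples=64):
    # Partition [near, far] into num_samples evenly spaced bins and draw
    # one uniform sample per bin (Eq. 2 in the NeRF paper).
    bins = torch.linspace(near, far, num_samples + 1)
    lower, upper = bins[:-1], bins[1:]
    t = lower + (upper - lower) * torch.rand(rays_o.shape[0], num_samples)
    # Points along each ray: o + t * d.
    points = rays_o[:, None, :] + t[..., None] * rays_d[:, None, :]
    return points, t  # (N, num_samples, 3) and (N, num_samples)
```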
A Colab notebook for the full model can be found here, while a notebook for the tiny model can be found here. The `generate_nerf_dataset.py` script was used to generate the training data of the ShapeNet car.
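The script isn't reproduced here, but the gist of generating such a dataset is rendering the object from camera poses scattered on a sphere around it. A hypothetical sketch of the pose construction (the conventions below are assumptions, not necessarily what the script uses):

```python
import numpy as np

def random_camera_pose(radius=2.0):
    # Hypothetical sketch: put the camera on a sphere around the object and
    # aim it at the origin with a standard look-at construction.
    theta = np.random.uniform(0.0, 2.0 * np.pi)        # azimuth
    phi = np.random.uniform(np.pi / 6.0, np.pi / 2.0)  # angle from the pole
    eye = radius * np.array([np.cos(theta) * np.sin(phi),
                             np.sin(theta) * np.sin(phi),
                             np.cos(phi)])
    forward = -eye / np.linalg.norm(eye)               # camera looks at the origin
    right = np.cross(forward, np.array([0.0, 0.0, 1.0]))
    right /= np.linalg.norm(right)
    up = np.cross(right, forward)
    pose = np.eye(4)                                   # camera-to-world matrix
    pose[:3, :3] = np.stack([right, up, -forward], axis=1)  # OpenGL-style axes
    pose[:3, 3] = eye
    return pose
```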
For the following test view:
`run_nerf.py` generated the following after 20,100 iterations (a few hours on a P100 GPU):
Loss: 0.00022201683896128088
while `run_tiny_nerf.py` generated the following after 19,600 iterations (~35 minutes on a P100 GPU):
Loss: 0.0004151524917688221
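Assuming the reported loss is mean squared error on RGB values in [0, 1] (as in the NeRF paper), it converts directly to PSNR:

```python
import math

def mse_to_psnr(mse):
    # For pixel values in [0, 1], PSNR = -10 * log10(MSE).
    return -10.0 * math.log10(mse)

print(mse_to_psnr(0.00022201683896128088))  # ~36.5 dB (full model)
print(mse_to_psnr(0.0004151524917688221))   # ~33.8 dB (tiny model)
```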
The advantages of streamlining NeRF's code become readily apparent when trying to extend NeRF. For example, training a pixelNeRF model only required making a few changes to `run_nerf.py`, bringing it to 370 sloc (notebook here). For comparison, the official pixelNeRF implementation has approximately 1,300 pixelNeRF-specific (i.e., not related to the image encoder or dataset) sloc spread across several files. The `generate_pixelnerf_dataset.py` script was used to generate the training data of ShapeNet cars.
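The essence of those changes is pixelNeRF's conditioning: each query point is projected into the source view, the image encoder's feature map is bilinearly sampled there, and the sampled feature is fed to the MLP alongside the point. A minimal sketch (the function name, shapes, and projection sign conventions are my assumptions):

```python
import torch
import torch.nn.functional as F

def sample_image_features(feature_map, points_cam, focal, center):
    # feature_map: (1, C, H, W) output of the image encoder.
    # points_cam:  (N, 3) query points in the source camera's frame.
    # Project each point into the source view (pinhole camera; the signs
    # here are assumptions) and bilinearly sample the feature map there.
    uv = -points_cam[:, :2] / points_cam[:, 2:3]              # perspective divide
    uv = uv * focal + center                                  # to pixel coordinates
    h, w = feature_map.shape[-2:]
    grid = 2.0 * uv / torch.tensor([w, h], dtype=uv.dtype) - 1.0  # to [-1, 1]
    grid = grid.view(1, -1, 1, 2)                             # (1, N, 1, 2), (x, y) order
    feats = F.grid_sample(feature_map, grid, align_corners=True)  # (1, C, N, 1)
    return feats.view(feature_map.shape[1], -1).t()           # (N, C)
```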
For the following source object and view:
and target view:
`run_pixelnerf.py` generated the following after 73,243 iterations (~12 hours on a P100 GPU; the full pixelNeRF model was trained for 400,000 iterations, which took six days):
Loss: 0.004468636587262154
The "smearing" is an artifact caused by the bounding box sampling method.
Similarly, the code for training an "object-centric NeRF" (i.e., one where the object is rotated instead of the camera) is identical to `run_tiny_nerf.py` (notebook here). Rotating an object is equivalent to holding the object stationary and rotating both the camera and the lighting in the opposite direction, which is how the object-centric dataset is generated in `generate_obj_nerf_dataset.py`.
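The equivalence itself is just a change of coordinates; a minimal sketch (names are mine):

```python
import numpy as np

def rotate_camera_instead(pose_c2w, R_obj):
    # Rotating the object by R_obj (camera and lights fixed) yields the same
    # image as leaving the object alone and applying the inverse rotation to
    # the camera-to-world pose (and likewise to the lights).
    T = np.eye(4)
    T[:3, :3] = R_obj.T  # the inverse of a rotation is its transpose
    return T @ pose_c2w
```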
For the following test view:
`run_tiny_obj_nerf.py` generated the following after 19,400 iterations (~35 minutes on a P100 GPU):
Loss: 0.0005469498573802412