Unofficial pytorch-lightning implementation of Mip-NeRF

Overview

mipnerf_pl

An unofficial pytorch-lightning implementation of Mip-NeRF. Here are some results generated by this repository (pre-trained models are provided below):

Multi-scale render result

Multi-scale train and multi-scale test (lego):
    PSNR: Full Res 34.412 | 1/2 Res 35.640 | 1/4 Res 36.074 | 1/8 Res 35.482 | Average (PyTorch) 35.402 | Average (Jax) 35.736
    SSIM: Full Res 0.9719 | 1/2 Res 0.9843 | 1/4 Res 0.9897 | 1/8 Res 0.9912 | Average (PyTorch) 0.9843 | Average (Jax) 0.9843
Single-scale train and single-scale test (lego):
    PSNR: Full Res 35.198
    SSIM: Full Res 0.985

In each column, the top image is the ground truth and the bottom image is the Mip-NeRF rendering at the corresponding resolution.

The above results come from models trained on the lego scene for 300k steps on the single-scale and multi-scale datasets respectively; the pre-trained models can be found here. Feel free to contribute more datasets.

Installation

We recommend using Anaconda to set up the environment. Run the following commands:

# Clone the repo
git clone https://github.com/hjxwhy/mipnerf_pl.git; cd mipnerf_pl
# Create a conda environment
conda create --name mipnerf python=3.9.12; conda activate mipnerf
# Prepare pip
conda install pip; pip install --upgrade pip
# Install PyTorch
pip3 install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu113
# Install requirements
pip install -r requirements.txt
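
To verify the installation, a quick sanity check (not part of the repository) is to confirm that the CUDA build of PyTorch can see the GPU:

# Optional check: should print a CUDA-enabled version string and True
import torch
print(torch.__version__)
print(torch.cuda.is_available())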

Dataset

Download the datasets from the NeRF official Google Drive and unzip nerf_synthetic.zip. You can generate the multi-scale dataset used in the paper with the following command:

# Generate all scenes
python datasets/convert_blender_data.py --blenderdir UZIP_DATA_DIR --outdir OUT_DATA_DIR
# If you only want to generate a single scene, you can:
python datasets/convert_blender_data.py --blenderdir UZIP_DATA_DIR --outdir OUT_DATA_DIR --object_name lego
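
Conceptually, the multi-scale conversion builds an image pyramid: each image is kept at full, 1/2, 1/4 and 1/8 resolution, and the focal length is scaled by the same factor so the camera intrinsics stay consistent. The snippet below is only an illustrative sketch (it is not the repository's convert_blender_data.py, and it assumes Pillow is installed):

from PIL import Image

def make_multiscale(image_path, focal):
    # Build a 4-level pyramid: full, 1/2, 1/4 and 1/8 resolution.
    img = Image.open(image_path)
    w, h = img.size
    levels = []
    for factor in (1, 2, 4, 8):
        scaled = img.resize((w // factor, h // factor), Image.LANCZOS)
        # Downsampling by `factor` also divides the focal length (in pixels) by `factor`.
        levels.append({"image": scaled, "focal": focal / factor})
    return levels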

Running

Train

To train a single-scale lego Mip-NeRF:

# You can specify the number of GPUs and the batch size at the end of the command,
# e.g. num_gpus 2 train.batch_size 4096 val.batch_size 8192.
# More parameters can be found in the configs/lego.yaml file.
python train.py --out_dir OUT_DIR --data_path UZIP_DATA_DIR --dataset_name blender exp_name EXP_NAME

To train a multi-scale lego Mip-NeRF:

python train.py --out_dir OUT_DIR --data_path OUT_DATA_DIR --dataset_name multi_blender exp_name EXP_NAME

Evaluation

You can evaluate both single-scale and multi-scale models by following eval.sh, changing all directories to your own. Alternatively, you can use the following commands for evaluation.

# eval single scale model
python eval.py --ckpt CKPT_PATH --out_dir OUT_DIR --scale 1 --save_image
# eval multi scale model
python eval.py --ckpt CKPT_PATH --out_dir OUT_DIR --scale 4 --save_image
# summarize the results again if you have already saved pnsr.txt and ssim.txt
python eval.py --ckpt CKPT_PATH --out_dir OUT_DIR --scale 4 --summa_only
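
For reference, the PSNR reported by the evaluation follows the usual definition in terms of the mean squared error; a minimal sketch (assuming images normalized to [0, 1]) is:

import torch

def psnr(pred, gt):
    # PSNR = -10 * log10(MSE) for images in [0, 1]; higher is better.
    mse = torch.mean((pred - gt) ** 2)
    return -10.0 * torch.log10(mse)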

Render Spherical Path Video

This repository also provides a script for rendering a spherical-path video:

# Render spherical video
python render_video.py --ckpt CKPT_PATH --out_dir OUT_DIR --scale 4
# generate video if you already have images
python render_video.py --gen_video_only --render_images_dir IMG_DIR_RENDER
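
If you already have the rendered frames and prefer to assemble the video yourself, a minimal sketch (assuming imageio with the ffmpeg backend, i.e. pip install imageio imageio-ffmpeg, and IMG_DIR_RENDER as above) is:

import glob
import imageio

# Read the rendered frames in order and write them out as an mp4 at 30 fps.
frames = [imageio.imread(p) for p in sorted(glob.glob("IMG_DIR_RENDER/*.png"))]
imageio.mimwrite("spheric_path.mp4", frames, fps=30)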

Visualize All Poses

The script, modified from nerfplusplus, supports visualizing all poses, which have been reorganized into the right-down-forward coordinate convention. The multi-scale dataset uses different camera focal lengths, which is equivalent to rendering at different resolutions.
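
As an illustration of that convention change (a sketch only, assuming the raw Blender poses use the common right-up-back / OpenGL convention), negating the camera y and z axes yields right-down-forward poses:

import numpy as np

def rub_to_rdf(c2w):
    # c2w: 4x4 camera-to-world matrix whose rotation columns are the camera
    # right, up and back axes; flipping y and z gives right-down-forward.
    flip = np.diag([1.0, -1.0, -1.0, 1.0])
    return c2w @ flip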

Citation

Kudos to the authors for their amazing results:

@misc{barron2021mipnerf,
      title={Mip-NeRF: A Multiscale Representation for Anti-Aliasing Neural Radiance Fields},
      author={Jonathan T. Barron and Ben Mildenhall and Matthew Tancik and Peter Hedman and Ricardo Martin-Brualla and Pratul P. Srinivasan},
      year={2021},
      eprint={2103.13415},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Acknowledgements

Thanks to mipnerf, mipnerf-pytorch, nerfplusplus, nerf_pl

Comments
  • Experiment Result

    Hey, and thanks for sharing your Mip-NeRF code. I am currently working on something based on your code. I wonder whether this code reproduces the experiments in the paper? Looking forward to your reply!

    opened by jamesgzl 13
  • Setting batch_type to 'single_image'

    Hi,

    I'm trying to train a model with batch_type set to 'single_image', but it seems this part hasn't been implemented so far. I tried setting shuffle in the dataloader to False so that each batch contains rays from only one image. However, it doesn't seem to work, as the novel views are just white images.

    Any clue what the right implementation for 'single_image' mode might be? I appreciate any input you might have.

    opened by joanncqn 7
  • OSError: /usr/local/lib/python3.7/dist-packages/torchtext/lib/libtorchtext.so: undefined symbol: _ZN3c1022getCustomClassTypeImplERKSt10type_index

    Hi, I am getting the following error during training, any idea? Thanks!

    OSError: /usr/local/lib/python3.7/dist-packages/torchtext/lib/libtorchtext.so: undefined symbol: _ZN3c1022getCustomClassTypeImplERKSt10type_index

    opened by Miriam2040 3
  • FileNotFoundError

    I am so sorry to disturb you. When I use save_image, I have a problem:

    0%| | 0/200 [00:02<?, ?it/s]
    Traceback (most recent call last):
      File "/root/miniconda3/lib/python3.8/runpy.py", line 194, in _run_module_as_main
        return _run_code(code, main_globals, None,
      File "/root/miniconda3/lib/python3.8/runpy.py", line 87, in _run_code
        exec(code, run_globals)
      File "/root/mipnerf_pl-dev/eval.py", line 90, in <module>
        blender_scenes = main(args)
      File "/root/mipnerf_pl-dev/eval.py", line 78, in main
        save_images(fine_rgb, distances, accs, out_path, n)
      File "/root/mipnerf_pl-dev/utils/vis.py", line 70, in save_images
        save_image_tensor(rgb, H, W, os.path.join(path, str('{:05d}'.format(idx)) + '_rgb' + '.png'))
      File "/root/mipnerf_pl-dev/utils/vis.py", line 56, in save_image_tensor
        torchvision.utils.save_image(image, save_path)
      File "/root/miniconda3/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
        return func(*args, **kwargs)
      File "/root/miniconda3/lib/python3.8/site-packages/torchvision/utils.py", line 156, in save_image
        im.save(fp, format=format)
      File "/root/miniconda3/lib/python3.8/site-packages/PIL/Image.py", line 2297, in save
        fp = builtins.open(filename, "w+b")
    FileNotFoundError: [Errno 2] No such file or directory: '/root/mipnerf_pl-dev/Viedos/000/test/lego/4/00000_rgb.png'

    I don't know how to solve the problem.

    opened by XiangFeng66 2
  • Value error

    When I train the model, I have some problems:

    Validation sanity check: 0it [00:00, ?it/s]
    /root/miniconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/data_loading.py:110: UserWarning: The dataloader, val_dataloader 0, does not have many workers which may be a bottleneck. Consider increasing the value of the num_workers argument (try 128 which is the number of cpus on this machine) in the DataLoader init to improve performance.
      rank_zero_warn(
    Traceback (most recent call last):
      File "train.py", line 68, in <module>
        main(parse_args(parser))
      File "train.py", line 64, in main
        trainer.fit(system, ckpt_path=hparams['checkpoint.resume_path'])
      File "/root/miniconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 737, in fit
        self._call_and_handle_interrupt(
      File "/root/miniconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 682, in _call_and_handle_interrupt
        return trainer_fn(*args, **kwargs)
      File "/root/miniconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 772, in _fit_impl
        self._run(model, ckpt_path=ckpt_path)
      File "/root/miniconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1195, in _run
        self._dispatch()
      File "/root/miniconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1274, in _dispatch
        self.training_type_plugin.start_training(self)
      File "/root/miniconda3/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 202, in start_training
        self._results = trainer.run_stage()
      File "/root/miniconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1284, in run_stage
        return self._run_train()
      File "/root/miniconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1314, in _run_train
        self.fit_loop.run()
      File "/root/miniconda3/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 140, in run
        self.on_run_start(*args, **kwargs)
      File "/root/miniconda3/lib/python3.8/site-packages/pytorch_lightning/loops/fit_loop.py", line 197, in on_run_start
        self.trainer.reset_train_val_dataloaders(self.trainer.lightning_module)
      File "/root/miniconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/data_loading.py", line 561, in reset_train_val_dataloaders
        self.reset_train_dataloader(model=model)
      File "/root/miniconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/data_loading.py", line 386, in reset_train_dataloader
        raise ValueError(
    ValueError: val_check_interval (10000) must be less than or equal to the number of the training batches (1303). If you want to disable validation, set limit_val_batches to 0.0 instead.

    What can I do to solve it?

    opened by XiangFeng66 2
  • A question about environment

    I followed the steps in the readme file, and when I did the last step (pip install -r requirements.txt), the system automatically reinstalled torch during the installation, even though there were already separate steps to install torch and torchvision ("pip3 install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu113" in the readme). Also, at the end, the system tried uninstalling torch-1.12.1 and cu113, and after uninstalling them it said that the torch version was incompatible, as shown in the attached screenshots.

    opened by ZYY-666 2
  • question about dataset

    Hello, thanks for your excellent work. I have noticed that the Jax version and your implementation of Mip-NeRF are only tested on the blender dataset and not on other datasets, like the llff dataset. Does this implementation only work for the blender dataset? Could you run some experiments on the llff dataset?

    opened by menglongyue 1
  • How do you understand the variance part of Formula 8 in the paper?

    sigma_t^2 and sigma_r^2 in formula 8 represent the decomposition of the vector. In the second coefficient, the identity matrix represents the vector itself, and the term that is subtracted represents the projection onto the axis direction (from the conical-frustum coordinate system to the world coordinate system), so the second term can only represent the variance perpendicular to the axis direction. So why do the coefficient of the first term and the subtracted coefficient of the second term have different forms? Should the first term also be divided by the square of d?

    opened by wqx854987945 0
  • Eval error: ValueError: too many values to unpack (expected 3)

    Traceback (most recent call last):                                                          
      File "/data/aug_code/mipnerf_pl/eval.py", line 90, in <module>                                     
        blender_scenes = main(args)                                                                         
      File "/data/aug_code/mipnerf_pl/eval.py", line 61, in main                                                                                                            
        _, (f_rgb, distance, acc) = model(batch_rays, False, args.white_bkgd)                                                                                                     
    ValueError: too many values to unpack (expected 3) 
    
    

    Line 62 of eval.py should be changed to: _, (f_rgb, distance, acc, _, _) = model(batch_rays, False, args.white_bkgd)

    opened by sjtuytc 3
  • Collaboration

    Hello, thanks for sharing this implementation, it is awesome.

    I am also implementing this and "Mip-NeRF 360" in my codebase. Would you like to work on it together?

    I had a look at your latest commit and actually this

    far_inv = 1 / far
    near_inv = 1 / near
    t_samples = far_inv * t_samples + (1 - t_samples) * near_inv
    t_samples = 1 / t_samples
    

    is equivalent to this: t_samples = 1. / (1. / near * (1. - t_samples) + 1. / far * t_samples)

    opened by theFilipko 8