Plenoxels: Radiance Fields without Neural Networks, Code release WIP


Alex Yu*, Sara Fridovich-Keil*, Matthew Tancik, Qinhong Chen, Benjamin Recht, Angjoo Kanazawa

UC Berkeley

Website and video: https://alexyu.net/plenoxels

arXiv: https://arxiv.org/abs/2112.05131

Note: This is a preliminary release. We have not carefully tested everything, but felt it would be better to put the code out there first.

Also, despite the name, it is not strictly intended to be a successor to svox.

Citation:

@misc{yu2021plenoxels,
      title={Plenoxels: Radiance Fields without Neural Networks}, 
      author={Alex Yu and Sara Fridovich-Keil and Matthew Tancik and Qinhong Chen and Benjamin Recht and Angjoo Kanazawa},
      year={2021},
      eprint={2112.05131},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

This contains the official optimization code. A JAX implementation is also available at https://github.com/sarafridov/plenoxels. Note, however, that the JAX version is currently feature-limited: it runs in about 1 hour per epoch and supports only bounded scenes at present.

Demo animations of the fast optimization and an overview are available on the project website.

Setup

First create the Python environment; we recommend using conda:

conda env create -f environment.yml
conda activate plenoxel

Then clone the repo and install the library at the root (svox2), which includes a CUDA extension.

If your CUDA toolkit is older than version 11, you will need to install CUB first: conda install -c bottler nvidiacub. Since CUDA 11, CUB is shipped with the toolkit.
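If you are unsure which toolkit you have, the major version can be read from the nvcc banner. A minimal sketch (the helper name is ours, not part of this repo):

```shell
# Hypothetical helper (not part of svox2): pull the CUDA major version out of
# the `nvcc --version` banner, e.g. "Cuda compilation tools, release 11.3, V11.3.58".
cuda_major_from_banner() {
    echo "$1" | sed -n 's/.*release \([0-9][0-9]*\)\..*/\1/p'
}

# Install CUB only when the toolkit predates CUDA 11 (CUDA 11+ bundles CUB).
banner=$(nvcc --version 2>/dev/null | grep release || true)
major=$(cuda_major_from_banner "$banner")
if [ -n "$major" ] && [ "$major" -lt 11 ]; then
    conda install -c bottler nvidiacub
fi
```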

To install the main library, simply run

pip install .

in the repo root directory.

Getting datasets

We have backends for NeRF-Blender, LLFF, NSVF, and CO3D dataset formats, and the dataset will be auto-detected. Please get the NeRF-synthetic and LLFF datasets from:

https://drive.google.com/drive/folders/128yBriW1IG_3NJ5Rp7APSTZsJqdJdfc1

We provide a processed Tanks and Temples dataset (with background) in NSVF format at: https://drive.google.com/file/d/1PD4oTP4F8jTtpjd_AQjCsL4h8iYFCyvO/view?usp=sharing

Note: this data should be identical to that used in NeRF++.

Voxel Optimization (aka Training)

For training a single scene, see opt/opt.py. The launch script makes this easier.

Inside opt/, run ./launch.sh <exp_name> <GPU_id> <data_dir> -c <config>

Where <config> should be configs/syn.json for NeRF-synthetic scenes, configs/llff.json for forward-facing scenes, and configs/tnt.json for Tanks and Temples scenes.

The dataset format will be auto-detected from data_dir. Checkpoints will be in ckpt/exp_name.

Evaluation

Use opt/render_imgs.py

Usage (in opt/): python render_imgs.py <CHECKPOINT.npz> <data_dir>

By default this saves all frames, which is very slow. Add --no_imsave to avoid this.
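The script reports PSNR over the test views. As a reminder, for images scaled to [0, 1] the PSNR in dB is -10 * log10(MSE); a minimal sketch of the standard formula (not code from this repo):

```python
import math

def psnr(mse: float, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB for a given mean squared error."""
    return 20.0 * math.log10(max_val) - 10.0 * math.log10(mse)

# an MSE of 0.001 between a rendered view and ground truth gives ~30 dB
```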

Rendering a spiral

Use opt/render_imgs_circle.py

Usage (in opt/): python render_imgs_circle.py <CHECKPOINT.npz> <data_dir>

Parallel task executor

We provide a parallel task executor based on the task manager from PlenOctrees to automatically schedule many tasks across sets of scenes or hyperparameters. This is used for evaluation, ablations, and hyperparameter tuning. See opt/autotune.py; configs are in opt/tasks/*.json.
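The scheduling idea can be sketched as follows: a pool of GPU ids feeds a set of worker threads, and each freed GPU immediately picks up the next pending task. This is an illustration only, not the actual autotune.py implementation; in practice run_fn would shell out to ./launch.sh:

```python
import queue
import threading

def run_tasks(tasks, gpu_ids, run_fn):
    """Run run_fn(task, gpu) for every task, with each task holding one GPU
    from the pool; the GPU returns to the pool as soon as its task finishes."""
    gpu_pool = queue.Queue()
    for g in gpu_ids:
        gpu_pool.put(g)
    results = {}
    lock = threading.Lock()

    def worker(task):
        gpu = gpu_pool.get()          # block until a GPU is free
        try:
            out = run_fn(task, gpu)   # e.g. subprocess call to ./launch.sh
            with lock:
                results[task] = out
        finally:
            gpu_pool.put(gpu)         # release the GPU for the next task

    threads = [threading.Thread(target=worker, args=(t,)) for t in tasks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```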

For example, to automatically train and evaluate all synthetic scenes, first change train_root and data_root in tasks/eval.json, then run:

python autotune.py -g '<space delimited GPU ids>' tasks/eval.json

For forward-facing scenes:

python autotune.py -g '<space delimited GPU ids>' tasks/eval_ff.json

For Tanks and Temples scenes:

python autotune.py -g '<space delimited GPU ids>' tasks/eval_tnt.json

Using a custom image set

First make sure you have COLMAP installed. Then run

(in opt/) bash scripts/proc_colmap.sh <img_dir>

Where <img_dir> should be a directory directly containing PNG/JPG images from a normal perspective camera. For custom datasets we adopt a data format similar to that of NSVF: https://github.com/facebookresearch/NSVF

You should be able to use this dataset directly afterwards. The format will be auto-detected.

To view the data, use: python scripts/view_data.py <img_dir>

This should launch a server at localhost:8889.

You may need to tune the TV (total variation) regularization weights. For forward-facing scenes, making the TV weights 10x higher often helps (configs/llff_hitv.json). For the real Lego scene I used the config configs/custom.json.
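Here TV is the total variation regularizer, which penalizes differences between neighboring voxel values so the optimized grid stays smooth. A 1-D illustration of one simple form of TV (the real loss runs over the 3-D voxel grid in CUDA):

```python
def tv_1d(values):
    """Total variation of a 1-D sequence: the sum of absolute differences
    between neighbors. A larger TV weight favors smoother (lower-TV) grids."""
    return sum(abs(b - a) for a, b in zip(values, values[1:]))

# a zig-zag sequence has a much higher TV than a smooth ramp over the same range
noisy, smooth = [0, 1, 0, 1, 0], [0, 0.25, 0.5, 0.75, 1.0]
assert tv_1d(noisy) > tv_1d(smooth)
```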

Random tip: how to make pip install faster for native extensions

You may notice that this CUDA extension takes a long time to install. We suggest using ninja. On Ubuntu, install it with sudo apt install ninja-build. Then set the environment variable MAX_JOBS to the number of CPUs to use in parallel (e.g. 12) in your shell startup script. This enables parallel compilation and significantly improves iteration speed.
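For example, in your ~/.bashrc (the value 12 is just an example; use your machine's core count):

```shell
# ninja must be installed first (e.g. `sudo apt install ninja-build`).
# MAX_JOBS is read by PyTorch's C++/CUDA extension build system and caps
# the number of parallel compiler jobs.
export MAX_JOBS=12

# afterwards, rebuild from the repo root:
#   pip install .
```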

Comments
  • Installation failed


    Hi,

    I got the following error when running pip install .:

    Building wheels for collected packages: svox2
      Building wheel for svox2 (setup.py) ... error
      ERROR: Command errored out with exit status 1:
       command: /home/anaconda3/envs/plenoxel/bin/python -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-1yhno3f6/setup.py'"'"'; __file__='"'"'/tmp/pip-req-build-1yhno3f6/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-eoq4wyid
           cwd: /tmp/pip-req-build-1yhno3f6/
      Complete output (176 lines):
      /home/anaconda3/envs/plenoxel/lib/python3.8/site-packages/setuptools/dist.py:717: UserWarning: Usage of dash-separated 'index-url' will not be supported in future versions. Please use the underscore name 'index_url' instead
        warnings.warn(
      /home/anaconda3/envs/plenoxel/lib/python3.8/site-packages/setuptools/dist.py:717: UserWarning: Usage of dash-separated 'index-url' will not be supported in future versions. Please use the underscore name 'index_url' instead
        warnings.warn(
      running bdist_wheel
      /home/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/utils/cpp_extension.py:381: UserWarning: Attempted to use ninja as the BuildExtension backend but we could not find ninja.. Falling back to using the slow distutils backend.
        warnings.warn(msg.format('we could not find ninja.'))
      running build
      running build_py
      package init file 'svox2/csrc/__init__.py' not found (or not a regular file)
      running build_ext
      building 'svox2.csrc' extension
      gcc -pthread -B /home/anaconda3/envs/plenoxel/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/tmp/pip-req-build-1yhno3f6/svox2/csrc/include -I/usr/local/cuda-11.3 -I/home/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include -I/home/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include/TH -I/home/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda-11.3/include -I/home/anaconda3/envs/plenoxel/include/python3.8 -c svox2/csrc/svox2.cpp -o build/temp.linux-x86_64-3.8/svox2/csrc/svox2.o -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=csrc -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
      cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
      /usr/local/cuda-11.3/bin/nvcc -I/tmp/pip-req-build-1yhno3f6/svox2/csrc/include -I/usr/local/cuda-11.3 -I/home/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include -I/home/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include/TH -I/home/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda-11.3/include -I/home/anaconda3/envs/plenoxel/include/python3.8 -c svox2/csrc/svox2_kernel.cu -o build/temp.linux-x86_64-3.8/svox2/csrc/svox2_kernel.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options '-fPIC' -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=csrc -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_70,code=compute_70 -gencode=arch=compute_70,code=sm_70 -std=c++14
      /home/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h: In instantiation of ‘std::shared_ptr<torch::nn::Module> torch::nn::Cloneable<Derived>::clone(const c10::optional<c10::Device>&) const [with Derived = torch::nn::CrossMapLRN2dImpl]’:
      /tmp/tmpxft_00002579_00000000-6_svox2_kernel.cudafe1.stub.c:4:27:   required from here
      /home/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:57:59: error: invalid static_cast from type ‘const torch::OrderedDict<std::basic_string<char>, at::Tensor>’ to type ‘torch::OrderedDict<std::basic_string<char>, at::Tensor>&’
      /home/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:69:61: error: invalid static_cast from type ‘const torch::OrderedDict<std::basic_string<char>, std::shared_ptr<torch::nn::Module> >’ to type ‘torch::OrderedDict<std::basic_string<char>, std::shared_ptr<torch::nn::Module> >&’
      /home/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h: In instantiation of ‘std::shared_ptr<torch::nn::Module> torch::nn::Cloneable<Derived>::clone(const c10::optional<c10::Device>&) const [with Derived = torch::nn::EmbeddingBagImpl]’:
      /tmp/tmpxft_00002579_00000000-6_svox2_kernel.cudafe1.stub.c:4:27:   required from here
      /home/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:57:59: error: invalid static_cast from type ‘const torch::OrderedDict<std::basic_string<char>, at::Tensor>’ to type ‘torch::OrderedDict<std::basic_string<char>, at::Tensor>&’
      /home/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:69:61: error: invalid static_cast from type ‘const torch::OrderedDict<std::basic_string<char>, std::shared_ptr<torch::nn::Module> >’ to type ‘torch::OrderedDict<std::basic_string<char>, std::shared_ptr<torch::nn::Module> >&’
      /home/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h: In instantiation of ‘std::shared_ptr<torch::nn::Module> torch::nn::Cloneable<Derived>::clone(const c10::optional<c10::Device>&) const [with Derived = torch::nn::EmbeddingImpl]’:
      /tmp/tmpxft_00002579_00000000-6_svox2_kernel.cudafe1.stub.c:4:27:   required from here
      ......
      /home/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:57:59: error: invalid static_cast from type ‘const torch::OrderedDict<std::basic_string<char>, at::Tensor>’ to type ‘torch::OrderedDict<std::basic_string<char>, at::Tensor>&’
      /home/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:69:61: error: invalid static_cast from type ‘const torch::OrderedDict<std::basic_string<char>, std::shared_ptr<torch::nn::Module> >’ to type ‘torch::OrderedDict<std::basic_string<char>, std::shared_ptr<torch::nn::Module> >&’
      /home/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h: In instantiation of ‘std::shared_ptr<torch::nn::Module> torch::nn::Cloneable<Derived>::clone(const c10::optional<c10::Device>&) const [with Derived = torch::nn::UnflattenImpl]’:
      /home/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/optim/sgd.h:49:48:   required from here
      /home/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:57:59: error: invalid static_cast from type ‘const torch::OrderedDict<std::basic_string<char>, at::Tensor>’ to type ‘torch::OrderedDict<std::basic_string<char>, at::Tensor>&’
      /home/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:69:61: error: invalid static_cast from type ‘const torch::OrderedDict<std::basic_string<char>, std::shared_ptr<torch::nn::Module> >’ to type ‘torch::OrderedDict<std::basic_string<char>, std::shared_ptr<torch::nn::Module> >&’
      /home/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h: In instantiation of ‘std::shared_ptr<torch::nn::Module> torch::nn::Cloneable<Derived>::clone(const c10::optional<c10::Device>&) const [with Derived = torch::nn::LinearImpl]’:
      /home/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/optim/sgd.h:49:48:   required from here
      /home/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:57:59: error: invalid static_cast from type ‘const torch::OrderedDict<std::basic_string<char>, at::Tensor>’ to type ‘torch::OrderedDict<std::basic_string<char>, at::Tensor>&’
      /home/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:69:61: error: invalid static_cast from type ‘const torch::OrderedDict<std::basic_string<char>, std::shared_ptr<torch::nn::Module> >’ to type ‘torch::OrderedDict<std::basic_string<char>, std::shared_ptr<torch::nn::Module> >&’
      error: command '/usr/local/cuda-11.3/bin/nvcc' failed with exit status 1
      ----------------------------------------
      ERROR: Failed building wheel for svox2
    

    The error information then repeated itself after a line of Running setup.py install for svox2 ... error. I removed some of the error log in the middle for simplicity, since it basically just repeats error: invalid static_cast from type ‘const torch::OrderedDict<std::basic_string<char>, at::Tensor>’ to type ‘torch::OrderedDict<std::basic_string<char>, at::Tensor>&’

    My system is Centos 7. CUDA version is 11.3. cuDNN version is 8.2.0. Driver Version is 465.19.01. GPU is a Tesla V100. The conda environment is created using the environment.yml.

    Any help would be much appreciated.

    Thank you.

    opened by ParusMajor60 5
  • install failed, please help


    (plenoxel) root@6ba5ad9d785b:~/svox2# pip install .
    Processing /root/svox2
      DEPRECATION: A future pip version will change local packages to be built in-place without first copying to a temporary directory. We recommend you use --use-feature=in-tree-build to test your packages with this new behavior before it becomes the default. pip 21.3 will remove support for this functionality. You can find discussion regarding this at https://github.com/pypa/pip/issues/7555.
    Building wheels for collected packages: svox2
      Building wheel for svox2 (setup.py) ... error
      ERROR: Command errored out with exit status 1:
       command: /root/miniconda/envs/plenoxel/bin/python -u -c '...' bdist_wheel -d /tmp/pip-wheel-mrnyst1c
           cwd: /tmp/pip-req-build-1y564l6b/
      Complete output (65 lines):
      No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda'
      running bdist_wheel
      running build
      running build_py
      creating build/lib.linux-x86_64-3.8/svox2
      copying svox2/__init__.py -> build/lib.linux-x86_64-3.8/svox2
      copying svox2/defs.py -> build/lib.linux-x86_64-3.8/svox2
      copying svox2/svox2.py -> build/lib.linux-x86_64-3.8/svox2
      copying svox2/utils.py -> build/lib.linux-x86_64-3.8/svox2
      copying svox2/version.py -> build/lib.linux-x86_64-3.8/svox2
      package init file 'svox2/csrc/__init__.py' not found (or not a regular file)
      running build_ext
      /root/miniconda/envs/plenoxel/lib/python3.8/site-packages/torch/utils/cpp_extension.py:782: UserWarning: The detected CUDA version (11.4) has a minor version mismatch with the version that was used to compile PyTorch (11.3). Most likely this shouldn't be a problem.
        warnings.warn(CUDA_MISMATCH_WARN.format(cuda_str_version, torch.version.cuda))
      building 'svox2.csrc' extension
      creating /tmp/pip-req-build-1y564l6b/build/temp.linux-x86_64-3.8/svox2/csrc
      Traceback (most recent call last):
        File "<string>", line 1, in <module>
        File "/tmp/pip-req-build-1y564l6b/setup.py", line 55, in <module>
          setup(
        ...
        File "/root/miniconda/envs/plenoxel/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 551, in unix_wrap_ninja_compile
          cuda_post_cflags = unix_cuda_flags(cuda_post_cflags)
        File "/root/miniconda/envs/plenoxel/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 450, in unix_cuda_flags
          cflags + _get_cuda_arch_flags(cflags))
        File "/root/miniconda/envs/plenoxel/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1606, in _get_cuda_arch_flags
          arch_list[-1] += '+PTX'
      IndexError: list index out of range
      ----------------------------------------
      ERROR: Failed building wheel for svox2
    Failed to build svox2
    Installing collected packages: svox2
      Running setup.py install for svox2 ... error

    The same IndexError traceback then repeats during the setup.py install fallback. Check the logs for full command output.

    opened by weiforce 5
  • Study of voxel opacity? (density)


    The paper focuses on visual reconstruction metrics... has there been any study of depth / voxel opacity yet? (e.g. on Tanks and Temples?) I could be wrong, but it looks like there is not yet any affordance for returning / extracting / debugging opacity from the CUDA impl:

    https://github.com/sxyu/svox2/blob/ad1b4a816f7c2a6875880200e708f58f67707e5f/svox2/csrc/render_lerp_kernel_cuvol.cu#L74

    So perhaps there has been no such study yet.

    Dumb question: do the pytorch checkpoints still work if one "trains" using the CUDA kernels but "tests" using the pytorch _volume_render_gradcheck* code paths? (and how synchronized are the pytorch codepaths with the CUDA kernels)?

    Thanks for the amazing paper and for releasing the results so quickly!

    opened by pwais 5
  • LLFF scene visualization


    How to visualize the LLFF scene (forward-facing scene) like the other nerf-like visualization?

    I tried to modify the scale factor, render radius, and other factors similarly to the NeRF code, but it did not work well. Specifically, I suspect there are problems with the render poses for the spiral view.

    Here is my visualization sample for a forward-facing scene, formatted like the LLFF dataset: (image attached)

    The visualization from previous NeRF-based code is as follows: (image attached)

    opened by dogyoonlee 4
  • Fail to reproduce the results (bad visualization + low PSNR) (Fixed)


    Thanks for the amazing work. When I run the code on nerf_synthetic data (i.e., lego), I can't get results as good as those shown in the paper. I am showing a rendered Lego result below (right; image 0000 attached). The PSNR I got is ~21 on lego. I believe I followed the instructions for installation and running. The example command line is:

    python opt.py ../data/lego -t ckpt/lego -c configs/syn.json

    I'm also repeating the experiments on other data. The drums and hotdog scenes in the synthetic data and the fern in the LLFF data seem fine. No idea about the other data just yet. Some good examples I got: (images attached)

    opened by KelestZ 4
  • own image result bad


    When I use render_imgs_circle.py the result is poor, but the PSNR value is high (PSNR: 27). I suspect it is a downsampling problem. If I modify the downsampling number to 8, what do I need to change? (image attached)

    But using render_imgs.py the result is good: (image attached)

    opened by SSground 4
  • How to use 2 GPUs to train?


    I just set <GPU_id> to "0,1", but it seems only GPU 0 was used: ./launch.sh <exp_name> <GPU_id> <data_dir> -c

    When I set the <GPU_id> as "0", I got

    Traceback (most recent call last):
      File "opt.py", line 605, in <module>
        train_step()
      File "opt.py", line 595, in train_step
        grid.optim_sh_step(lr_sh, beta=args.rms_beta, optim=args.sh_optim)
      File "/home/mcy/.local/lib/python3.8/site-packages/svox2/svox2.py", line 2024, in optim_sh_step
        self.sh_rms = torch.zeros_like(self.sh_data.data) # FIXME init?
    RuntimeError: CUDA out of memory. Tried to allocate 1.53 GiB (GPU 0; 15.78 GiB total capacity; 10.66 GiB already allocated; 1.13 GiB free; 13.49 GiB reserved in total by PyTorch)
    
    

    And when I set <GPU_id> to "0,1" (trying to use 2 GPUs) with ./launch.sh <exp_name> <GPU_id> <data_dir> -c <config>, it seems that only one GPU was used.

    Traceback (most recent call last):
      File "opt.py", line 605, in <module>
        train_step()
      File "opt.py", line 595, in train_step
        grid.optim_sh_step(lr_sh, beta=args.rms_beta, optim=args.sh_optim)
      File "/home/mcy/.local/lib/python3.8/site-packages/svox2/svox2.py", line 2024, in optim_sh_step
        self.sh_rms = torch.zeros_like(self.sh_data.data) # FIXME init?
    RuntimeError: CUDA out of memory. Tried to allocate 1.55 GiB (GPU 0; 15.78 GiB total capacity; 10.69 GiB already allocated; 1.12 GiB free; 13.50 GiB reserved in total by PyTorch)
    
    

    I want to know the correct way to use two GPUs in this work. Thanks for your help~

    opened by Miles629 3
  • Error during Voxel Optimization: ‘NoneType’ object has no attribute ‘__dict__’


    Your work is amazing!!! But I'm having some trouble reproducing the results. I installed svox2 using pip install . as described in the README.

    (plenoxel) nerf2themoon@pop-os:~/plenoxel/svox2$ pip install .
    Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
    Processing /home/nerf2themoon/plenoxel/svox2
      DEPRECATION: A future pip version will change local packages to be built in-place without first copying to a temporary directory. We recommend you use --use-feature=in-tree-build to test your packages with this new behavior before it becomes the default.
       pip 21.3 will remove support for this functionality. You can find discussion regarding this at https://github.com/pypa/pip/issues/7555.
    Building wheels for collected packages: svox2
      Building wheel for svox2 (setup.py) ... done
      Created wheel for svox2: filename=svox2-0.0.1.dev0+sphtexcub.lincolor.fast-cp38-cp38-linux_x86_64.whl size=3452740 sha256=afa3baeab9cee4640d23f94966a0de8aff7fd5a96b306974b6cdbbea4edae146
      Stored in directory: /tmp/pip-ephem-wheel-cache-eoqumo0a/wheels/57/94/9f/8f7f818790c817e36f6a19ba5a5ee388feb9f3ff8b5cb880a9
    Successfully built svox2
    Installing collected packages: svox2
      Attempting uninstall: svox2
        Found existing installation: svox2 0.0.1.dev0+sphtexcub.lincolor.fast
        Uninstalling svox2-0.0.1.dev0+sphtexcub.lincolor.fast:
          Successfully uninstalled svox2-0.0.1.dev0+sphtexcub.lincolor.fast
    Successfully installed svox2-0.0.1.dev0+sphtexcub.lincolor.fast
    

    but it crashed when executing launch.sh, emitting the logs below:

    Launching experiment fern
    GPU 0
    EXTRA /home/nerf2themoon/plenoxel/svox2/data/nerf_llff_data/fern -c configs/llff.json
    CKPT ckpt/fern
    LOGFILE ckpt/fern/log
    /home/nerf2themoon/anaconda3/envs/plenoxel/lib/python3.8/site-packages/svox2/utils.py:39: UserWarning: CUDA extension svox2.csrc could not be loaded! Operations will be slow.
    Please do not import svox in the svox2 source directory.
      warn("CUDA extension svox2.csrc could not be loaded! " +
    Detected LLFF dataset
    Using pre-scaled images from /home/nerf2themoon/plenoxel/svox2/data/nerf_llff_data/fern/images_4
    Loaded LLFF data /home/nerf2themoon/plenoxel/svox2/data/nerf_llff_data/fern 16.985296178676084 80.00209740336334
    recentered (3, 4)
    [1.3765233 5.48648  ]
    Overriding offset 250-> 250
    dmin = 1.376523, dmax = 5.486480, invz = 0, offset = 250
    100%|█████████████████████████████████████████████████████████| 17/17 [00:00<00:00, 26.84it/s]
    z_bounds from LLFF: [1.3765232563018799, 5.486480236053467] (not used)
    scene_radius [1.496031746031746, 1.6613756613756614, 1.0]
     Generating rays, scaling factor 1
    /home/nerf2themoon/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/functional.py:445: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at  /opt/conda/conda-bld/pytorch_1639180588308/work/aten/src/ATen/native/TensorShape.cpp:2157.)
      return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
    Detected LLFF dataset
    Using pre-scaled images from /home/nerf2themoon/plenoxel/svox2/data/nerf_llff_data/fern/images_4
    Loaded LLFF data /home/nerf2themoon/plenoxel/svox2/data/nerf_llff_data/fern 16.985296178676084 80.00209740336334
    recentered (3, 4)
    [1.3765233 5.48648  ]
    Overriding offset 250-> 250
    dmin = 1.376523, dmax = 5.486480, invz = 0, offset = 250
    100%|███████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 28.04it/s]
    100%|███████████████████████████████████████████████████| 120/120 [00:00<00:00, 153309.92it/s]
    z_bounds from LLFF: [1.3765232563018799, 5.486480236053467] (not used)
    scene_radius [1.496031746031746, 1.6613756613756614, 1.0]
    Morton code requires a cube grid of power-of-2 size, ignoring...
    Render options RenderOptions(backend='cuvol', background_brightness=0.5, step_size=0.5, sigma_thresh=1e-08, stop_thresh=1e-07, last_sample_opaque=False, near_clip=0.0, use_spheric_clip=False, random_sigma_std=0.0, random_sigma_std_background=0.0)
     Selecting random rays
    Eval step
      0%|                                                                   | 0/3 [00:00<?, ?it/s]
    Traceback (most recent call last):
      File "opt.py", line 471, in <module>
        eval_step()
      File "opt.py", line 406, in eval_step
        rgb_pred_test = grid.volume_render_image(cam, use_kernel=True)
      File "/home/nerf2themoon/anaconda3/envs/plenoxel/lib/python3.8/site-packages/svox2/svox2.py", line 1159, in volume_render_image
        if self.basis_type != BASIS_TYPE_MLP and imrend_fn_name in _C.__dict__ and not torch.is_grad_enabled() and not return_raylen:
    AttributeError: 'NoneType' object has no attribute '__dict__'
    DETACH
    

    Also, I tried installing it using setup.py; here’s the output:

    setup.py:25: UserWarning: The environment variable `CUB_HOME` was not found.Installation will fail if your system CUDA toolkit version is less than 11.NVIDIA CUB can be downloaded from `https://github.com/NVIDIA/cub/releases`. You can unpack it to a location of your choice and set the environment variable `CUB_HOME` to the folder containing the `CMakeListst.txt` file.
      warnings.warn(
    running install
    running bdist_egg
    running egg_info
    creating svox2.egg-info
    writing svox2.egg-info/PKG-INFO
    writing dependency_links to svox2.egg-info/dependency_links.txt
    writing top-level names to svox2.egg-info/top_level.txt
    writing manifest file 'svox2.egg-info/SOURCES.txt'
    package init file 'svox2/csrc/__init__.py' not found (or not a regular file)
    reading manifest file 'svox2.egg-info/SOURCES.txt'
    adding license file 'LICENSE'
    writing manifest file 'svox2.egg-info/SOURCES.txt'
    installing library code to build/bdist.linux-x86_64/egg
    running install_lib
    running build_py
    creating build
    creating build/lib.linux-x86_64-3.8
    creating build/lib.linux-x86_64-3.8/svox2
    copying svox2/svox2.py -> build/lib.linux-x86_64-3.8/svox2
    copying svox2/__init__.py -> build/lib.linux-x86_64-3.8/svox2
    copying svox2/version.py -> build/lib.linux-x86_64-3.8/svox2
    copying svox2/utils.py -> build/lib.linux-x86_64-3.8/svox2
    copying svox2/defs.py -> build/lib.linux-x86_64-3.8/svox2
    running build_ext
    building 'svox2.csrc' extension
    creating /home/nerf2themoon/plenoxel/svox2/build/temp.linux-x86_64-3.8
    creating /home/nerf2themoon/plenoxel/svox2/build/temp.linux-x86_64-3.8/svox2
    creating /home/nerf2themoon/plenoxel/svox2/build/temp.linux-x86_64-3.8/svox2/csrc
    Emitting ninja build file /home/nerf2themoon/plenoxel/svox2/build/temp.linux-x86_64-3.8/build.ninja...
    Compiling objects...
    Using envvar MAX_JOBS (4) as the number of workers...
    [1/8] c++ -MMD -MF /home/nerf2themoon/plenoxel/svox2/build/temp.linux-x86_64-3.8/svox2/csrc/svox2.o.d -pthread -B /home/nerf2themoon/anaconda3/envs/plenoxel/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/nerf2themoon/plenoxel/svox2/svox2/csrc/include -I/home/nerf2themoon/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include -I/home/nerf2themoon/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/nerf2themoon/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include/TH -I/home/nerf2themoon/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include/THC -I/home/nerf2themoon/anaconda3/envs/plenoxel/include/python3.8 -c -c /home/nerf2themoon/plenoxel/svox2/svox2/csrc/svox2.cpp -o /home/nerf2themoon/plenoxel/svox2/build/temp.linux-x86_64-3.8/svox2/csrc/svox2.o -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=csrc -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
    cc1plus: warning: command-line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
    [2/8] /usr/bin/nvcc  -I/home/nerf2themoon/plenoxel/svox2/svox2/csrc/include -I/home/nerf2themoon/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include -I/home/nerf2themoon/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/nerf2themoon/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include/TH -I/home/nerf2themoon/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include/THC -I/home/nerf2themoon/anaconda3/envs/plenoxel/include/python3.8 -c -c /home/nerf2themoon/plenoxel/svox2/svox2/csrc/render_lerp_kernel_cuvol.cu -o /home/nerf2themoon/plenoxel/svox2/build/temp.linux-x86_64-3.8/svox2/csrc/render_lerp_kernel_cuvol.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=csrc -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 -std=c++14
    /home/nerf2themoon/plenoxel/svox2/svox2/csrc/include/render_util.cuh(582): warning: parameter "ray_id" was declared but never referenced
    
    /home/nerf2themoon/plenoxel/svox2/svox2/csrc/include/render_util.cuh(633): warning: parameter "opt" was declared but never referenced
    
    /home/nerf2themoon/plenoxel/svox2/svox2/csrc/include/render_util.cuh(634): warning: parameter "ray_id" was declared but never referenced
    
    [3/8] /usr/bin/nvcc  -I/home/nerf2themoon/plenoxel/svox2/svox2/csrc/include -I/home/nerf2themoon/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include -I/home/nerf2themoon/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/nerf2themoon/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include/TH -I/home/nerf2themoon/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include/THC -I/home/nerf2themoon/anaconda3/envs/plenoxel/include/python3.8 -c -c /home/nerf2themoon/plenoxel/svox2/svox2/csrc/misc_kernel.cu -o /home/nerf2themoon/plenoxel/svox2/build/temp.linux-x86_64-3.8/svox2/csrc/misc_kernel.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=csrc -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 -std=c++14
    /home/nerf2themoon/plenoxel/svox2/svox2/csrc/include/render_util.cuh(582): warning: parameter "ray_id" was declared but never referenced
    
    /home/nerf2themoon/plenoxel/svox2/svox2/csrc/include/render_util.cuh(633): warning: parameter "opt" was declared but never referenced
    
    /home/nerf2themoon/plenoxel/svox2/svox2/csrc/include/render_util.cuh(634): warning: parameter "ray_id" was declared but never referenced
    
    /home/nerf2themoon/plenoxel/svox2/svox2/csrc/misc_kernel.cu(57): warning: function "<unnamed>::device::accel_linf_dist_transform_kernel" was declared but never referenced
    
    [4/8] /usr/bin/nvcc  -I/home/nerf2themoon/plenoxel/svox2/svox2/csrc/include -I/home/nerf2themoon/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include -I/home/nerf2themoon/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/nerf2themoon/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include/TH -I/home/nerf2themoon/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include/THC -I/home/nerf2themoon/anaconda3/envs/plenoxel/include/python3.8 -c -c /home/nerf2themoon/plenoxel/svox2/svox2/csrc/svox2_kernel.cu -o /home/nerf2themoon/plenoxel/svox2/build/temp.linux-x86_64-3.8/svox2/csrc/svox2_kernel.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=csrc -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 -std=c++14
    [5/8] /usr/bin/nvcc  -I/home/nerf2themoon/plenoxel/svox2/svox2/csrc/include -I/home/nerf2themoon/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include -I/home/nerf2themoon/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/nerf2themoon/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include/TH -I/home/nerf2themoon/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include/THC -I/home/nerf2themoon/anaconda3/envs/plenoxel/include/python3.8 -c -c /home/nerf2themoon/plenoxel/svox2/svox2/csrc/render_lerp_kernel_nvol.cu -o /home/nerf2themoon/plenoxel/svox2/build/temp.linux-x86_64-3.8/svox2/csrc/render_lerp_kernel_nvol.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=csrc -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 -std=c++14
    /home/nerf2themoon/plenoxel/svox2/svox2/csrc/include/render_util.cuh(582): warning: parameter "ray_id" was declared but never referenced
    
    /home/nerf2themoon/plenoxel/svox2/svox2/csrc/include/render_util.cuh(633): warning: parameter "opt" was declared but never referenced
    
    /home/nerf2themoon/plenoxel/svox2/svox2/csrc/include/render_util.cuh(634): warning: parameter "ray_id" was declared but never referenced
    
    /home/nerf2themoon/plenoxel/svox2/svox2/csrc/render_lerp_kernel_nvol.cu(112): warning: parameter "color_cache" was declared but never referenced
    
    [6/8] /usr/bin/nvcc  -I/home/nerf2themoon/plenoxel/svox2/svox2/csrc/include -I/home/nerf2themoon/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include -I/home/nerf2themoon/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/nerf2themoon/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include/TH -I/home/nerf2themoon/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include/THC -I/home/nerf2themoon/anaconda3/envs/plenoxel/include/python3.8 -c -c /home/nerf2themoon/plenoxel/svox2/svox2/csrc/render_svox1_kernel.cu -o /home/nerf2themoon/plenoxel/svox2/build/temp.linux-x86_64-3.8/svox2/csrc/render_svox1_kernel.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=csrc -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 -std=c++14
    /home/nerf2themoon/plenoxel/svox2/svox2/csrc/include/render_util.cuh(582): warning: parameter "ray_id" was declared but never referenced
    
    /home/nerf2themoon/plenoxel/svox2/svox2/csrc/include/render_util.cuh(633): warning: parameter "opt" was declared but never referenced
    
    /home/nerf2themoon/plenoxel/svox2/svox2/csrc/include/render_util.cuh(634): warning: parameter "ray_id" was declared but never referenced
    
    [7/8] /usr/bin/nvcc  -I/home/nerf2themoon/plenoxel/svox2/svox2/csrc/include -I/home/nerf2themoon/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include -I/home/nerf2themoon/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/nerf2themoon/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include/TH -I/home/nerf2themoon/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include/THC -I/home/nerf2themoon/anaconda3/envs/plenoxel/include/python3.8 -c -c /home/nerf2themoon/plenoxel/svox2/svox2/csrc/loss_kernel.cu -o /home/nerf2themoon/plenoxel/svox2/build/temp.linux-x86_64-3.8/svox2/csrc/loss_kernel.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=csrc -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 -std=c++14
    /home/nerf2themoon/plenoxel/svox2/svox2/csrc/include/render_util.cuh(582): warning: parameter "ray_id" was declared but never referenced
    
    /home/nerf2themoon/plenoxel/svox2/svox2/csrc/include/render_util.cuh(633): warning: parameter "opt" was declared but never referenced
    
    /home/nerf2themoon/plenoxel/svox2/svox2/csrc/include/render_util.cuh(634): warning: parameter "ray_id" was declared but never referenced
    
    /home/nerf2themoon/plenoxel/svox2/svox2/csrc/loss_kernel.cu(23): warning: parameter "ndc_coeffx" was declared but never referenced
    
    /home/nerf2themoon/plenoxel/svox2/svox2/csrc/loss_kernel.cu(24): warning: parameter "ndc_coeffy" was declared but never referenced
    
    /home/nerf2themoon/plenoxel/svox2/svox2/csrc/loss_kernel.cu(25): warning: parameter "z" was declared but never referenced
    
    [8/8] /usr/bin/nvcc  -I/home/nerf2themoon/plenoxel/svox2/svox2/csrc/include -I/home/nerf2themoon/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include -I/home/nerf2themoon/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/nerf2themoon/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include/TH -I/home/nerf2themoon/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/include/THC -I/home/nerf2themoon/anaconda3/envs/plenoxel/include/python3.8 -c -c /home/nerf2themoon/plenoxel/svox2/svox2/csrc/optim_kernel.cu -o /home/nerf2themoon/plenoxel/svox2/build/temp.linux-x86_64-3.8/svox2/csrc/optim_kernel.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=csrc -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 -std=c++14
    g++ -pthread -shared -B /home/nerf2themoon/anaconda3/envs/plenoxel/compiler_compat -L/home/nerf2themoon/anaconda3/envs/plenoxel/lib -Wl,-rpath=/home/nerf2themoon/anaconda3/envs/plenoxel/lib -Wl,--no-as-needed -Wl,--sysroot=/ /home/nerf2themoon/plenoxel/svox2/build/temp.linux-x86_64-3.8/svox2/csrc/svox2.o /home/nerf2themoon/plenoxel/svox2/build/temp.linux-x86_64-3.8/svox2/csrc/svox2_kernel.o /home/nerf2themoon/plenoxel/svox2/build/temp.linux-x86_64-3.8/svox2/csrc/render_lerp_kernel_cuvol.o /home/nerf2themoon/plenoxel/svox2/build/temp.linux-x86_64-3.8/svox2/csrc/render_lerp_kernel_nvol.o /home/nerf2themoon/plenoxel/svox2/build/temp.linux-x86_64-3.8/svox2/csrc/render_svox1_kernel.o /home/nerf2themoon/plenoxel/svox2/build/temp.linux-x86_64-3.8/svox2/csrc/misc_kernel.o /home/nerf2themoon/plenoxel/svox2/build/temp.linux-x86_64-3.8/svox2/csrc/loss_kernel.o /home/nerf2themoon/plenoxel/svox2/build/temp.linux-x86_64-3.8/svox2/csrc/optim_kernel.o -L/home/nerf2themoon/anaconda3/envs/plenoxel/lib/python3.8/site-packages/torch/lib -L/usr/lib64 -lc10 -ltorch -ltorch_cpu -ltorch_python -lcudart -lc10_cuda -ltorch_cuda_cu -ltorch_cuda_cpp -o build/lib.linux-x86_64-3.8/svox2/csrc.cpython-38-x86_64-linux-gnu.so
    creating build/bdist.linux-x86_64
    creating build/bdist.linux-x86_64/egg
    creating build/bdist.linux-x86_64/egg/svox2
    copying build/lib.linux-x86_64-3.8/svox2/svox2.py -> build/bdist.linux-x86_64/egg/svox2
    copying build/lib.linux-x86_64-3.8/svox2/__init__.py -> build/bdist.linux-x86_64/egg/svox2
    copying build/lib.linux-x86_64-3.8/svox2/version.py -> build/bdist.linux-x86_64/egg/svox2
    copying build/lib.linux-x86_64-3.8/svox2/csrc.cpython-38-x86_64-linux-gnu.so -> build/bdist.linux-x86_64/egg/svox2
    copying build/lib.linux-x86_64-3.8/svox2/utils.py -> build/bdist.linux-x86_64/egg/svox2
    copying build/lib.linux-x86_64-3.8/svox2/defs.py -> build/bdist.linux-x86_64/egg/svox2
    byte-compiling build/bdist.linux-x86_64/egg/svox2/svox2.py to svox2.cpython-38.pyc
    byte-compiling build/bdist.linux-x86_64/egg/svox2/__init__.py to __init__.cpython-38.pyc
    byte-compiling build/bdist.linux-x86_64/egg/svox2/version.py to version.cpython-38.pyc
    byte-compiling build/bdist.linux-x86_64/egg/svox2/utils.py to utils.cpython-38.pyc
    byte-compiling build/bdist.linux-x86_64/egg/svox2/defs.py to defs.cpython-38.pyc
    creating stub loader for svox2/csrc.cpython-38-x86_64-linux-gnu.so
    byte-compiling build/bdist.linux-x86_64/egg/svox2/csrc.py to csrc.cpython-38.pyc
    creating build/bdist.linux-x86_64/egg/EGG-INFO
    copying svox2.egg-info/PKG-INFO -> build/bdist.linux-x86_64/egg/EGG-INFO
    copying svox2.egg-info/SOURCES.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
    copying svox2.egg-info/dependency_links.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
    copying svox2.egg-info/not-zip-safe -> build/bdist.linux-x86_64/egg/EGG-INFO
    copying svox2.egg-info/top_level.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
    writing build/bdist.linux-x86_64/egg/EGG-INFO/native_libs.txt
    creating dist
    creating 'dist/svox2-0.0.1.dev0+sphtexcub.lincolor.fast-py3.8-linux-x86_64.egg' and adding 'build/bdist.linux-x86_64/egg' to it
    removing 'build/bdist.linux-x86_64/egg' (and everything under it)
    Processing svox2-0.0.1.dev0+sphtexcub.lincolor.fast-py3.8-linux-x86_64.egg
    removing '/home/nerf2themoon/anaconda3/envs/plenoxel/lib/python3.8/site-packages/svox2-0.0.1.dev0+sphtexcub.lincolor.fast-py3.8-linux-x86_64.egg' (and everything under it)
    creating /home/nerf2themoon/anaconda3/envs/plenoxel/lib/python3.8/site-packages/svox2-0.0.1.dev0+sphtexcub.lincolor.fast-py3.8-linux-x86_64.egg
    Extracting svox2-0.0.1.dev0+sphtexcub.lincolor.fast-py3.8-linux-x86_64.egg to /home/nerf2themoon/anaconda3/envs/plenoxel/lib/python3.8/site-packages
    svox2 0.0.1.dev0+sphtexcub.lincolor.fast is already the active version in easy-install.pth
    
    Installed /home/nerf2themoon/anaconda3/envs/plenoxel/lib/python3.8/site-packages/svox2-0.0.1.dev0+sphtexcub.lincolor.fast-py3.8-linux-x86_64.egg
    Processing dependencies for svox2==0.0.1.dev0+sphtexcub.lincolor.fast
    Finished processing dependencies for svox2==0.0.1.dev0+sphtexcub.lincolor.fast
    

    I tried installing jaxlib as mentioned in other issues, installing the latest CUDA toolkit and cuDNN, and installing ninja, but none of these solved the problem. I also checked svox2.py; it seems that svox2 could not load the C extension correctly, since csrc.cpython-38-x86_64-linux-gnu.so raised an exception when I tried to import it:

    In [2]: import svox2.csrc
    /home/nerf2themoon/anaconda3/envs/plenoxel/lib/python3.8/site-packages/svox2/utils.py:39: UserWarning: CUDA extension svox2.csrc could not be loaded! Operations will be slow.
    Please do not import svox in the svox2 source directory.
      warn("CUDA extension svox2.csrc could not be loaded! " +
    ---------------------------------------------------------------------------
    ImportError                               Traceback (most recent call last)
    Input In [2], in <module>
    ----> 1 import svox2.csrc
    
    ImportError: /home/nerf2themoon/anaconda3/envs/plenoxel/lib/python3.8/site-packages/svox2/csrc.cpython-38-x86_64-linux-gnu.so: undefined symbol: _ZNSt15__exception_ptr13exception_ptr9_M_addrefEv
    

    I’m using an NVIDIA RTX 3080 Ti on Pop!_OS 21.10 with CUDA 11.3. Is anyone kind enough to point out how to solve this problem? Thanks in advance!
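    A hedged debugging note (not from the original thread): the undefined symbol above is a mangled C++ name, and errors like this typically indicate an ABI mismatch between the toolchain that built the extension (here /usr/bin/nvcc plus the system g++) and the libstdc++ loaded from the conda environment at runtime. Demangling the symbol shows what the extension expects; the paths in the trailing comments are hypothetical examples:

```shell
# Demangle the missing symbol (c++filt ships with GNU binutils):
echo '_ZNSt15__exception_ptr13exception_ptr9_M_addrefEv' | c++filt
# -> std::__exception_ptr::exception_ptr::_M_addref()
#
# A missing libstdc++ internal like this usually means the .so was built
# against a newer libstdc++ than the one loaded at runtime. To compare,
# inspect the GLIBCXX version tags of both libraries (hypothetical paths):
#   strings "$CONDA_PREFIX/lib/libstdc++.so.6" | grep GLIBCXX | tail -n 1
#   strings /usr/lib/x86_64-linux-gnu/libstdc++.so.6 | grep GLIBCXX | tail -n 1
```

If the conda library reports an older highest GLIBCXX tag than the system one, rebuilding with the conda compilers (or updating libstdc++ in the environment) is one plausible fix.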

    opened by GaiZhenbiao 3
  • view_data.py with custom dataset - error with `seg` indexing implementation

    After preparing a dataset of images and running bash proc_colmap.sh, view_data.py fails for me when it passes an empty rotation matrix to scene.add_camera_frustum from the nerfvis library. The error stems from a faulty list of segs as constructed in view_data.py:

        all_poses = []
        pnum, seg_begin = None, 0
        segs = []
        for i, pose_file in enumerate(pose_files):
            pose = np.loadtxt(path.join(pose_dir, pose_file)).reshape(4, 4)
            splt = path.splitext(pose_file)[0].split('_')
            num = int(splt[1] if len(splt) > 1 else splt[0])
            if pnum is not None and num - pnum > 1 and seg_begin < num:
                segs.append((seg_begin, num))
                seg_begin = num
            pnum = num
            all_poses.append(pose)
        all_poses = np.stack(all_poses)
        segs.append((seg_begin, len(pose_files)))
    

    Specifically, segs appears to be a list of 2-tuples corresponding to the connection between two pose matrices. Given my pose directory contents of

    ├── 0_00019.txt
    ├── 0_00095.txt
    ├── 0_00107.txt
    ├── 0_00134.txt
    ├── 0_00140.txt
    ├── 0_00143.txt
    ├── 0_00169.txt
    ├── 0_00187.txt
    ├── 0_00191.txt
    ├── 0_00197.txt
    ├── 0_00200.txt
    ├── 0_00212.txt
    ├── 0_00242.txt
    ├── 0_00251.txt
    ├── 0_00252.txt
    ├── 0_00260.txt
    ├── 0_00291.txt
    ├── 1_00017.txt
    └── 1_00259.txt
    

    segs evaluates to

    [(0, 19), (19, 95), (95, 107), (107, 134), (134, 140), (140, 143), (143, 169), (169, 187), (187, 191), (191, 197), (197, 200), (200, 212), (212, 242), (242, 251), (251, 259), (259, 291), (291, 19)]
    

    In the following loop

        for i, seg in enumerate(segs):
            print(seg)
            print(R.shape, t.shape)
            print(seg[0], seg[1])
            scene.add_camera_frustum(name=f"traj_{i:04d}", focal_length=focal,
                                     image_width=image_wh[0],
                                     image_height=image_wh[1],
                                     z=0.1,
                                     r=R[seg[0]:seg[1]],
                                     t=t[seg[0]:seg[1]],
                                     connect=args.seg,
                                     color=[1.0, 0.0, 0.0])
    

    The indexing of R and t thus yields errors, since the tuple values are frame numbers taken from the filenames rather than valid positions in those arrays.

    Just to get it running, I changed segs to an incremental list like [(0, 1), (1, 2), ... (n-1, n)], but I'm not sure what the original intent was here, as the logic is unclear to me.
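    For reference, a positional variant (a hypothetical sketch, not the original author's logic) that groups runs of consecutive frame numbers but records list *positions* would at least keep the resulting (begin, end) pairs valid as indices into R and t:

```python
def positional_segs(frame_nums):
    """Group consecutive frame numbers into segments of list positions.

    Hypothetical sketch: unlike the snippet above, the (begin, end)
    tuples index into the pose arrays, not into the filename numbering.
    """
    segs, seg_begin = [], 0
    for i in range(1, len(frame_nums)):
        # A gap in frame numbers ends the current segment.
        if frame_nums[i] - frame_nums[i - 1] > 1:
            segs.append((seg_begin, i))
            seg_begin = i
    segs.append((seg_begin, len(frame_nums)))
    return segs

# Frames 1-3 are contiguous, then a gap, then 7-8:
print(positional_segs([1, 2, 3, 7, 8]))  # [(0, 3), (3, 5)]
```

With this shape, every slice R[seg[0]:seg[1]] is non-empty, which avoids the empty rotation matrix passed to add_camera_frustum.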

    opened by phelps-matthew 3
  • Pretrained models

    Could you please also upload the pretrained models? I couldn't compile svox2 so far, but I'm very interested in your work! (And I think all of your projects are very meaningful. Thanks!)

    opened by naruya 3
  • Minimum samples? for using a custom image set (360)

    Hi,

    My objects are captured from an upper semi-sphere in relatively equally spaced elevations/azimuths and I have ~600 images per object. I'm interested in training on <100 images.

    Will this produce quality renderings on novel (test) views?

    What techniques, applicable to this codebase, are there for improving rendering quality (PSNR) with a low number of input samples?

    Thanks!

    opened by dispoth 2
  • Number of images

    What would be the minimal/optimal number of images required for high-quality results? Would 4 extremely high-quality images from the front, back, left, and right work?

    opened by Jainam213 0
  • Model variable

    I am trying to swap the NeRF structure for Plenoxels in my code. The final results can be the same (images of the object from different sides), but can I actually use the model output the same way in both codebases? The model itself is represented by the variable "grid" for Plenoxels and by "render_kwargs_train" for NeRF.

    opened by povolann 0
  • Blurry rendering for Tanks and Temples

    I tried the Tanks and Temples dataset using the provided tnt config, but the result is very blurry. Does anyone have experience fixing this problem?

    opened by hjwdzh 0
Owner
Alex Yu
Researcher at UC Berkeley
Code release for NeRF (Neural Radiance Fields)

NeRF: Neural Radiance Fields Project Page | Video | Paper | Data Tensorflow implementation of optimizing a neural representation for a single scene an

null 6.5k Jan 1, 2023
Allele-specific pipeline for unbiased read mapping(WIP), QTL discovery(WIP), and allelic-imbalance analysis

WASP2 (Currently in pre-development): Allele-specific pipeline for unbiased read mapping(WIP), QTL discovery(WIP), and allelic-imbalance analysis Requ

McVicker Lab 2 Aug 11, 2022
Official code release for "GRAF: Generative Radiance Fields for 3D-Aware Image Synthesis"

GRAF This repository contains official code for the paper GRAF: Generative Radiance Fields for 3D-Aware Image Synthesis. You can find detailed usage i

null 349 Dec 29, 2022
(Arxiv 2021) NeRF--: Neural Radiance Fields Without Known Camera Parameters

NeRF--: Neural Radiance Fields Without Known Camera Parameters Project Page | Arxiv | Colab Notebook | Data Zirui Wang¹, Shangzhe Wu², Weidi Xie², Min

Active Vision Laboratory 411 Dec 26, 2022
Unofficial & improved implementation of NeRF--: Neural Radiance Fields Without Known Camera Parameters

[Unofficial code-base] NeRF--: Neural Radiance Fields Without Known Camera Parameters [ Project | Paper | Official code base ] ⬅️ Thanks the original

Jianfei Guo 239 Dec 22, 2022
This repository contains the source code for the paper "DONeRF: Towards Real-Time Rendering of Compact Neural Radiance Fields using Depth Oracle Networks",

DONeRF: Towards Real-Time Rendering of Compact Neural Radiance Fields using Depth Oracle Networks Project Page | Video | Presentation | Paper | Data L

Facebook Research 281 Dec 22, 2022
This is the code for Deformable Neural Radiance Fields, a.k.a. Nerfies.

Deformable Neural Radiance Fields This is the code for Deformable Neural Radiance Fields, a.k.a. Nerfies. Project Page Paper Video This codebase conta

Google 1k Jan 9, 2023
Open source repository for the code accompanying the paper 'Non-Rigid Neural Radiance Fields Reconstruction and Novel View Synthesis of a Deforming Scene from Monocular Video'.

Non-Rigid Neural Radiance Fields This is the official repository for the project "Non-Rigid Neural Radiance Fields: Reconstruction and Novel View Synt

Facebook Research 296 Dec 29, 2022
Code for KiloNeRF: Speeding up Neural Radiance Fields with Thousands of Tiny MLPs

KiloNeRF: Speeding up Neural Radiance Fields with Thousands of Tiny MLPs Check out the paper on arXiv: https://arxiv.org/abs/2103.13744 This repo cont

Christian Reiser 373 Dec 20, 2022
This is the code for "HyperNeRF: A Higher-Dimensional Representation for Topologically Varying Neural Radiance Fields".

HyperNeRF: A Higher-Dimensional Representation for Topologically Varying Neural Radiance Fields This is the code for "HyperNeRF: A Higher-Dimensional

Google 702 Jan 2, 2023
Mip-NeRF: A Multiscale Representation for Anti-Aliasing Neural Radiance Fields.

This repository contains the code release for Mip-NeRF: A Multiscale Representation for Anti-Aliasing Neural Radiance Fields. This implementation is written in JAX, and is a fork of Google's JaxNeRF implementation. Contact Jon Barron if you encounter any issues.

Google 625 Dec 30, 2022
This repository contains a PyTorch implementation of "AD-NeRF: Audio Driven Neural Radiance Fields for Talking Head Synthesis".

AD-NeRF: Audio Driven Neural Radiance Fields for Talking Head Synthesis | Project Page | Paper | PyTorch implementation for the paper "AD-NeRF: Audio

null 551 Dec 29, 2022
PyTorch implementation for MINE: Continuous-Depth MPI with Neural Radiance Fields

MINE: Continuous-Depth MPI with Neural Radiance Fields Project Page | Video PyTorch implementation for our ICCV 2021 paper. MINE: Towards Continuous D

Zijian Feng 325 Dec 29, 2022
BARF: Bundle-Adjusting Neural Radiance Fields 🤮 (ICCV 2021 oral)

BARF 🤮: Bundle-Adjusting Neural Radiance Fields Chen-Hsuan Lin, Wei-Chiu Ma, Antonio Torralba, and Simon Lucey IEEE International Conference on Comp

Chen-Hsuan Lin 539 Dec 28, 2022
[ICCV21] Self-Calibrating Neural Radiance Fields

Self-Calibrating Neural Radiance Fields, ICCV, 2021 Project Page | Paper | Video Author Information Yoonwoo Jeong [Google Scholar] Seokjun Ahn [Google

null 381 Dec 30, 2022
[ICCV 2021 Oral] NerfingMVS: Guided Optimization of Neural Radiance Fields for Indoor Multi-view Stereo

NerfingMVS Project Page | Paper | Video | Data NerfingMVS: Guided Optimization of Neural Radiance Fields for Indoor Multi-view Stereo Yi Wei, Shaohui

Yi Wei 369 Dec 24, 2022
A PyTorch implementation of NeRF (Neural Radiance Fields) that reproduces the results.

NeRF-pytorch NeRF (Neural Radiance Fields) is a method that achieves state-of-the-art results for synthesizing novel views of complex scenes. Here are

Yen-Chen Lin 3.2k Jan 8, 2023
pixelNeRF: Neural Radiance Fields from One or Few Images

pixelNeRF: Neural Radiance Fields from One or Few Images Alex Yu, Vickie Ye, Matthew Tancik, Angjoo Kanazawa UC Berkeley arXiv: http://arxiv.org/abs/2

Alex Yu 1k Jan 4, 2023
D-NeRF: Neural Radiance Fields for Dynamic Scenes

D-NeRF: Neural Radiance Fields for Dynamic Scenes [Project] [Paper] D-NeRF is a method for synthesizing novel views, at an arbitrary point in time, of

Albert Pumarola 291 Jan 2, 2023