Overview

DirectVoxGO

DirectVoxGO (Direct Voxel Grid Optimization, see our paper) reconstructs a scene representation from a set of calibrated images capturing the scene.

  • NeRF-comparable quality for synthesizing novel views from our scene representation.
  • Super-fast convergence: Our 15 mins/scene vs. NeRF's 10~20+ hrs/scene.
  • No cross-scene pre-training required: We optimize each scene from scratch.
  • Better rendering speed: Our <1 sec vs. NeRF's 29 secs to synthesize an 800x800 image.

The run-times (mm:ss) of our optimization progress shown below are measured on a machine with a single RTX 2080 Ti GPU.

github_teaser.mp4

Update

  • 2021.11.23: Support CO3D dataset.
  • 2021.11.23: Initial release. Issue page is disabled for now. Feel free to contact [email protected] if you have any questions.

Installation

git clone git@github.com:sunset1995/DirectVoxGO.git
cd DirectVoxGO
pip install -r requirements.txt

PyTorch installation is machine-dependent; please install the correct version for your machine. The tested version is PyTorch 1.8.1 with Python 3.7.4.

Dependencies
  • PyTorch, numpy: main computation.
  • scipy, lpips: SSIM and LPIPS evaluation.
  • tqdm: progress bar.
  • mmcv: config system.
  • opencv-python: image processing.
  • imageio, imageio-ffmpeg: images and videos I/O.

Download: datasets, trained models, and rendered test views

Directory structure for the datasets (only used files are listed)
data
├── nerf_synthetic     # Link: https://drive.google.com/drive/folders/128yBriW1IG_3NJ5Rp7APSTZsJqdJdfc1
│   └── [chair|drums|ficus|hotdog|lego|materials|mic|ship]
│       ├── [train|val|test]
│       │   └── r_*.png
│       └── transforms_[train|val|test].json
│
├── Synthetic_NSVF     # Link: https://dl.fbaipublicfiles.com/nsvf/dataset/Synthetic_NSVF.zip
│   └── [Bike|Lifestyle|Palace|Robot|Spaceship|Steamtrain|Toad|Wineholder]
│       ├── intrinsics.txt
│       ├── rgb
│       │   └── [0_train|1_val|2_test]_*.png
│       └── pose
│           └── [0_train|1_val|2_test]_*.txt
│
├── BlendedMVS         # Link: https://dl.fbaipublicfiles.com/nsvf/dataset/BlendedMVS.zip
│   └── [Character|Fountain|Jade|Statues]
│       ├── intrinsics.txt
│       ├── rgb
│       │   └── [0|1|2]_*.png
│       └── pose
│           └── [0|1|2]_*.txt
│
├── TanksAndTemple     # Link: https://dl.fbaipublicfiles.com/nsvf/dataset/TanksAndTemple.zip
│   └── [Barn|Caterpillar|Family|Ignatius|Truck]
│       ├── intrinsics.txt
│       ├── rgb
│       │   └── [0|1|2]_*.png
│       └── pose
│           └── [0|1|2]_*.txt
│
├── deepvoxels     # Link: https://drive.google.com/drive/folders/1ScsRlnzy9Bd_n-xw83SP-0t548v63mPH
│   └── [train|validation|test]
│       └── [armchair|cube|greek|vase]
│           ├── intrinsics.txt
│           ├── rgb/*.png
│           └── pose/*.txt
│
└── co3d               # Link: https://github.com/facebookresearch/co3d
    └── [donut|teddybear|umbrella|...]
        ├── frame_annotations.jgz
        ├── set_lists.json
        └── [129_14950_29917|189_20376_35616|...]
            ├── images
            │   └── frame*.jpg
            └── masks
                └── frame*.png

Synthetic-NeRF, Synthetic-NSVF, BlendedMVS, Tanks&Temples, DeepVoxels datasets

We use the datasets organized by NeRF, NSVF, and DeepVoxels; the download links are listed in the directory structure above.

Download all our trained models and rendered test views at this link to our logs.

CO3D dataset

We also support the recent Common Objects In 3D dataset. Note that our method performs only per-scene reconstruction; there is no cross-scene generalization.

GO

Train

To train the lego scene and evaluate test-set PSNR at the end of training, run:

$ python run.py --config configs/nerf/lego.py --render_test

Use --i_print and --i_weights to change the log interval.

Evaluation

To evaluate only the test-set PSNR, SSIM, and LPIPS of the trained lego model without re-training, run:

$ python run.py --config configs/nerf/lego.py --render_only --render_test \
                                              --eval_ssim --eval_lpips_vgg

Use --eval_lpips_alex to evaluate LPIPS with a pre-trained AlexNet instead of VGG.

Reproduction

All config files to reproduce our results:

$ ls configs/*
configs/blendedmvs:
Character.py  Fountain.py  Jade.py  Statues.py

configs/nerf:
chair.py  drums.py  ficus.py  hotdog.py  lego.py  materials.py  mic.py  ship.py

configs/nsvf:
Bike.py  Lifestyle.py  Palace.py  Robot.py  Spaceship.py  Steamtrain.py  Toad.py  Wineholder.py

configs/tankstemple:
Barn.py  Caterpillar.py  Family.py  Ignatius.py  Truck.py

configs/deepvoxels:
armchair.py  cube.py  greek.py  vase.py

Your own config files

Check the comments in configs/default.py for the configurable settings. The default values reproduce our main setup reported in our paper. We use mmcv's config system. To create a new config, please inherit configs/default.py first and then update the fields you want. Below is an example from configs/blendedmvs/Character.py:

_base_ = '../default.py'

expname = 'dvgo_Character'
basedir = './logs/blended_mvs'

data = dict(
    datadir='./data/BlendedMVS/Character/',
    dataset_type='blendedmvs',
    inverse_y=True,
    white_bkgd=True,
)
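
Configs are merged through mmcv's inheritance mechanism, so any field you don't override falls back to the value in configs/default.py. A minimal sketch of loading such a config programmatically (assuming the mmcv version pinned in requirements.txt):

from mmcv import Config

# Fields set in the child file override the _base_ defaults; everything
# else falls back to configs/default.py.
cfg = Config.fromfile('configs/blendedmvs/Character.py')
print(cfg.expname)            # 'dvgo_Character' (set in the child config)
print(cfg.data.dataset_type)  # 'blendedmvs'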

Development and tuning guide

Extension to new datasets

Adjusting the data-related config fields to fit your camera coordinate system is recommended before implementing a new dataloader; a config sketch follows the visualization steps below. We provide two visualization tools for debugging.

  1. Inspect the camera and the allocated BBox.
    • Export via --export_bbox_and_cams_only {filename}.npz:
      python run.py --config configs/nerf/mic.py --export_bbox_and_cams_only cam_mic.npz
    • Visualize the result:
      python tools/vis_train.py cam_mic.npz
  2. Inspect the learned geometry after coarse optimization.
    • Export via --export_coarse_only {filename}.npz (assumes coarse_last.tar is available in the training log):
      python run.py --config configs/nerf/mic.py --export_coarse_only coarse_mic.npz
    • Visualize the result:
      python tools/vis_volume.py coarse_mic.npz 0.001 --cam cam_mic.npz
Figures: inspecting the cameras & BBox; inspecting the learned coarse volume.
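
If the visualized cameras look wrong, the camera-convention flags of the data config are the usual culprit. Below is a hypothetical config sketch (the scene and path names are placeholders; inverse_y, flip_x, and flip_y are the convention fields in configs/default.py) to iterate on with the first tool above:

_base_ = '../default.py'

expname = 'dvgo_my_scene'      # placeholder experiment name
basedir = './logs/my_dataset'  # placeholder output directory

data = dict(
    datadir='./data/my_scene/',
    dataset_type='blendedmvs',  # reuse a loader whose file layout you match
    # Toggle these until the cameras shown by tools/vis_train.py look right:
    inverse_y=False,
    flip_x=False,
    flip_y=False,
)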

Speed and quality tradeoff

We have reported some ablation experiments in our paper's supplementary material. Setting N_iters, N_rand, num_voxels, rgbnet_depth, or rgbnet_width to larger values, or setting stepsize to a smaller value, typically leads to better quality but needs more computation. Only stepsize is tunable at test time; all the other fields should remain the same as in training.
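
For illustration, a hypothetical override inheriting configs/default.py (the section and field names follow configs/default.py; the numbers are made up for this example, not recommended values):

_base_ = '../default.py'

fine_train = dict(
    N_iters=40000,  # more optimization iterations
    N_rand=8192,    # more rays per gradient step
)

fine_model_and_render = dict(
    num_voxels=256**3,  # higher voxel grid resolution
    rgbnet_depth=3,     # deeper shallow MLP for view-dependent color
    rgbnet_width=128,   # wider shallow MLP
    stepsize=0.25,      # smaller ray-marching step; also tunable at test time
)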

Acknowledgement

The code base originated from the awesome nerf-pytorch implementation, but it has since become very different.

Comments
  • CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

    Many thanks to the author for his contribution to this work, but I'm having some difficulty running it, as follows:

    Using C:\Users\shower\AppData\Local\torch_extensions\torch_extensions\Cache\py39_cu116 as PyTorch extensions root...
    Detected CUDA files, patching ldflags
    Emitting ninja build file C:\Users\shower\AppData\Local\torch_extensions\torch_extensions\Cache\py39_cu116\adam_upd_cuda\build.ninja...
    Building extension module adam_upd_cuda...
    Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
    Traceback (most recent call last):

    File ~\anaconda3\lib\site-packages\torch\utils\cpp_extension.py:1808 in _run_ninja_build
        subprocess.run(

    File ~\anaconda3\lib\subprocess.py:528 in run
        raise CalledProcessError(retcode, process.args,

    CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last):

    File D:\DirectVoxGO-main\run.py:13 in <module>
        from lib import utils, dvgo, dcvgo, dmpigo

    File D:\DirectVoxGO-main\lib\utils.py:11 in <module>
        from .masked_adam import MaskedAdam

    File D:\DirectVoxGO-main\lib\masked_adam.py:8 in <module>
        adam_upd_cuda = load(

    File ~\anaconda3\lib\site-packages\torch\utils\cpp_extension.py:1202 in load
        return _jit_compile(

    File ~\anaconda3\lib\site-packages\torch\utils\cpp_extension.py:1425 in _jit_compile
        _write_ninja_file_and_build_library(

    File ~\anaconda3\lib\site-packages\torch\utils\cpp_extension.py:1537 in _write_ninja_file_and_build_library
        _run_ninja_build(

    File ~\anaconda3\lib\site-packages\torch\utils\cpp_extension.py:1824 in _run_ninja_build
        raise RuntimeError(message) from e

    RuntimeError: Error building extension 'adam_upd_cuda'

    I have tried many methods, including looking at the source of extension_cpp.py and updating or changing the Visual Studio version, but I can't solve it. I have no experience with compiling, so I'm not sure where the problem is; I hope to get your help with this.

    Additionally, my environment is Windows 11, torch 1.12.0, CUDA 11.6, VS2017, and Python 3.9. I hope this extra configuration information helps you find the problem; thanks again.

    opened by Ballzy0706 7
  • Run process always killed

    Hi,

    Thanks for sharing your work. I'm having trouble executing your script with the nerf_synthetic dataset as described in the README: the process seems to be killed because there is not enough memory, but I'm not sure whether it's VRAM or RAM. I get similar output with the evaluation and video rendering commands. Here is the output. Do you have any idea how to reduce memory consumption? Thanks

    Using /home/x0s/.cache/torch_extensions/py310_cu102 as PyTorch extensions root...
    Detected CUDA files, patching ldflags
    Emitting ninja build file /home/x0s/.cache/torch_extensions/py310_cu102/adam_upd_cuda/build.ninja...
    Building extension module adam_upd_cuda...
    Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
    ninja: no work to do.
    Loading extension module adam_upd_cuda...
    Using /home/x0s/.cache/torch_extensions/py310_cu102 as PyTorch extensions root...
    Detected CUDA files, patching ldflags
    Emitting ninja build file /home/x0s/.cache/torch_extensions/py310_cu102/render_utils_cuda/build.ninja...
    Building extension module render_utils_cuda...
    Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
    ninja: no work to do.
    Loading extension module render_utils_cuda...
    Using /home/x0s/.cache/torch_extensions/py310_cu102 as PyTorch extensions root...
    Detected CUDA files, patching ldflags
    Emitting ninja build file /home/x0s/.cache/torch_extensions/py310_cu102/total_variation_cuda/build.ninja...
    Building extension module total_variation_cuda...
    Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
    ninja: no work to do.
    Loading extension module total_variation_cuda...
    Using /home/x0s/.cache/torch_extensions/py310_cu102 as PyTorch extensions root...
    No modifications detected for re-loaded extension module render_utils_cuda, skipping build step...
    Loading extension module render_utils_cuda...
    Using /home/x0s/.cache/torch_extensions/py310_cu102 as PyTorch extensions root...
    Detected CUDA files, patching ldflags
    Emitting ninja build file /home/x0s/.cache/torch_extensions/py310_cu102/ub360_utils_cuda/build.ninja...
    Building extension module ub360_utils_cuda...
    Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
    ninja: no work to do.
    Loading extension module ub360_utils_cuda...
    Loaded blender (400, 800, 800, 4) torch.Size([160, 4, 4]) [800, 800, 1111.1110311937682] ./data/nerf_synthetic/lego
    Killed
    
    opened by x0s 2
  • Cannot reproduce high quality results on Madoka and Otobai

    Hi, thanks for your wonderful work! I'm working on reproducing your results on Madoka and Otobai. After training with your given configs, I can only get results with low PSNRs (~31 for Madoka and ~24 for Otobai). What could be the problem?

    opened by sjtuytc 2
  • near clip loss is always zero

    When calculating nearclip_loss in scene_rep_reconstruction, it always seems to be 0.

    if cfg_train.weight_nearclip > 0:
        near_thres = data_dict['near_clip'] / model.scene_radius[0].item()
        near_mask = (render_result['t'] < near_thres)
        density = render_result['raw_density'][near_mask]
        if len(density):
            nearclip_loss = (density - density.detach()).sum()
            loss += cfg_train.weight_nearclip * nearclip_loss
    

    What was the original intent of that code?
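
    For context, note that a zero loss value does not imply a zero gradient: subtracting density.detach() keeps the reported value at 0 while the sum over density still back-propagates. A minimal PyTorch sketch of this mechanic:

    import torch

    density = torch.tensor([0.3, 0.7], requires_grad=True)
    loss = (density - density.detach()).sum()
    print(loss.item())   # 0.0 -- the printed loss value is always zero
    loss.backward()
    print(density.grad)  # tensor([1., 1.]) -- the gradient still penalizes density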

    opened by sp9103 2
  • Do camera poses all distributed in one plane affect the results?

    Thanks for your great work!!! @sunset1995 If the camera poses circle around the object and rotate within a single plane, will the bbox be very flat? In experiments on my own dataset, I found that the reconstructed objects are also flat.

    opened by zhouzhenghong-gt 2
  • About world_size for density

    Thanks for your great work. I want to know where you set "world_size" for the density construction, and what its shape is. https://github.com/sunset1995/DirectVoxGO/blob/main/lib/dvgo.py#L48

    opened by DRosemei 2
  • Problem with processed Tanks and Temples

    Hello everyone, thank you for releasing this great technique. I was testing your technique on your processed Tanks and Temples scenes; however, I noticed that the Ignatius scene seems to have a problem with the intrinsics. They are formatted differently from the other scenes, and the folder structure is also different. Because of this, I could not test on this scene. Thank you for your attention!

    opened by danperazzo 2
  • several reconstructed objects in the scene but cannot combine together to a single object

    I tried to reconstruct a human in the middle using the CMU Panoptic data, but there seem to be several reconstructed humans in the scene. How can I adjust the camera system?

    opened by neilgogogo 2
  • About raw2alpha cuda code

    Hi, thanks for your great work. The code that calculates the alpha value from the density uses:

    alpha = 1 - (1 + exp(density + shift)) ^ (-interval)

    However, I found that the original NeRF calculates alpha from density as:

    alpha = 1 - exp(-density * interval)

    My questions are:

    (1) What is the difference between those two representations?

    (2) Why do we have to use a custom CUDA kernel for calculating alpha instead of the torch library? I am curious about this in terms of the speed improvement.
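
    For reference, the two formulations differ only by the density activation: exp(-softplus(x) * s) == (1 + exp(x)) ** (-s), so the DVGO formula is exactly NeRF's alpha applied to softplus(density + shift). A minimal PyTorch sketch (not the repo's fused CUDA kernel; shift and interval follow the notation above):

    import torch
    import torch.nn.functional as F

    density, shift, interval = torch.randn(8), -3.0, 0.5

    # DVGO-style alpha with the shifted-softplus activation folded in:
    a1 = 1 - (1 + torch.exp(density + shift)) ** (-interval)
    # NeRF-style alpha applied to the softplus-activated density:
    a2 = 1 - torch.exp(-F.softplus(density + shift) * interval)

    print(torch.allclose(a1, a2))  # True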

    opened by MinJunKang 1
  • Maybe some typos in total_variation_kernel.cu

    In lib/cuda/total_variation_kernel.cu:

        grad_to_add += (k==0      ? 0 : wz * clamp(param[index]-param[index-1], -1.f, 1.f));
        grad_to_add += (k==sz_k-1 ? 0 : wz * clamp(param[index]-param[index+1], -1.f, 1.f));
        grad_to_add += (j==0      ? 0 : wy * clamp(param[index]-param[index-sz_k], -1.f, 1.f));
        grad_to_add += (j==sz_j-1 ? 0 : wy * clamp(param[index]-param[index+sz_k], -1.f, 1.f));
        grad_to_add += (i==0      ? 0 : wz * clamp(param[index]-param[index-sz_k*sz_j], -1.f, 1.f));
        grad_to_add += (i==sz_i-1 ? 0 : wz * clamp(param[index]-param[index+sz_k*sz_j], -1.f, 1.f));
        grad[index] += grad_to_add;
    

    It seems wx is not used there (the i-axis terms reuse wz); maybe these are typos?

    opened by chuchong 1
  • IndexError: list index out of range

    Hi, thanks for your work.

    I am using CUDA 11.3 and PyTorch 1.11.0. When I run python run.py --config configs/lf/ship.py --render_test, I get:

    Using /home/s/.cache/torch_extensions/py39_cu113 as PyTorch extensions root...
    Detected CUDA files, patching ldflags
    Emitting ninja build file /home/s/.cache/torch_extensions/py39_cu113/adam_upd_cuda/build.ninja...
    Building extension module adam_upd_cuda...
    Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
    ninja: no work to do.
    Loading extension module adam_upd_cuda...
    Using /home/s/.cache/torch_extensions/py39_cu113 as PyTorch extensions root...
    Detected CUDA files, patching ldflags
    Emitting ninja build file /home/s/.cache/torch_extensions/py39_cu113/render_utils_cuda/build.ninja...
    Building extension module render_utils_cuda...
    Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
    ninja: no work to do.
    Loading extension module render_utils_cuda...
    Using /home/s/.cache/torch_extensions/py39_cu113 as PyTorch extensions root...
    Detected CUDA files, patching ldflags
    Emitting ninja build file /home/s/.cache/torch_extensions/py39_cu113/total_variation_cuda/build.ninja...
    Building extension module total_variation_cuda...
    Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
    ninja: no work to do.
    Loading extension module total_variation_cuda...
    Using /home/s/.cache/torch_extensions/py39_cu113 as PyTorch extensions root...
    No modifications detected for re-loaded extension module render_utils_cuda, skipping build step...
    Loading extension module render_utils_cuda...
    Using /home/s/.cache/torch_extensions/py39_cu113 as PyTorch extensions root...
    Creating extension directory /home/s/.cache/torch_extensions/py39_cu113/ub360_utils_cuda...
    Detected CUDA files, patching ldflags
    Emitting ninja build file /home/s/.cache/torch_extensions/py39_cu113/ub360_utils_cuda/build.ninja...
    Building extension module ub360_utils_cuda...
    Allowing ninja to set a default number of workers...
(overridable by setting the environment variable MAX_JOBS=N) [1/3] c++ -MMD -MF ub360_utils.o.d -DTORCH_EXTENSION_NAME=ub360_utils_cuda -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="gcc" -DPYBIND11_STDLIB="libstdcpp" -DPYBIND11_BUILD_ABI="cxxabi1011" -isystem /home/s/anaconda3/lib/python3.9/site-packages/torch/include -isystem /home/s/anaconda3/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -isystem /home/s/anaconda3/lib/python3.9/site-packages/torch/include/TH -isystem /home/s/anaconda3/lib/python3.9/site-packages/torch/include/THC -isystem /usr/local/cuda-11.3/include -isystem /home/s/anaconda3/include/python3.9 -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++14 -c /home/s/DirectVoxGO/lib/cuda/ub360_utils.cpp -o ub360_utils.o In file included from /home/s/anaconda3/lib/python3.9/site-packages/torch/include/c10/core/DeviceType.h:8, from /home/s/anaconda3/lib/python3.9/site-packages/torch/include/c10/core/Device.h:3, from /home/s/anaconda3/lib/python3.9/site-packages/torch/include/c10/core/Allocator.h:6, from /home/s/anaconda3/lib/python3.9/site-packages/torch/include/ATen/ATen.h:7, from /home/s/anaconda3/lib/python3.9/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3, from /home/s/anaconda3/lib/python3.9/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4, from /home/s/anaconda3/lib/python3.9/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3, from /home/s/anaconda3/lib/python3.9/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3, from /home/s/anaconda3/lib/python3.9/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3, from /home/s/anaconda3/lib/python3.9/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3, from /home/s/anaconda3/lib/python3.9/site-packages/torch/include/torch/csrc/api/include/torch/all.h:8, from /home/s/anaconda3/lib/python3.9/site-packages/torch/include/torch/extension.h:4, from /home/s/DirectVoxGO/lib/cuda/ub360_utils.cpp:1: /home/s/DirectVoxGO/lib/cuda/ub360_utils.cpp: In function ‘at::Tensor cumdist_thres(at::Tensor, float)’: /home/s/DirectVoxGO/lib/cuda/ub360_utils.cpp:11:42: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). 
[-Wdeprecated-declarations] 11 | #define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor") | ^ /home/s/DirectVoxGO/lib/cuda/ub360_utils.cpp:13:24: note: in expansion of macro ‘CHECK_CUDA’ 13 | #define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x) | ^~~~~~~~~~ /home/s/DirectVoxGO/lib/cuda/ub360_utils.cpp:16:3: note: in expansion of macro ‘CHECK_INPUT’ 16 | CHECK_INPUT(dist); | ^~~~~~~~~~~ In file included from /home/s/anaconda3/lib/python3.9/site-packages/torch/include/ATen/core/Tensor.h:3, from /home/s/anaconda3/lib/python3.9/site-packages/torch/include/ATen/DeviceGuard.h:4, from /home/s/anaconda3/lib/python3.9/site-packages/torch/include/ATen/ATen.h:11, from /home/s/anaconda3/lib/python3.9/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3, from /home/s/anaconda3/lib/python3.9/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4, from /home/s/anaconda3/lib/python3.9/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3, from /home/s/anaconda3/lib/python3.9/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3, from /home/s/anaconda3/lib/python3.9/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3, from /home/s/anaconda3/lib/python3.9/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3, from /home/s/anaconda3/lib/python3.9/site-packages/torch/include/torch/csrc/api/include/torch/all.h:8, from /home/s/anaconda3/lib/python3.9/site-packages/torch/include/torch/extension.h:4, from /home/s/DirectVoxGO/lib/cuda/ub360_utils.cpp:1: /home/s/anaconda3/lib/python3.9/site-packages/torch/include/ATen/core/TensorBody.h:210:30: note: declared here 210 | DeprecatedTypeProperties & type() const { | ^~~~ [2/3] /usr/local/cuda-11.3/bin/nvcc -DTORCH_EXTENSION_NAME=ub360_utils_cuda -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="gcc" -DPYBIND11_STDLIB="libstdcpp" -DPYBIND11_BUILD_ABI="cxxabi1011" -isystem /home/s/anaconda3/lib/python3.9/site-packages/torch/include -isystem /home/s/anaconda3/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -isystem /home/s/anaconda3/lib/python3.9/site-packages/torch/include/TH -isystem /home/s/anaconda3/lib/python3.9/site-packages/torch/include/THC -isystem /usr/local/cuda-11.3/include -isystem /home/s/anaconda3/include/python3.9 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS -D__CUDA_NO_HALF_CONVERSIONS -D__CUDA_NO_BFLOAT16_CONVERSIONS -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 --compiler-options '-fPIC' -std=c++14 -c /home/s/DirectVoxGO/lib/cuda/ub360_utils_kernel.cu -o ub360_utils_kernel.cuda.o [3/3] c++ ub360_utils.o ub360_utils_kernel.cuda.o -shared -L/home/s/anaconda3/lib/python3.9/site-packages/torch/lib -lc10 -lc10_cuda -ltorch_cpu -ltorch_cuda_cu -ltorch_cuda_cpp -ltorch -ltorch_python -L/usr/local/cuda-11.3/lib64 -lcudart -o ub360_utils_cuda.so Loading extension module ub360_utils_cuda... 
    Traceback (most recent call last):
      File "/home/s/DirectVoxGO/run.py", line 593, in <module>
        data_dict = load_everything(args=args, cfg=cfg)
      File "/home/s/DirectVoxGO/run.py", line 167, in load_everything
        data_dict = load_data(cfg.data)
      File "/home/s/DirectVoxGO/lib/load_data.py", line 127, in load_data
        images, poses, render_poses, hwf, K, i_split = load_nerfpp_data(args.datadir)
      File "/home/s/DirectVoxGO/lib/load_nerfpp.py", line 122, in load_nerfpp_data
        K_flatten = np.loadtxt(tr_K[0])
    IndexError: list index out of range

    I cannot solve the above error. How can I fix this? I hope to hear from you soon! Thank you :)

    opened by songjueun 1
  • RuntimeError: value cannot be converted to type int without overflow

    I want to reproduce this work on my platform. It seems the environment is already set up.

    (the torch version is 1.12.1+cu113, the gcc version is 7.5.0, and the nvcc version is 11.0.194)

    However, when I ran the command "python run.py --config configs/nerf/hotdog.py --render_test", the overflow issue occurred.

    The traceback follows:

    Traceback (most recent call last):
      File "run.py", line 630, in <module>
        train(args, cfg, data_dict)
      File "run.py", line 545, in train
        data_dict=data_dict, stage='coarse')
      File "run.py", line 449, in scene_rep_reconstruction
        **render_kwargs)
      File "/opt/pyenv/versions/mlhw/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
        return forward_call(*input, **kwargs)
      File "/home/user/DirectVoxGO/lib/dvgo.py", line 309, in forward
        rays_o=rays_o, rays_d=rays_d, **render_kwargs)
      File "/home/user/DirectVoxGO/lib/dvgo.py", line 288, in sample_ray
        ray_pts, mask_outbbox, ray_id, step_id, N_steps, t_min, t_max = render_utils_cuda.sample_pts_on_rays(
    RuntimeError: value cannot be converted to type int without overflow
    
    opened by jimjimjimjimjimjimjimjimjim 0
  • Unbounded scene config

    I see the coarse_train config setting is N_iters = 0 in nerf_unbounded_default.py. https://github.com/sunset1995/DirectVoxGO/blob/main/configs/nerf_unbounded/nerf_unbounded_default.py#L16 What's the purpose of this setting? It seems we skip the coarse stage.

    Another question: how can I inspect the fine model's training result, similar to the --export_coarse_only flag? It seems that if we skip the coarse stage, we don't have a model to inspect before the fine stage. Thanks.

    opened by Learningm 0
  • Can you share the DVGOv1 source code?

    Thanks for sharing this amazing work! 👍👍

    My team is following your work and I need to run a comparative experiment between DVGOv1 and DVGOv2, so could you release or share the DVGOv1 source code (w/o CUDA and distortion loss)?

    Thanks for your time, and looking forward to your future work!

    opened by martyLY 1
  • Do we need to init DenseGrid with random noise?

    Thanks for your great work! I am reading your code and find that you initialize the DenseGrid with zeros. Do we need to initialize the DenseGrid with random noise, as you did in TensoRFGrid?

    https://github.com/sunset1995/DirectVoxGO/blob/341e1fc4e96efff146d42cd6f31b8199a3e536f7/lib/grid.py#L45

    opened by CZ-Wu 0
  • Bad quality on custom unbounded inward dataset

    I am trying to use DVGO to reconstruct synthetic 3D scenes from the Replica dataset. I gathered an inward-facing trajectory of images and poses (generated by pose_spherical with 20 theta angles * 3 phi angles * 3 heights = 180 poses). Since I generate the poses first and then use habitat-sim to get the camera view at each pose, I don't have to run COLMAP. The following is the visualization of the camera poses using tools/vis_train.py.
    [screenshot: camera pose visualization]

    I use the config default_ubn_inward_facing.py with the nerfpp dataset type. After training, if I use the same training trajectory for testing, DVGO renders the views perfectly; this validates that my image-to-pose mapping is correct. However, if I use an adjusted, still inward-facing trajectory for testing, there are distortions as shown below (the first image is rendered by DVGO, the second is the ground truth from habitat-sim; they don't have the exact same pose, but show roughly the same view). [screenshots: DVGO rendering vs. habitat-sim ground truth]

    My questions:

    1. How can I improve DVGO's quality and reduce the distortions in my case?
    • How should I capture a trajectory on which DVGO would optimize better? Is the current trajectory too "regular" (oval-shaped), so that I should add some variation to the poses? Would it help to simply use a longer trajectory (denser theta angles and heights, a wider phi angle range) in the current oval shape?
    • What configuration should I tune to reduce the distortions?
    2. Both the camera bounding box and the white background on the right of the DVGO-rendered image suggest that DVGO computes a bounding box that doesn't cover the whole room, and that regions outside the bounding box are not learned and are left white. I actually have the exact bounding box computed for the 3D scene. Is there a way to manually set the bounding box for DVGO?
    opened by desaixie 1