Point-NeRF: Point-based Neural Radiance Fields

Overview

Project Sites | Paper | Primary contact: Qiangeng Xu

Point-NeRF uses neural 3D point clouds, with associated neural features, to model a radiance field. Point-NeRF can be rendered efficiently by aggregating neural point features near scene surfaces, in a ray marching-based rendering pipeline. Moreover, Point-NeRF can be initialized via direct inference of a pre-trained deep network to produce a neural point cloud; this point cloud can be finetuned to surpass the visual quality of NeRF with 30X faster training time. Point-NeRF can be combined with other 3D reconstruction methods and handles the errors and outliers in such methods via a novel pruning and growing mechanism.
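To make the idea concrete, below is a minimal, self-contained PyTorch sketch of the general idea: features of neural points near a shading location are blended with inverse-distance weights and would then be decoded by an MLP into density and color. This is an illustration only, not the repository's implementation; the function and tensor names are made up for the example.

    # Illustrative sketch only (not the repo's code): blend features of nearby neural points.
    import torch

    def aggregate_point_features(query_xyz, point_xyz, point_feat, radius=0.1):
        # query_xyz: (3,) shading location; point_xyz: (N, 3); point_feat: (N, C)
        dist = torch.norm(point_xyz - query_xyz, dim=-1)            # (N,)
        near = dist < radius                                        # only neural points near the surface
        if near.sum() == 0:
            return None                                             # empty space -> nothing to shade
        w = 1.0 / (dist[near] + 1e-6)
        w = w / w.sum()                                             # normalized inverse-distance weights
        return (w[:, None] * point_feat[near]).sum(dim=0)           # blended feature, fed to an MLP

    # Toy usage: 1000 random neural points with 32-channel features.
    pts, feats = torch.rand(1000, 3), torch.randn(1000, 32)
    feat = aggregate_point_features(torch.tensor([0.5, 0.5, 0.5]), pts, feats)
    print(None if feat is None else feat.shape)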

Reference

Please cite our paper if you find this work useful:
Point-NeRF: Point-based Neural Radiance Fields.

@article{xu2022point,
  title={Point-NeRF: Point-based Neural Radiance Fields},
  author={Xu, Qiangeng and Xu, Zexiang and Philip, Julien and Bi, Sai and Shu, Zhixin and Sunkavalli, Kalyan and Neumann, Ulrich},
  journal={arXiv preprint arXiv:2201.08845},
  year={2022}
}

Overall Instructions

  1. Please first install the libraries as below and download/prepare the datasets as instructed.
  2. Point Initialization: Download the pre-trained MVSNet checkpoints as described below, then either train the feature extraction from scratch or directly download our pre-trained models (so that the 'MVSNet' and 'init' folders end up in the checkpoints folder).
  3. Per-scene Optimization: Download pre-trained models or optimize from scratch as instructed.

We provide all the checkpoint files (Google Drive) and all the test result images and scores (Google Drive).

Installation

Requirements

All the code is tested in the following environments:

  • Linux (tested on Ubuntu 16.04, 18.04, 20.04)
  • Python 3.6+
  • PyTorch 1.7 or higher (tested on PyTorch 1.7, 1.8.1, 1.9, 1.10)
  • CUDA 10.2 or higher

Install

Install the dependent libraries as follows:

  • Install the dependent python libraries:
pip install torch==1.8.1+cu102 h5py
pip install imageio scikit-image

We developed our code with PyTorch 1.8.1 and PyCUDA 2021.1.
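A quick sanity check of the environment, assuming PyTorch with CUDA support and PyCUDA were installed as above and a CUDA-capable GPU is visible:

    # Verify torch sees CUDA and pycuda can create a context (assumes a visible GPU).
    import torch
    print(torch.__version__, torch.version.cuda, torch.cuda.is_available())

    import pycuda.autoinit           # initializes a CUDA context; fails if the driver/toolkit is misconfigured
    import pycuda.driver as drv
    print(drv.Device(0).name())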

Data Preparation

The data layout should look like this:

pointnerf
├── data_src
│   ├── dtu
│   │   ├── Cameras
│   │   ├── Depths
│   │   ├── Depths_raw
│   │   ├── Rectified
│   ├── nerf
│   │   ├── nerf_synthetic
│   ├── nsvf
│   │   ├── Synthetic_NeRF
│   ├── scannet
│   │   ├── scans
│   │   │   ├── scene0101_04
│   │   │   ├── scene0241_01
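Before launching the scripts, you can optionally verify the layout with a small hypothetical helper (the folder names below simply follow the tree above; keep only the datasets you actually downloaded):

    # Hypothetical layout check; edit the list to match the datasets you downloaded.
    import os

    expected = [
        "data_src/dtu/Cameras",
        "data_src/dtu/Depths",
        "data_src/dtu/Depths_raw",
        "data_src/dtu/Rectified",
        "data_src/nerf/nerf_synthetic",
        "data_src/nsvf/Synthetic_NeRF",
        "data_src/scannet/scans/scene0101_04",
        "data_src/scannet/scans/scene0241_01",
    ]
    for path in expected:
        print(("ok      " if os.path.isdir(path) else "MISSING ") + path)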

DTU:

Download the preprocessed DTU training data and Depths_raw from the original MVSNet repo and unzip them under data_src/dtu/.

NeRF Synthetic

Download nerf_synthetic.zip from here and extract it under "data_src/nerf/".

Tanks & Temples

Follow Neural Sparse Voxel Fields and download the Tanks & Temples data (download (.zip); the 0_* scenes are for training and the 1_* scenes for testing) under "data_src/nsvf/".

ScanNet

Download and extract ScanNet by following the instructions provided at http://www.scan-net.org/. The detailed steps are:

  • Go to http://www.scan-net.org and fill in and send the request form.
  • You will receive an email with download instructions and a download-scannet.py file. That file is for Python 2; you can use our download-scannet.py in the "data" directory for Python 3.
  • Clone the official repo:
    git clone https://github.com/ScanNet/ScanNet.git
    
  • Download specific scenes (used by NSVF):
     python data/download-scannet.py -o ../data_src/scannet/ --id scene0101_04
     python data/download-scannet.py -o ../data_src/scannet/ --id scene0241_01
    
  • Process the .sens files (a quick check of the exported output follows this list):
      python ScanNet/SensReader/python/reader.py --filename data_src/scannet/scans/scene0101_04/scene0101_04.sens  --output_path data_src/scannet/scans/scene0101_04/exported/ --export_depth_images --export_color_images --export_poses --export_intrinsics

      python ScanNet/SensReader/python/reader.py --filename data_src/scannet/scans/scene0241_01/scene0241_01.sens  --output_path data_src/scannet/scans/scene0241_01/exported/ --export_depth_images --export_color_images --export_poses --export_intrinsics
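After the export finishes, you can sanity-check one scene with a short numpy sketch. This assumes the exporter's usual output layout (color/, depth/, pose/, and intrinsic/ folders with 4x4 matrices stored as text), which may differ if the reader script changes:

    # Sketch: inspect one exported ScanNet frame (assumes pose/ and intrinsic/ text files exist).
    import numpy as np

    root = "data_src/scannet/scans/scene0101_04/exported"
    pose = np.loadtxt(root + "/pose/0.txt")                   # 4x4 camera-to-world matrix for frame 0
    K = np.loadtxt(root + "/intrinsic/intrinsic_color.txt")   # 4x4 color-camera intrinsics
    print(pose.shape, K.shape)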
    

Point Initialization / Generalization:

  Download pre-trained MVSNet checkpoints:

We trained MVSNet on DTU. You can download the "MVSNet" directory from Google Drive and place it under "checkpoints/".

  Train 2D feature extraction and point representation

  Directly use our trained checkpoint files:

Download the "init" directory from Google Drive and place it under "checkpoints/".

  Or train from scratch:

Train for point features of 63 channels (as in the paper)

bash dev_scripts/ete/dtu_dgt_d012_img0123_conf_color_dir_agg2.sh

Train for point features of 32 channels (better for per-scene optimization)

bash dev_scripts/ete/dtu_dgt_d012_img0123_conf_agg2_32_dirclr20.sh

After training, you should pick a checkpoint and rename it as the best checkpoint, e.g.:

cp checkpoints/dtu_dgt_d012_img0123_conf_color_dir_agg2/250000_net_ray_marching.pth  checkpoints/dtu_dgt_d012_img0123_conf_color_dir_agg2/best_net_ray_marching.pth

cp checkpoints/dtu_dgt_d012_img0123_conf_color_dir_agg2/250000_net_mvs.pth  checkpoints/dtu_dgt_d012_img0123_conf_color_dir_agg2/best_net_mvs.pth

  Test feed-forward inference on DTU scenes

These scenes are the ones selected by MVSNeRF; please also refer to their code to understand the metric calculation (a minimal PSNR sketch follows the scripts below).

bash dev_scripts/dtu_test_inf/inftest_scan1.sh
bash dev_scripts/dtu_test_inf/inftest_scan8.sh
bash dev_scripts/dtu_test_inf/inftest_scan21.sh
bash dev_scripts/dtu_test_inf/inftest_scan103.sh
bash dev_scripts/dtu_test_inf/inftest_scan114.sh
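As a reference for the reported numbers, PSNR is the most common of these metrics; a minimal computation looks like the sketch below (SSIM and LPIPS need their own packages, and the exact evaluation protocol is in the MVSNeRF code):

    # Minimal PSNR between a rendered image and ground truth, both float arrays scaled to [0, 1].
    import numpy as np

    def psnr(pred, gt):
        mse = np.mean((pred - gt) ** 2)
        return -10.0 * np.log10(mse + 1e-10)

    # Toy example with random images; replace with a rendered/ground-truth pair.
    print(psnr(np.random.rand(64, 64, 3), np.random.rand(64, 64, 3)))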

Per-scene Optimization:

(Please visit the project sites to see the original videos of the above scenes; they lose quality when converted to GIF files here.)

Download per-scene optimized Point-NeRFs

You can skip training and download the "nerfsynth", "tanksntemples" and "scannet" folders from Google Drive, and place them in "checkpoints/".

pointnerf
├── checkpoints
│   ├── init
│   ├── MVSNet
│   ├── nerfsynth
│   ├── scannet
│   ├── tanksntemples

For each scene, we provide the initialized point features and network weights ("0_net_ray_marching.pth"), as well as the points and weights at 20K steps ("20000_net_ray_marching.pth") and at 200K steps ("200000_net_ray_marching.pth").
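If you want to see what one of these files contains, you can load it on the CPU and list the stored entries; the exact keys are not documented here, so just print whatever is there:

    # Peek inside a per-scene checkpoint; entries vary, so list whatever is stored.
    import torch

    ckpt = torch.load("checkpoints/nerfsynth/lego/200000_net_ray_marching.pth", map_location="cpu")
    for key, val in ckpt.items():
        info = tuple(val.shape) if hasattr(val, "shape") else type(val).__name__
        print(key, info)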

Test the per-scene optimized Point-NeRFs

NeRF Synthetic

test scripts
    bash dev_scripts/w_n360/chair_test.sh
    bash dev_scripts/w_n360/drums_test.sh
    bash dev_scripts/w_n360/ficus_test.sh
    bash dev_scripts/w_n360/hotdog_test.sh
    bash dev_scripts/w_n360/lego_test.sh
    bash dev_scripts/w_n360/materials_test.sh
    bash dev_scripts/w_n360/mic_test.sh
    bash dev_scripts/w_n360/ship_test.sh

ScanNet

test scripts
    bash dev_scripts/w_scannet_etf/scane101_test.sh
    bash dev_scripts/w_scannet_etf/scane241_test.sh

Tanks & Temples

test scripts
    bash dev_scripts/w_tt_ft/barn_test.sh
    bash dev_scripts/w_tt_ft/caterpillar_test.sh
    bash dev_scripts/w_tt_ft/family_test.sh
    bash dev_scripts/w_tt_ft/ignatius_test.sh
    bash dev_scripts/w_tt_ft/truck_test.sh

Per-scene optimization from scratch

Make sure the "checkpoints" folder contains "init" and "MVSNet". The training scripts will run point initialization if there are no ".pth" files in a scene folder; otherwise they resume from the latest ".pth" file and train until the iteration reaches "maximum_step" (a sketch of this resume rule follows below).
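For illustration only (this is not the scripts' actual code), the resume rule described above amounts to finding the highest-numbered "*_net_ray_marching.pth" in the scene folder and falling back to initialization when none exists:

    # Illustrative resume rule: resume from the latest step if any checkpoint exists.
    import glob, os, re

    def latest_step(scene_dir):
        steps = []
        for path in glob.glob(os.path.join(scene_dir, "*_net_ray_marching.pth")):
            m = re.match(r"(\d+)_net_ray_marching\.pth$", os.path.basename(path))
            if m:
                steps.append(int(m.group(1)))
        return max(steps) if steps else None       # None -> run point initialization first

    step = latest_step("checkpoints/nerfsynth/lego")
    print("start from initialization" if step is None else "resume from step %d" % step)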

NeRF Synthetic

train scripts
    bash dev_scripts/w_n360/chair.sh
    bash dev_scripts/w_n360/drums.sh
    bash dev_scripts/w_n360/ficus.sh
    bash dev_scripts/w_n360/hotdog.sh
    bash dev_scripts/w_n360/lego.sh
    bash dev_scripts/w_n360/materials.sh
    bash dev_scripts/w_n360/mic.sh
    bash dev_scripts/w_n360/ship.sh

ScanNet

train scripts
    bash dev_scripts/w_scannet_etf/scane101.sh
    bash dev_scripts/w_scannet_etf/scane241.sh

Tanks & Temples

train scripts
    bash dev_scripts/w_tt_ft/barn.sh
    bash dev_scripts/w_tt_ft/caterpillar.sh
    bash dev_scripts/w_tt_ft/family.sh
    bash dev_scripts/w_tt_ft/ignatius.sh
    bash dev_scripts/w_tt_ft/truck.sh

Acknowledgement

Our repo is developed based on MVSNet, NeRF, MVSNeRF, and NSVF.

Please also consider citing the corresponding papers.

The project is conducted collaboratively between Adobe Research and University of Southern California.

LICENSE

The repo is licensed under Creative Commons Attribution-NonCommercial-ShareAlike 2.0, and is restricted to academic use only. See LICENSE.

Comments
  • FileNotFoundError of pointcloud

    Hi, thanks for your great job! I have the same problem when I run "bash dev_scripts/w_n360/ship_test.sh". It fails with "FileNotFoundError: [Errno 2] No such file or directory" at line 118 of load_blender.py. My data path is pointnerf/data_src/nerf/nerf_synthetic/ship, and it includes 3 .json files and 3 folders containing some .png pictures. At the same time, the checkpoints folder only includes some .pth files; it doesn't seem to contain the saved point cloud.

    Could you please tell me where the "point_path" is? Thank you~

    opened by Kathygg90 19
  • loss item

    Thanks for sharing this work. I am a little confused about the loss items. I know that 'coarse_raycolor' corresponds to L_render in the paper; what are 'ray_masked_loss' and 'ray_miss_loss'? And if I understand correctly, 'zero_one_loss_items' is the L_sparse in the paper. But since we already set the neural points as input data and didn't load the MVSNet model, can the parameters of MVSNet also be updated? (When I inspect the variables, the only model is "ray_marching".) Also, you set the neural points as nn.Parameter to update them; is there any reason behind that?

    opened by roywithfiringblade 9
  • Arguments are required --name

    Hi everyone, I have a problem when launching bash scripts like "bash dev_scripts/w_n360/chair.sh": the 'name' argument is not detected and the terminal returns the error: train_ft.py: error: the following arguments are required: --name. Did any of you have the same problem?

    opened by hugobl1 4
  • Why is it slow for Per-scene Optimization?

    I run this command to do per-scene optimization from scratch: bash dev_scripts/w_n360/lego.sh

    In the paper, the speed should be 2 min / 1K iters. However, on my single 2080 Ti, it takes 100 min / 1K iters. Has anyone else met this problem?

    opened by ShaoTengLiu 3
  • pycuda.driver.CompileError

    Hello, thanks for the awesome work. However, when I run the per-scene optimization on NeRF Synthetic, I get the error below:

    ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Debug Mode ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    /home/zhaoboming/anaconda3/envs/nerf-w/lib/python3.7/site-packages/numpy/core/shape_base.py:420: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray.
      arrays = [asanyarray(arr) for arr in arrays]
    /home/zhaoboming/anaconda3/envs/nerf-w/lib/python3.7/site-packages/torch/functional.py:568: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /opt/conda/conda-bld/pytorch_1646755953518/work/aten/src/ATen/native/TensorShape.cpp:2228.)
      return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
    dataset total: train 100
    dataset [NerfSynthFtDataset] was created
    ../checkpoints/nerfsynth/lego/*_net_ray_marching.pth
    ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Continue training from 200000 epoch Iter: 200000 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    opt.act_type!!!!!!!!! LeakyReLU
    self.points_embeding torch.Size([1, 479862, 32])
    querier device cuda:3 3
    Traceback (most recent call last):
      File "train_ft.py", line 1084, in <module>
        main()
      File "train_ft.py", line 637, in main
        model = create_model(opt)
      File "/mnt/data1/zhaoboming/pointnerf/run/../models/__init__.py", line 39, in create_model
        instance.initialize(opt)
      File "/mnt/data1/zhaoboming/pointnerf/run/../models/base_rendering_model.py", line 369, in initialize
        self.create_network_models(opt)
      File "/mnt/data1/zhaoboming/pointnerf/run/../models/mvs_points_volumetric_model.py", line 44, in create_network_models
        super(MvsPointsVolumetricModel, self).create_network_models(opt)
      File "/mnt/data1/zhaoboming/pointnerf/run/../models/neural_points_volumetric_model.py", line 157, in create_network_models
        params = self.get_additional_network_params(opt)
      File "/mnt/data1/zhaoboming/pointnerf/run/../models/neural_points_volumetric_model.py", line 142, in get_additional_network_params
        self.neural_points = NeuralPoints(opt.point_features_dim, opt.num_point, opt, self.device, checkpoint=checkpoint_path, feature_init_method=opt.feature_init_method, reg_weight=0., feedforward=opt.feedforward)
      File "/mnt/data1/zhaoboming/pointnerf/run/../models/neural_points/neural_points.py", line 331, in __init__
        self.querier = self.lighting_fast_querier(device, self.opt)
      File "/mnt/data1/zhaoboming/pointnerf/run/../models/neural_points/query_point_indices_worldcoords.py", line 39, in __init__
        self.claim_occ, self.map_coor2occ, self.fill_occ2pnts, self.mask_raypos, self.get_shadingloc, self.query_along_ray = self.build_cuda()
      File "/mnt/data1/zhaoboming/pointnerf/run/../models/neural_points/query_point_indices_worldcoords.py", line 524, in build_cuda
        """, no_extern_c=True)
      File "/home/zhaoboming/anaconda3/envs/nerf-w/lib/python3.7/site-packages/pycuda-2021.1-py3.7-linux-x86_64.egg/pycuda/compiler.py", line 358, in __init__
        include_dirs,
      File "/home/zhaoboming/anaconda3/envs/nerf-w/lib/python3.7/site-packages/pycuda-2021.1-py3.7-linux-x86_64.egg/pycuda/compiler.py", line 298, in compile
        return compile_plain(source, options, keep, nvcc, cache_dir, target)
      File "/home/zhaoboming/anaconda3/envs/nerf-w/lib/python3.7/site-packages/pycuda-2021.1-py3.7-linux-x86_64.egg/pycuda/compiler.py", line 87, in compile_plain
        checksum.update(preprocess_source(source, options, nvcc).encode("utf-8"))
      File "/home/zhaoboming/anaconda3/envs/nerf-w/lib/python3.7/site-packages/pycuda-2021.1-py3.7-linux-x86_64.egg/pycuda/compiler.py", line 59, in preprocess_source
        "nvcc preprocessing of %s failed" % source_path, cmdline, stderr=stderr
    pycuda.driver.CompileError: nvcc preprocessing of /tmp/tmpeytiys5x.cu failed
    [command: nvcc --preprocess -arch sm_86 -I/mnt/data1/zhaoboming/anaconda3/envs/nerf-w/lib/python3.7/site-packages/pycuda-2021.1-py3.7-linux-x86_64.egg/pycuda/cuda /tmp/tmpeytiys5x.cu --compiler-options -P]
    [stderr: b'cc1plus: fatal error: cuda_runtime.h: No such file or directory\ncompilation terminated.\n']
    end loading


    PyCUDA ERROR: The context stack was not empty upon module cleanup.

    A context was still active when the context stack was being cleaned up. At this point in our execution, CUDA may already have been deinitialized, so there is no way we can finish cleanly. The program will be aborted now. Use Context.pop() to avoid this problem.

    So how can I fix it? Thanks.

    opened by BoMingZhao 3
  • "No such file or directory" only for scannet scene

    Hi, when I re-download the code repo and run from scratch, it still runs into the "No such file or directory" problem when I run scene101.sh. However, it works fine when I run from scratch on the NeRF Synthetic dataset. Is there some possibility that I should change some setting for the ScanNet scene? https://github.com/Xharlie/pointnerf/issues/7#issuecomment-1060343926

    opened by Gardlin 3
  • Loss

    In base_rendering_model, loss = self.l2loss(masked_output, masked_gt) * masked_gt.shape[1]; why multiply by masked_gt.shape[1]? https://github.com/Xharlie/pointnerf/blob/a614e1d15cc8409e14f80c7b3dd0091939fef758/models/base_rendering_model.py#L560

    opened by caiyongqi 3
  • How to use point cloud as input

    Hello authors, thank you very much for your paper and for releasing the code! I am very excited about this work. I have obtained point cloud data in .ply format; how do I use this data as input to Point-NeRF? I don't seem to find it in "README.md".

    opened by helloCZZ 3
  • Why you use 1088x640 instead of the original 1920x1080 for Tanks&Temples?

    Hi,

    We evaluated your code on the Tanks&Temples dataset and found that the images output by your model have a size (resolution) of 1088x640, with a different aspect ratio from the original image size of 1920x1080. It's also shown in your config file that you manually set it to 1088x640.

    For Table 9 in your paper, you compared the results with NSVF, and the numbers of NSVF were the same as those in NSVF paper, which also claims that "The image resolution is 1920 × 1080" for Tanks&Temples.

    Why did you choose the resolution of 1088x640? Were the metrics in Table 9 in your paper calculated with images with a resolution of 1088x640 or 1920x1080?

    Thanks.

    opened by immortalCO 2
  • DTU dataset download

    Hello,

    Thanks for the awesome work. However, I am not able to download the DTU training dataset from the gdrive link shared in the repo. It seems that the link is dead. Would it be possible for you to reactivate the link?

    Thanks, Aditya Vora

    opened by aditya-vora 2
  • ValueError when generate video

    Thank you for your work! There is a ValueError when I test the per-scene optimized Point-NeRFs, for example when running: bash dev_scripts/w_n360/chair_test.sh

    I checked line 92 in visualizer.py and found that total_step is not an integer but a string. Therefore, the filename cannot be generated, and this error is raised because of the mismatched variable type. You can fix this bug by converting total_step to int. Thank you for your time.

    opened by YuhsiHu 2
  • [Help] Segmentation Fault

    (point-nerf) zzh@zzh-pc:~/Projects/NeRF/pointnerf$ bash dev_scripts/w_n360/lego_cuda.sh dev_scripts/w_n360/lego_cuda.sh: line 291: 9526 Segmentation fault python train_ft_nonstop.py --experiment $name --scan $scan --data_root $data_root --dataset_name $dataset_name --model $model --which_render_func $which_render_func --which_blend_func $which_blend_func --out_channels $out_channels --num_pos_freqs $num_pos_freqs --num_viewdir_freqs $num_viewdir_freqs --random_sample $random_sample --random_sample_size $random_sample_size --batch_size $batch_size --maximum_step $maximum_step --plr $plr --lr $lr --lr_policy $lr_policy --lr_decay_iters $lr_decay_iters --lr_decay_exp $lr_decay_exp --gpu_ids $gpu_ids --checkpoints_dir $checkpoints_dir --save_iter_freq $save_iter_freq --niter $niter --niter_decay $niter_decay --n_threads $n_threads --pin_data_in_memory $pin_data_in_memory --train_and_test $train_and_test --test_num $test_num --test_freq $test_freq --test_num_step $test_num_step --test_color_loss_items $test_color_loss_items --print_freq $print_freq --bg_color $bg_color --split $split --which_ray_generation $which_ray_generation --near_plane $near_plane --far_plane $far_plane --dir_norm $dir_norm --which_tonemap_func $which_tonemap_func --load_points $load_points --resume_dir $resume_dir --resume_iter $resume_iter --feature_init_method $feature_init_method --agg_axis_weight $agg_axis_weight --agg_distance_kernel $agg_distance_kernel --radius_limit_scale $radius_limit_scale --depth_limit_scale $depth_limit_scale --vscale $vscale --kernel_size $kernel_size --SR $SR --K $K --P $P --NN $NN --agg_feat_xyz_mode $agg_feat_xyz_mode --agg_alpha_xyz_mode $agg_alpha_xyz_mode --agg_color_xyz_mode $agg_color_xyz_mode --save_point_freq $save_point_freq --raydist_mode_unit $raydist_mode_unit --agg_dist_pers $agg_dist_pers --agg_intrp_order $agg_intrp_order --shading_feature_mlp_layer0 $shading_feature_mlp_layer0 --shading_feature_mlp_layer1 $shading_feature_mlp_layer1 --shading_feature_mlp_layer2 $shading_feature_mlp_layer2 --shading_feature_mlp_layer3 $shading_feature_mlp_layer3 --shading_feature_num $shading_feature_num --dist_xyz_freq $dist_xyz_freq --shpnt_jitter $shpnt_jitter --shading_alpha_mlp_layer $shading_alpha_mlp_layer --shading_color_mlp_layer $shading_color_mlp_layer --which_agg_model $which_agg_model --color_loss_weights $color_loss_weights --num_feat_freqs $num_feat_freqs --dist_xyz_deno $dist_xyz_deno --apply_pnt_mask $apply_pnt_mask --point_features_dim $point_features_dim --color_loss_items $color_loss_items --feedforward $feedforward --trgt_id $trgt_id --depth_vid $depth_vid --ref_vid $ref_vid --manual_depth_view $manual_depth_view --pre_d_est $pre_d_est --depth_occ $depth_occ --manual_std_depth $manual_std_depth --visual_items $visual_items --appr_feature_str0 $appr_feature_str0 --init_view_num $init_view_num --feat_grad $feat_grad --conf_grad $conf_grad --dir_grad $dir_grad --color_grad $color_grad --depth_conf_thresh $depth_conf_thresh --bgmodel $bgmodel --vox_res $vox_res --act_type $act_type --geo_cnsst_num $geo_cnsst_num --point_conf_mode $point_conf_mode --point_dir_mode $point_dir_mode --point_color_mode $point_color_mode --normview $normview --prune_thresh $prune_thresh --prune_iter $prune_iter --full_comb $full_comb --sparse_loss_weight $sparse_loss_weight --default_conf $default_conf --prob_freq $prob_freq --prob_num_step $prob_num_step --prob_thresh $prob_thresh --prob_mul $prob_mul --prob_kernel_size $prob_kernel_size --prob_tiers $prob_tiers --alpha_range $alpha_range 
--ranges $ranges --vid $vid --vsize $vsize --wcoord_query $wcoord_query --max_o $max_o --zero_one_loss_items $zero_one_loss_items --zero_one_loss_weights $zero_one_loss_weights --prune_max_iter $prune_max_iter --far_thresh $far_thresh --bg_filtering $bg_filtering

    opened by zzh-ecnu 1
  • How to sample point on each ray?

    Could you please give more information about how to sample points in order to make sure the sample points are near the neural points? Do we need to calculate the distance between every voxel (defined by the CAGQ method) and the ray, then pick the nearest voxel? Thank you!

    opened by lorettaxu 0
  • Can everyone successfully configure the environment with torch==1.8.1+cu10.2?

    When I run the test code, there are several extra packages to install, and I am stuck at the step of installing inplace_abn. I cannot install it either by using pip or by directly running setup.py. It seems that the problem is caused by the high torch version.

    With torch==1.5.0, inplace_abn can be compiled without error.

    When it comes to 1.8.1, it throws an error about ['ninja', '-v']. Some blog articles say we can manually change '-v' to '--version', but that throws new errors:

    g++: error: ~/inplace_abn/build/temp.linux-x86_64-cpython-37/src/inplace_abn.o: No such file or directory
    g++: error: ~/inplace_abn/build/temp.linux-x86_64-cpython-37/src/inplace_abn_cpu.o: No such file or directory
    g++: error: ~/inplace_abn/build/temp.linux-x86_64-cpython-37/src/inplace_abn_cuda.o: No such file or directory
    g++: error: ~/inplace_abn/build/temp.linux-x86_64-cpython-37/src/utils.o: No such file or directory
    error: command '/usr/bin/g++' failed with exit code 1
    

    By using torch==1.5.0, we can get these src files to substitute for the missing files. But later, a new error occurs: ImportError: ~/anaconda3/envs/pointnerf/lib/python3.7/site-packages/inplace_abn-1.1.1.dev6+gd2728c8-py3.7-linux-x86_64.egg/inplace_abn/_backend.cpython-37m-x86_64-linux-gnu.so: undefined symbol: _ZN6caffe28TypeMeta21_typeMetaDataInstanceISt7complexIdEEEPKNS_6detail12TypeMetaDataEv

    Then I have no idea. Can anyone who succeeded in the compilation process help?

    opened by xrr-233 0
  • Neural Point Querying

    In Part H, Neural Point Querying, of the paper, evenly spaced 3D grids are mentioned, and the paper says that 'Since these grids in the perspective coordinate are cubic, in the world coordinate, they have shapes of spherical voxels'. Could you please give more detailed information about the perspective coordinate? And may I ask why the grids have the shape of spherical voxels in the world coordinate? Thank you!

    opened by lorettaxu 0
  • Question About “nearest_view” function when init Pcd from colmap

    Hi, Charlie! Thanks for your work. When I use a point cloud created by COLMAP (or a point cloud initialized from depth, as with ScanNet), train_ft.py calls a nearest_view function. As I understand it, this function computes which view each point belongs to, so that the query_embeding function can later extract the initial features. My question: in the third-to-last line, the inner product between the point direction and the direction of the camera-center pixel is combined in a weighted way with the distance from the point to campos. Why is the weight on the distance 1/200? What was your reasoning here, and what kinds of datasets is it suited for? Can I adjust this number arbitrarily?

    opened by MobiusLqm 1
  • COLMAP settings for point cloud reconstruction for nerf dataset

    Hello, thanks for sharing the code for the amazing work.

    I was curious to know the COLMAP settings you use to reconstruct the point cloud for the NeRF dataset. Would it be possible to share the script for doing the reconstruction? I checked the links that you provided in the repo; however, they only contain the models and not the script to generate them.

    Thanks, Aditya

    opened by aditya-vora 1