NeROIC: Neural Object Capture and Rendering from Online Image Collections


This repository contains the source code for the paper NeROIC: Neural Object Capture and Rendering from Online Image Collections by Zhengfei Kuang, Kyle Olszewski, Menglei Chai, Zeng Huang, Panos Achlioptas, and Sergey Tulyakov.

The code is coming soon. For more information, please check out the project website.

Overview

Our two-stage model takes images of an object captured under varying conditions as input. Given camera poses and object foreground masks acquired by other state-of-the-art methods, we first optimize the geometry of the captured object and refine the camera poses by training a NeRF-based network. We then compute surface normals from the geometry (represented as a density function) using our normal extraction layer. Finally, our second-stage model decomposes the object's material properties and solves for the lighting conditions of each image.
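As a sketch of the normal-extraction idea above: surface normals can be read off a density field as its negative, normalized gradient. The snippet below is an illustrative reimplementation with a toy density function, not the paper's actual layer:

```python
import numpy as np

def extract_normals(density_fn, pts, eps=1e-4):
    """Estimate surface normals as the negative, normalized gradient
    of a density field, using central finite differences.

    density_fn: callable mapping (N, 3) points -> (N,) densities
    pts:        (N, 3) query points
    """
    grads = np.zeros_like(pts)
    for axis in range(3):
        offset = np.zeros(3)
        offset[axis] = eps
        grads[:, axis] = (density_fn(pts + offset) - density_fn(pts - offset)) / (2 * eps)
    normals = -grads  # density decreases moving out of the object
    norms = np.linalg.norm(normals, axis=-1, keepdims=True)
    return normals / np.maximum(norms, 1e-8)

# Toy density field peaking at the origin (a fuzzy sphere).
sphere = lambda p: np.exp(-np.linalg.norm(p, axis=-1))
n = extract_normals(sphere, np.array([[1.0, 0.0, 0.0]]))
print(n)  # points away from the origin, i.e. ~[[1, 0, 0]]
```

In the paper the gradient comes from the learned density network rather than finite differences, but the normalization step is the same idea.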


Novel View Synthesis

Given online images of a common object, our model can synthesize novel views of the object under the lighting conditions from the training images.

nvs.mp4


Material Decomposition

material.mp4


Relighting

relighting.mp4

Citation

If you find this useful, please cite the following:

@article{kuang2021neroic,
  author = {Kuang, Zhengfei and Olszewski, Kyle and Chai, Menglei and Huang, Zeng and Achlioptas, Panos and Tulyakov, Sergey},
  title = {{NeROIC}: Neural Object Capture and Rendering from Online Image Collections},
  journal = {Computing Research Repository (CoRR)},
  volume = {abs/2201.02533},
  year = {2022}
}
Comments
  • Is there any way to export 3D models to run on general 3D software


    Hello, I came here from the instant-ngp issue "export of 3D model is not clear ...". Is there any way to export clean 3D models from this paper's method to run in general 3D software?

    opened by liuguicen 2
  • Relighting problems


    Thanks to the authors for your contributions. Here I have some questions about relighting:

    1. When I run test_relighting.py on the milkbox sample with an outdoor image (downloaded from Google), the relighting results seem a little bad. After checking the code, I found that the relighting results depend heavily on the setting of args.test_env_maxthres. I am curious how to choose the env image and set args.test_env_maxthres to achieve better relighting results.

    The outdoor image we used while testing the milkbox sample: outdoor

    The relighting results: temp

    2. Section 3.5 of the paper says the purpose of the rendering network is to estimate the lighting of each input image and the material properties of the object, but self.env_lights in class NeROICRenderer keeps its value unchanged after initialization. I am curious how the network learns the various illumination conditions.
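A plausible reading of args.test_env_maxthres is that it caps environment-map radiance so a few very bright texels (e.g. the sun) don't dominate the shading; lowering the threshold softens highlights, raising it preserves them. A hypothetical sketch of such a clamp, not the repo's actual code:

```python
import numpy as np

def clamp_env_map(env, max_thres):
    """Cap HDR environment-map radiance at max_thres so a handful of
    very bright texels don't wash out the rendered shading."""
    return np.minimum(env, max_thres)

env = np.array([0.2, 1.5, 40.0])  # toy radiance values; 40.0 is a "sun" texel
print(clamp_env_map(env, 4.0))    # the 40.0 texel is capped at 4.0
```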
    opened by Menpinland 2
  • Can NeROIC run on CPU?


    I modified train.py, generate_normal.py and dataset.py to use the CPU device type, then ran python train.py --config configs/milkbox_geometry.yaml --datadir ./data/milkbox_dataset, but no 'epoch=29.ckpt' file was generated.

    opened by awenhaowenchao 2
  • Image names covered in brine, custom datasets and LLFF compatibility


    Your dataset structure follows common LLFF, and I understand that the main issue with custom datasets and LLFF is the lack of file names being presented to the model. I had similar issues with nvdiffrec and a simple list solves any number of memory leaks that happen when loading images/files rejected by colmap.

    A "view_imgs.txt" is pretty important, I'd think, and I'm glad some of the example datasets use a poses.npy. I do not understand the reasoning behind using remove.bg masks, constructing datasets with another .db file, and pickling a list of files (instead of a readable .txt) that users might want to edit when making their own sets. https://github.com/snap-research/NeROIC/blob/e535d50077ab57c2f39ae55543c8543793857676/dataset/llff.py#L243 When I am handling masks in their own separate folder, should they just be b/w images out of any number of salient-object-matting repos, or images specifically from remove.bg of a separate size (to then use bicubic filtering on), only in the alpha, and containing an entire unused image?

    There are no technological limitations on making a dataset renderable and testable in instant-ngp, meshable in nvdiffrec (and this repo), optimizable in AdaNeRF and R2L, and still created from a video shot on a phone. If you're planning on making a dataset-creation guide, please don't use remove.bg filenames, don't use new .db files, don't use non-user-readable lists of files (it takes one extra line to parse a .txt file), and support traditional b/w masks. All that's needed is /images, /masks, imgs.txt, and a poses.npy (pts seems to be there to build a bounding box and isn't in all your example sets).

    Lowering that barrier lets anyone who can run a script make datasets, and it's WHY instant-ngp worked: anyone could try it out with ffmpeg and a script. Forks are being made to test datasets made from my colmap2poses script; if a simple colmap2NeROIC script is needed to read COLMAP data, I can push a more forgiving LLFF dataloader and said script.

    opened by Sazoji 2
  • Inconsistent hyperparameters reported in the paper and configuration files


    I found some hyperparameters in the configuration files inconsistent with those reported in the original paper. For example, lambda_tr is 1 in the config but 0.01 in the paper. Which one should I refer to?

    opened by jingsenzhu 1
  • Question on Foreground Masks


    The masks in the provided data seem perfect. I am working on my own data and wondering whether it can work with noisy foreground masks from a saliency detection model, or whether the foreground masks have to be perfect. Thanks!

    opened by jianweif 1
  • TypeError: __init__() got an unexpected keyword argument 'every_n_val_epochs'


    Traceback (most recent call last):
      File "train.py", line 341, in <module>
        train()
      File "train.py", line 313, in train
        checkpoint_callback = ModelCheckpoint(dirpath=os.path.join(args.basedir, args.expname),
    TypeError: __init__() got an unexpected keyword argument 'every_n_val_epochs'
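This error means the installed pytorch-lightning does not accept the every_n_val_epochs argument (later releases renamed it to every_n_epochs). One version-agnostic workaround is to filter keyword arguments against the callable's signature; sketched here with a stand-in function rather than the real ModelCheckpoint so it stays self-contained:

```python
import inspect

def supported_kwargs(fn, **kwargs):
    """Keep only the keyword arguments that fn actually accepts,
    so one call site works across library versions."""
    params = inspect.signature(fn).parameters
    return {k: v for k, v in kwargs.items() if k in params}

# Stand-in for pytorch_lightning's ModelCheckpoint constructor.
def make_checkpoint(dirpath, every_n_epochs=1):
    return (dirpath, every_n_epochs)

args = supported_kwargs(make_checkpoint,
                        dirpath="logs",
                        every_n_val_epochs=1,  # dropped: not accepted here
                        every_n_epochs=1)      # kept
ckpt = make_checkpoint(**args)
```

The simpler fix is to pin the pytorch-lightning version the repo was developed against, or rename the argument to match your installed version.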

    opened by Michaelwhite34 0
  • Training stuck


    Hi,

    I'm running it on my own dataset using two GPUs and it gets stuck like this: (screenshot attached)

    Any suggestions?

    btw, here https://github.com/snap-research/NeROIC/tree/master/scripts it shouldn't be

    cd utils/data_preproccess
    

    It should be

    cd scripts
    

    instead.

    opened by SizheAn 0
  • Are there any model files like .obj files generated during the training or testing procedure?


    Thank you for this great work!

    I am wondering whether any model files such as .obj files are generated during the training or testing procedure. I only found some .png files and .mp4 files in the logs and results directories, respectively. I have executed all steps of Quick Start and Testing in README.md.

    Your help is greatly appreciated!

    opened by Joe618 0
  • Help! While running the code, an error occurred.


    While running python train.py --config configs/milkbox_geometry.yaml --datadir ./data/milkbox_dataset, an error occurs: AttributeError: module 'keras.backend' has no attribute 'is_tensor'. How do I fix this? Thanks!

    opened by sdasdasddaaaa 1
  • Results not as good on custom dataset


    Hello! I have been training the network on my own data and the rendering results are much worse than those from training on the provided datasets. I trained on a dataset of 53 images and kept the same parameters in the config files, changing only the image size and name, of course.

    One of the input images, the result images taken from the logs folder, and a visualization of the dense output from COLMAP are attached as screenshots.

    What could be the possible reasons for such results?

    opened by anna-debug 1
  • Performs badly on validation set


    I used the milkbox dataset to train the model and the training PSNR is about 29. However, the images synthesized during validation are nearly blank with a little noise, and the PSNR is extremely low. Is it overfitting?

    I just ran it like this for the geometry model:

    python train.py --config configs/milkbox_geometry.yaml --datadir data\milkbox_dataset
    
    opened by Choconuts 6
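For reference on the PSNR numbers quoted above: PSNR is just a log-scaled mean squared error, so a nearly blank render against a textured ground truth gives a very low value. A minimal sketch (illustrative, not the repo's metric code):

```python
import numpy as np

def psnr(pred, gt, max_val=1.0):
    """Peak signal-to-noise ratio: 10 * log10(max_val**2 / MSE)."""
    mse = np.mean((pred - gt) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

gt = np.random.default_rng(0).random((8, 8, 3))
blank = np.zeros_like(gt)      # a "nearly blank" validation render
print(psnr(gt * 0.99, gt))     # close render: high PSNR (tens of dB)
print(psnr(blank, gt))         # blank render: very low PSNR
```

A training PSNR near 29 alongside near-zero validation PSNR is consistent with overfitting, but it can also indicate bad validation poses rather than a bad model.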
Owner
Snap Research
The source code of the ICCV2021 paper "PIRenderer: Controllable Portrait Image Generation via Semantic Neural Rendering"


Ren Yurui 261 Jan 9, 2023
The source code of the ICCV2021 paper "PIRenderer: Controllable Portrait Image Generation via Semantic Neural Rendering"

Website | ArXiv | Get Start | Video PIRenderer The source code of the ICCV2021 paper "PIRenderer: Controllable Portrait Image Generation via Semantic

Ren Yurui 81 Sep 25, 2021
Code for Deterministic Neural Networks with Appropriate Inductive Biases Capture Epistemic and Aleatoric Uncertainty

Deep Deterministic Uncertainty This repository contains the code for Deterministic Neural Networks with Appropriate Inductive Biases Capture Epistemic

Jishnu Mukhoti 69 Nov 28, 2022
Expressive Body Capture: 3D Hands, Face, and Body from a Single Image

Expressive Body Capture: 3D Hands, Face, and Body from a Single Image [Project Page] [Paper] [Supp. Mat.] Table of Contents License Description Fittin

Vassilis Choutas 1.3k Jan 7, 2023
Generative Query Network (GQN) in PyTorch as described in "Neural Scene Representation and Rendering"

Update 2019/06/24: A model trained on 10% of the Shepard-Metzler dataset has been added, the following notebook explains the main features of this mod

Jesper Wohlert 313 Dec 27, 2022
Tools to create pixel-wise object masks, bounding box labels (2D and 3D) and 3D object model (PLY triangle mesh) for object sequences filmed with an RGB-D camera.

Tools to create pixel-wise object masks, bounding box labels (2D and 3D) and 3D object model (PLY triangle mesh) for object sequences filmed with an RGB-D camera. This project prepares training and testing data for various deep learning projects such as 6D object pose estimation projects singleshotpose, as well as object detection and instance segmentation projects.

null 305 Dec 16, 2022
EMNLP 2021 Adapting Language Models for Zero-shot Learning by Meta-tuning on Dataset and Prompt Collections

Adapting Language Models for Zero-shot Learning by Meta-tuning on Dataset and Prompt Collections Ruiqi Zhong, Kristy Lee*, Zheng Zhang*, Dan Klein EMN

Ruiqi Zhong 42 Nov 3, 2022
A Blender python script for getting asset browser custom preview images for objects and collections.

asset_snapshot A Blender python script for getting asset browser custom preview images for objects and collections. Installation: Click the code butto

Johnny Matthews 44 Nov 29, 2022
Code for "Layered Neural Rendering for Retiming People in Video."

Layered Neural Rendering in PyTorch This repository contains training code for the examples in the SIGGRAPH Asia 2020 paper "Layered Neural Rendering

Google 154 Dec 16, 2022
Neural Re-rendering for Full-frame Video Stabilization

NeRViS: Neural Re-rendering for Full-frame Video Stabilization Project Page | Video | Paper | Google Colab Setup Setup environment for [Yu and Ramamoo

Yu-Lun Liu 9 Jun 17, 2022
Official repo for AutoInt: Automatic Integration for Fast Neural Volume Rendering in CVPR 2021

AutoInt: Automatic Integration for Fast Neural Volume Rendering CVPR 2021 Project Page | Video | Paper PyTorch implementation of automatic integration

Stanford Computational Imaging Lab 149 Dec 22, 2022
This repository contains the source code for the paper "DONeRF: Towards Real-Time Rendering of Compact Neural Radiance Fields using Depth Oracle Networks",

DONeRF: Towards Real-Time Rendering of Compact Neural Radiance Fields using Depth Oracle Networks Project Page | Video | Presentation | Paper | Data L

Facebook Research 281 Dec 22, 2022
A curated list of neural rendering resources.

Awesome-of-Neural-Rendering A curated list of neural rendering and related resources. Please feel free to pull requests or open an issue to add papers

Zhiwei ZHANG 43 Dec 9, 2022
Volsdf - Volume Rendering of Neural Implicit Surfaces

Volume Rendering of Neural Implicit Surfaces Project Page | Paper | Data This re

Lior Yariv 221 Jan 7, 2023
This is the official implementation code repository of Underwater Light Field Retention : Neural Rendering for Underwater Imaging (Accepted by CVPR Workshop2022 NTIRE)

Underwater Light Field Retention : Neural Rendering for Underwater Imaging (UWNR) (Accepted by CVPR Workshop2022 NTIRE) Authors: Tian Ye†, Sixiang Che

jmucsx 17 Dec 14, 2022
ViewFormer: NeRF-free Neural Rendering from Few Images Using Transformers

ViewFormer: NeRF-free Neural Rendering from Few Images Using Transformers Official implementation of ViewFormer. ViewFormer is a NeRF-free neural rend

Jonáš Kulhánek 169 Dec 30, 2022
Automatically creates genre collections for your Plex media

Plex Auto Genres Plex Auto Genres is a simple script that will add genre collection tags to your media making it much easier to search for genre speci

Shane Israel 63 Dec 31, 2022
PyTorch implementation of "Representing Shape Collections with Alignment-Aware Linear Models" paper.

deep-linear-shapes PyTorch implementation of "Representing Shape Collections with Alignment-Aware Linear Models" paper. If you find this code useful i

Romain Loiseau 27 Sep 24, 2022