NeLF: Neural Light-transport Field for Single Portrait View Synthesis and Relighting

Official PyTorch Implementation of paper "NeLF: Neural Light-transport Field for Single Portrait View Synthesis and Relighting", EGSR 2021.

Tiancheng Sun1*, Kai-En Lin1*, Sai Bi2, Zexiang Xu2, Ravi Ramamoorthi1

1University of California, San Diego, 2Adobe Research

*Equal contribution

Project Page | Paper | Pretrained models | Validation data | Rendering script

Requirements

Install required packages

Make sure you have up-to-date NVIDIA drivers that support CUDA 11.1 (CUDA 10.2 may also work, but the cudatoolkit package version needs to be changed accordingly)

Run

conda env create -f environment.yml
conda activate pixelnerf

The following packages are used:

  • PyTorch (tested with 1.7 and 1.9.0)

  • OpenCV-Python

  • matplotlib

  • numpy

  • tqdm

OS: Ubuntu 20.04
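
Once the environment is activated, a minimal sanity check (not part of the original instructions) can confirm that the tested packages import and that CUDA is visible:

# Minimal environment sanity check -- verifies the core packages and CUDA.
import torch
import cv2
import numpy as np

print('PyTorch', torch.__version__, '| OpenCV', cv2.__version__, '| NumPy', np.__version__)
print('CUDA available:', torch.cuda.is_available())  # should be True with CUDA 11.1 drivers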

Download CelebAMask-HQ dataset link

  1. Download the dataset

  2. Remove the background using the masks provided in the dataset

  3. Downsample the images to 512x512 (a preprocessing sketch for these two steps follows the directory tree below)

  4. Store the resulting data in [path_to_data_directory]/CelebAMask

    Following this data structure:

    [path_to_data_directory] --- data --- CelebAMask --- 0.jpg
                                       |              |- 1.jpg
                                       |              |- 2.jpg
                                       |              ...
                                       |- blender_both --- sub001
                                       |                |- sub002
                                       |                ...
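
Steps 2-3 can be scripted with OpenCV. The sketch below is one way to do it; the file names and the single foreground mask are assumptions about how you extracted the dataset (CelebAMask-HQ ships per-part masks, which you may need to combine first), and only the 512x512 target size comes from the instructions above:

# Background removal + downsampling sketch for CelebAMask-HQ.
# File and mask names below are hypothetical -- adapt to your extraction.
import cv2

def preprocess(image_path, mask_path, out_path, size=512):
    image = cv2.imread(image_path)                       # HxWx3 BGR image
    mask = cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE)   # foreground mask
    mask = cv2.resize(mask, (image.shape[1], image.shape[0]),
                      interpolation=cv2.INTER_NEAREST)
    image[mask == 0] = 0                                 # zero out background
    image = cv2.resize(image, (size, size), interpolation=cv2.INTER_AREA)
    cv2.imwrite(out_path, image)

preprocess('CelebA-HQ-img/0.jpg', 'masks/0.png', 'data/CelebAMask/0.jpg')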
    
    

(Optional) Download and render FaceScape dataset link

Due to FaceScape's license, we cannot release the full dataset. Instead, we release our rendering script.

  1. Download the dataset

  2. Install Blender link

  3. Run rendering script link
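
Blender can run such a script headlessly from the command line; the script name below is a placeholder for the released rendering script:

blender --background --python [rendering_script].py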

Usage

Testing

  1. Download our pretrained checkpoint and testing data. Extract the content to [path_to_data_directory]. The data structure should look like this:

    [path_to_data_directory] --- data --- CelebAMask
                              |        |- blender_both
                              |        |- blender_view
                              |        ...
                              |- data_results --- nelf_ft
                              |- data_test --- validate_0
                                            |- validate_1
                                            |- validate_2
    
  2. In arg/__init__.py, set up the data path by changing base_path (see the sketch after these steps)

  3. Run python run_test.py nelf_ft [validation_data_name] [#iteration_for_the_model]

    e.g. python run_test.py nelf_ft validate_0 500000

  4. The results are stored in [path_to_data_directory]/data_test/[validation_data_name]/results
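
For reference, the base_path change in step 2 is a single assignment; the variable name comes from the steps above, while the value and the exact surrounding code in arg/__init__.py will differ on your machine:

# In arg/__init__.py -- point base_path at your data root (example value):
base_path = '/home/user/data_directory'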

Training

Due to FaceScape's license, we are not allowed to release the full dataset, so the following example uses the validation data.

  1. Download our validation data. Extract the content to [path_to_data_directory]. The data structure should look like this:

    [path_to_data_directory] --- data --- CelebAMask
                              |        |- blender_both
                              |        |- blender_view
                              |        ...
                              |- data_results --- nelf_ft
                              |- data_test --- validate_0
                                            |- validate_1
                                            |- validate_2
    

    (Optional) Run the rendering script to render your own data.

    Remember to change lines 35-42 and lines 45-46 in arg/config_nelf_ft.py accordingly.

  2. In arg/__init__.py, set up the data path by changing base_path (as in the Testing section)

  3. Run python run_train.py nelf_ft

  4. The intermediate results and model checkpoints are saved in [path_to_data_directory]/data_results/nelf_ft
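
To pick an iteration number to pass to run_test.py, one option is to list the saved checkpoints. This sketch assumes the checkpoint file names in data_results/nelf_ft embed the iteration number; verify that against your actual training output:

# Hypothetical helper: list checkpoint iterations saved during training.
import re
from pathlib import Path

ckpt_dir = Path('/home/user/data_directory/data_results/nelf_ft')
iterations = sorted(int(m.group(1)) for p in ckpt_dir.glob('*')
                    if (m := re.search(r'(\d+)', p.name)))
print('available iterations:', iterations)  # pass the latest to run_test.py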

Configs

The config files can be found inside the arg folder.

Citation

@inproceedings{sun2021nelf,
    booktitle = {Eurographics Symposium on Rendering},
    title = {NeLF: Neural Light-transport Field for Portrait View Synthesis and Relighting},
    author = {Sun, Tiancheng and Lin, Kai-En and Bi, Sai and Xu, Zexiang and Ramamoorthi, Ravi},
    year = {2021},
}