Code for "Modeling Indirect Illumination for Inverse Rendering", CVPR 2022

Overview

Modeling Indirect Illumination for Inverse Rendering

Project Page | Paper | Data

Preparation

  • Set up the Python environment
conda create -n invrender python=3.7
conda activate invrender

pip install -r requirement.txt
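
To sanity-check the environment (assuming the requirements pin a CUDA-enabled PyTorch build, which the training commands below rely on):

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"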

Run the code

Training

Taking the hotdog scene as an example, the training process is as follows.

  1. Optimize the geometry and outgoing radiance field from multi-view images (same as IDR).

    cd code
    python training/exp_runner.py --conf confs_sg/default.conf \
                                  --data_split_dir ../Synthetic4Relight/hotdog \
                                  --expname hotdog \
                                  --trainstage IDR \
                                  --gpu 1
  2. Sample rays above the surface points to train the indirect illumination and visibility MLPs (see the sketch after this list).

    python training/exp_runner.py --conf confs_sg/default.conf \
                                  --data_split_dir ../Synthetic4Relight/hotdog \
                                  --expname hotdog \
                                  --trainstage Illum \
                                  --gpu 1
  3. Jointly optimize diffuse albedo, roughness and direct illumination.

    python training/exp_runner.py --conf confs_sg/default.conf \
                                  --data_split_dir ../Synthetic4Relight/hotdog \
                                  --expname hotdog \
                                  --trainstage Material \
                                  --gpu 1
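
To make step 2 concrete, below is a minimal, self-contained PyTorch sketch of the visibility-MLP idea: a small network maps a surface point and a ray direction to a visibility value, supervised by tracing rays against the geometry from step 1. The network shape, the trace_occlusion stand-in, and the sampling are illustrative assumptions, not the repo's actual code.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class VisibilityMLP(nn.Module):
        def __init__(self, hidden=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(6, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 1), nn.Sigmoid(),  # visibility in [0, 1]
            )

        def forward(self, points, dirs):
            # Concatenate a 3D surface point with a 3D ray direction.
            return self.net(torch.cat([points, dirs], dim=-1))

    def trace_occlusion(points, dirs):
        # Stand-in for sphere tracing the geometry from step 1: returns 1
        # where the ray escapes the scene (visible), 0 where it re-hits the
        # surface (occluded). The rule below is a dummy for illustration.
        return (dirs[..., 2:3] > 0).float()

    model = VisibilityMLP()
    opt = torch.optim.Adam(model.parameters(), lr=5e-4)

    points = torch.rand(1024, 3)                      # sampled surface points
    dirs = F.normalize(torch.randn(1024, 3), dim=-1)  # sampled ray directions
    target = trace_occlusion(points, dirs)

    loss = F.binary_cross_entropy(model(points, dirs), target)
    opt.zero_grad(); loss.backward(); opt.step()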

Relighting

  • Generate videos under novel illumination.

    python scripts/relight.py --conf confs_sg/default.conf \
                              --data_split_dir ../Synthetic4Relight/hotdog \
                              --expname hotdog \
                              --timestamp latest \
                              --gpu 1
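
Relighting amounts to swapping in a new environment light. Lighting here is represented with Spherical Gaussians (following PhySG); below is a minimal NumPy sketch of evaluating such an environment map in arbitrary directions. The function and parameter names are illustrative, not the repo's API.

    import numpy as np

    def eval_sg_envmap(dirs, lobes, lambdas, mus):
        """dirs: (N, 3) unit query directions; lobes: (K, 3) unit lobe axes;
        lambdas: (K,) lobe sharpness; mus: (K, 3) RGB lobe amplitudes."""
        # Each lobe is G_k(w) = mu_k * exp(lambda_k * (w . xi_k - 1));
        # the environment radiance is the sum over the K lobes.
        cos = dirs @ lobes.T                              # (N, K)
        weights = np.exp(lambdas[None, :] * (cos - 1.0))  # (N, K)
        return weights @ mus                              # (N, 3) radiance

    # Example: one white lobe pointing up, queried at two directions.
    dirs = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, -1.0]])
    radiance = eval_sg_envmap(dirs,
                              lobes=np.array([[0.0, 0.0, 1.0]]),
                              lambdas=np.array([10.0]),
                              mus=np.array([[1.0, 1.0, 1.0]]))
    print(radiance)  # bright along the lobe axis, near zero opposite it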

Citation

@inproceedings{zhang2022invrender,
  title={Modeling Indirect Illumination for Inverse Rendering},
  author={Zhang, Yuanqing and Sun, Jiaming and He, Xingyi and Fu, Huan and Jia, Rongfei and Zhou, Xiaowei},
  booktitle={CVPR},
  year={2022}
}

Acknowledgements: part of our code is inherited from IDR and PhySG. We are grateful to the authors for releasing their code.

Comments
  • How can I calculate the final rendered image of the Material model?

    Hello,

    I want to calculate the final rendered image of the Material model, so I referred to the code below and computed it as in https://github.com/zju3dv/InvRender/blob/d9e13d8e5337e4df363238fddf90f2038e792e7c/code/model/loss.py#L101 Is this correct?

    Thank you

    opened by JiyouSeo 0
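
    For reference, the final color in SG-based inverse rendering is typically composed as a Lambertian diffuse term plus a specular term. A minimal NumPy sketch of that composition, using illustrative names rather than the repo's actual tensors:

        import numpy as np

        def compose_rgb(albedo, diffuse_irradiance, specular_rgb):
            # Lambertian diffuse (albedo / pi, scaled by the incident
            # irradiance) plus the specular contribution; clip for display.
            return np.clip(albedo / np.pi * diffuse_irradiance + specular_rgb,
                           0.0, 1.0)

        # Example with per-pixel (H, W, 3) arrays:
        h, w = 4, 4
        rgb = compose_rgb(np.full((h, w, 3), 0.5),   # diffuse albedo
                          np.full((h, w, 3), 2.0),   # diffuse irradiance
                          np.zeros((h, w, 3)))       # specular term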
  • Quantitative results.

    Thanks for your inspiring work.

    I have finished training the whole framework and would like to evaluate the model to reproduce the quantitative results reported in the paper. Could you provide the test code for evaluation?

    opened by LZleejean 0
  • Real-captured datasets

    First, thanks to the authors for their impressive and inspiring work! While reading the paper, I noticed that it shows some examples of relighting results on real-captured data. Could the authors kindly release the dataset used, or the method used to create it? (The only released data is synthetic; if the authors could also release the scripts for rendering the synthetic data, that would be perfect.) Thanks again!

    opened by 78ij 0
  • In Eq. 3, how does the approximation hold?

    Thanks for the excellent work! On the left side of Eq. 3, visibility depends on the light direction, but on the right side a light-direction-independent constant lambda is multiplied with mu. Visibility is known to vary with light direction, yet Eq. 3 applies a single visibility value across all directions of the Gaussian. Is this a proper approximation?

    opened by yihua7 0
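
    For context, the approximation in question can be written out as follows (notation assumed from the SG literature, not copied from the paper): a Spherical Gaussian lobe concentrates its energy around its axis, so the direction-dependent visibility inside the integral is replaced by a single per-lobe constant,

        \int_{\Omega} V(\omega_i)\, G(\omega_i; \xi, \lambda, \mu)\, d\omega_i
          \approx \bar{V} \int_{\Omega} G(\omega_i; \xi, \lambda, \mu)\, d\omega_i,
        \qquad G(\omega_i; \xi, \lambda, \mu) = \mu\, e^{\lambda(\omega_i \cdot \xi - 1)}

    The approximation is tightest for sharp lobes (large lambda), where G is negligible away from xi and visibility varies little over the lobe's effective support.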
  • How to convert these outputs for use in a rendering engine like Blender or UE?

    Thanks for your great work! I now have the output mesh (.obj/.ply) and normal map (.png) from one perspective. Can you provide some suggestions for converting these files into assets for Blender or UE?

    opened by EricStuart 0