[ICCV'21] Neural Radiance Flow for 4D View Synthesis and Video Processing

Overview

NeRFlow

NeRFlow is the code release for the ICCV 2021 paper "Neural Radiance Flow for 4D View Synthesis and Video Processing". It represents a dynamic scene as a 4D radiance field regularized by a learned scene-flow field, and is trained in two stages: a radiance baseline followed by flow-consistency fine-tuning.
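
At a high level, the flow-consistency stage penalizes changes in radiance and density along the predicted scene flow. The following minimal PyTorch sketch illustrates that idea; radiance_field, flow_field, and their signatures are hypothetical stand-ins for illustration, not this repository's actual API.

import torch

def flow_consistency_loss(radiance_field, flow_field, pts, t, dt=0.01):
    # Hypothetical interfaces: radiance_field(x, t) -> (rgb, sigma),
    # flow_field(x, t) -> 3D velocity of point x at time t.
    rgb, sigma = radiance_field(pts, t)
    velocity = flow_field(pts, t)
    pts_fwd = pts + velocity * dt  # advect sample points forward in time
    rgb_fwd, sigma_fwd = radiance_field(pts_fwd, t + dt)
    # Appearance and density should be preserved along the flow.
    return ((rgb - rgb_fwd) ** 2).mean() + ((sigma - sigma_fwd) ** 2).mean()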

Datasets

The pouring dataset used in the experiments can be downloaded here, and the iGibson dataset can be downloaded here.

Pouring Dataset

Download and extract the dataset into data/nerf_synthetic/, then use the following command to train the baseline model:

python run_nerf.py --config=configs/pour_baseline.txt
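
The config is a plain-text file in the standard nerf-pytorch key = value style. As a rough sketch of what configs/pour_baseline.txt might contain (the keys are standard nerf-pytorch options, but these particular values are assumptions; consult the file shipped with the repository):

expname = pour_dataset
basedir = ./logs
datadir = ./data/nerf_synthetic/pouring_dataset
dataset_type = blender
N_samples = 64
N_importance = 128
use_viewdirs = True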

After training the model for 200,000 iterations, move the checkpoint to a new folder pour_dataset_flow, then use the following command to train with flow consistency:

python run_nerf.py --config=configs/pour_baseline_flow.txt
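
Assuming the standard nerf-pytorch checkpoint layout, where training writes numbered .tar checkpoints under basedir/expname, the handoff between the two stages amounts to something like the following (the folder and file names here are illustrative):

mkdir -p logs/pour_dataset_flow
cp logs/pour_dataset/200000.tar logs/pour_dataset_flow/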

Gibson Dataset

Download and extract the dataset into data/nerf_synthetic/, then use the following command to train the baseline model:

python run_nerf.py --config=configs/gibson_baseline.txt

As with the pouring dataset, after training for 200,000 iterations, move the checkpoint to a new folder pour_dataset_flow, then use the following command to train with flow consistency:

python run_nerf.py --config=configs/gibson_baseline_flow.txt
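
This README gives no explicit test command. If run_nerf.py keeps the standard nerf-pytorch interface (an assumption, not confirmed here), renders from a trained model can be produced without further training via the render-only flag:

python run_nerf.py --config=configs/gibson_baseline_flow.txt --render_only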
Comments
  • Dataset understanding

    In the pouring dataset, /nerf_synthetic/test contains a mix of r_x.png and r_x_flow.png files (where x is the index). Should r_x_flow.png be interpreted as an optical flow map? How are these files generated, and where can I find a script to generate them? In the Gibson dataset, /gibson_dataset/gibson_test_100_smaller_range_0_6_0_0_0_4 contains five kinds of data: d_x_flow.npy, d_x.npy, flow_x.npy, r_x.png, and r_x_flow.png. Again, how should these be interpreted, and how are they computed from the raw data (which I assume are RGB and depth images)? Where can I find a script to generate them? Also, what is the data in the render_linear folder?

    TL;DR: What is the structure of each dataset? How should the files be interpreted? Are there scripts to generate them? Thanks in advance!

    opened by Sujie1528 9
  • How to test the model, and why is flow training so slow?

    How do I set the parameters for testing after training the radiance model? And why is training the flow model so slow? On a 3090 Ti, the flow model ran fewer than 10,000 steps in a day.

    opened by elegyforyou 5
  • Training is too slow.

    Hello, I have started training, but there is a problem: training is very slow. The first command ran for 20 hours, and the second, flow-consistency training has also run for more than 20 hours to reach 50% progress, with GPU utilization at only 3%. My configuration is a single Nvidia 3090 Ti GPU. I didn't modify your code; have you ever encountered this situation?

    opened by zih1998 1
  • Can't match the file paths in the pouring dataset

    Sorry to bother you again. I just finished the first phase of training with your kind help and moved on to the next command, python run_nerf.py --config=configs/pour_baseline_flow.txt, and got FileNotFoundError: No such file: '/nerflow/data/nerf_synthetic/pouring_dataset/depth_train/r_1_flow0068.hdr'. Looking into transforms_train.json, I found that the "depth_train_path" entries give filenames that don't match the actual filenames in the depth_train folder. I've attached some screenshots in case I didn't describe the problem clearly.

    opened by zhywanna 13