Source code for the ICCV 2021 paper "In-the-Wild Single Camera 3D Reconstruction Through Moving Water Surfaces"

Overview

In-the-Wild Single Camera 3D Reconstruction
Through Moving Water Surfaces

This is the PyTorch implementation of the ICCV 2021 paper "In-the-Wild Single Camera 3D Reconstruction Through Moving Water Surfaces".

Project Page | Paper | Supplemental Material

In-the-Wild Single Camera 3D Reconstruction Through Moving Water Surfaces
Jinhui Xiong, Wolfgang Heidrich
KAUST
ICCV 2021 (Oral)

We propose a differentiable framework that jointly estimates the underwater scene geometry and the time-varying water surface. The input to our model is a video sequence captured by a fixed camera. Dense correspondences from each frame to a world reference frame (selected from the input sequence) are pre-computed, ensuring that the reconstruction is performed in a unified coordinate system. We feed the flow fields, together with the water surfaces and scene geometry (all initialized as planar surfaces), into the framework, which incorporates ray casting, Snell's law and multi-view triangulation. The gradients of the specially designed losses with respect to the water surfaces and scene geometry are back-propagated, and all parameters are optimized simultaneously. The final result is a high-quality reconstruction of the underwater scene, together with an estimate of the time-varying water-air interface. The data shown here was captured in a public fountain environment.
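As a concrete illustration of the refraction step (a minimal sketch under the standard vector convention, not the repository's actual code), Snell's law can be written as a differentiable PyTorch operation, so gradients reach the water-surface parameters:

import torch

def refract(d, n, eta):
    # Vector form of Snell's law. d: unit incident directions; n: unit surface
    # normals pointing toward the incident (air) side; eta = n_air / n_water.
    # Shapes (..., 3). Every step is differentiable w.r.t. both d and n.
    cos_i = -(d * n).sum(dim=-1, keepdim=True)           # cosine of incidence angle
    sin2_t = eta ** 2 * (1.0 - cos_i ** 2)               # squared sine of refraction angle
    cos_t = torch.sqrt((1.0 - sin2_t).clamp(min=1e-8))   # clamp guards total internal reflection
    return eta * d + (eta * cos_i - cos_t) * n           # unit-length refracted directions

Camera rays bent by this rule are intersected with the scene geometry and related across frames by the flow fields; because the whole chain is differentiable, the losses can be back-propagated to both the water surface and the geometry.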

Prerequisites

The code was tested with Python >= 3.7, PyTorch >= 1.3 and CUDA >= 10.0 on an Nvidia RTX 2080 Ti.
Minor changes to the code may be needed if you run into compatibility issues. Running the optimization requires around 10 GB of GPU memory.

Setup

conda create -n moving_water python=3.7
conda activate moving_water

conda install pytorch torchvision -c pytorch
conda install -c conda-forge opencv scikit-image
conda install -c anaconda scipy
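
Optionally, verify the install before running anything:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"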

Run the code

Please go to the example folder, download the cached coefficient matrices (there are three matrices for each example) and execute:

python3 run.py

Citation

@inproceedings{xiong2021inthewild,
  title={In-the-Wild Single Camera 3D Reconstruction Through Moving Water Surfaces},
  author={Jinhui Xiong and Wolfgang Heidrich},
  year={2021},
  booktitle={ICCV}
}

Contact

Please contact Jinhui Xiong ([email protected]) if you have any questions or comments.


Comments
  • How to get fine optical flow

    I tried generating the optical flow for example 1 using RAFT with its pretrained weights, but the result was not good. Did you use the pretrained model directly for optical flow, or did you fine-tune it on a specific dataset?

    opened by JamesYang-7
  • How to generate cubic B-spline coefficients and confidence masks

    Hi Jinhui,

    Thanks for sharing such a good paper. I have some questions after reading the paper and checking the code:

    1. The paper mentions that the cubic B-spline coefficients and confidence masks were pre-computed. Could you give me some suggestions on how to generate them? Or are parts of them generated randomly?
    2. I checked the 'uv_com_100.npy' file and found that it is 240 * 128, while the paper mentions that the 1080p video is downsampled by 8 to 240 * 135. Why is 128 different from 135? Maybe I missed some extra steps.

    Thanks a lot for your help. Best regards, Tiancheng

    opened by GaryGuTC
  • Degree of the B-spline surface

    Hi Jinhui,

    I see that you provide the coefficient matrices. Could you please tell us the degree of the B-spline surface (the degrees in the u and v directions)? As you know, the degree of the B-spline surface is needed to calculate the coefficient matrices.

    Cheers, Jiahao

    opened by Jiahao-Ma
  • Formula derivation and understanding

    Hi Jinhui,

    Thanks for sharing such great work! While reading your paper, two points confused me.

    First, given the surface normal and the incident ray, how can we compute the refracted ray based on Snell's law? The paper only gives the result and omits the intermediate steps; could you provide more information about the derivation, or point to relevant material?

    Also, for the distance loss, why do we multiply by the refracted ray twice? Is the refracted ray a unit vector at this point?

    Your answers are very important to me and I look forward to hearing from you!

    Cheers, Jiahao

    opened by Jiahao-Ma
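
On the optical-flow question above: the flow pipeline used by the authors is not documented here, but as a starting point, dense flow from each frame to the reference frame can be computed with torchvision's pretrained RAFT (a hedged sketch; variable names are placeholders, and any fine-tuning is left to the reader):

import torch
from torchvision.models.optical_flow import raft_large, Raft_Large_Weights

weights = Raft_Large_Weights.DEFAULT
transforms = weights.transforms()            # normalizes image pairs for RAFT
model = raft_large(weights=weights).eval()

# frame, reference: float tensors of shape (N, 3, H, W) in [0, 1],
# with H and W divisible by 8.
frame, reference = transforms(frame, reference)
with torch.no_grad():
    flow_predictions = model(frame, reference)   # list of iterative refinements
flow = flow_predictions[-1]                      # final flow field, (N, 2, H, W)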
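
On the two B-spline questions above: the repository's exact precomputation is not shown here, but a basis (coefficient) matrix for a clamped B-spline can be sketched with SciPy. Assuming degree 3 in both u and v, the tensor-product surface matrix is the Kronecker product of the two 1D matrices (all sizes below are hypothetical):

import numpy as np
from scipy.interpolate import splev

def bspline_basis_matrix(n_ctrl, n_samples, degree=3):
    # Clamped (open-uniform) knot vector of length n_ctrl + degree + 1.
    knots = np.concatenate([np.zeros(degree),
                            np.linspace(0.0, 1.0, n_ctrl - degree + 1),
                            np.ones(degree)])
    x = np.linspace(0.0, 1.0, n_samples)
    B = np.empty((n_samples, n_ctrl))
    for j in range(n_ctrl):
        c = np.zeros(n_ctrl)
        c[j] = 1.0                                # isolate the j-th basis function
        B[:, j] = splev(x, (knots, c, degree))
    return B

B_u = bspline_basis_matrix(n_ctrl=32, n_samples=240)
B_v = bspline_basis_matrix(n_ctrl=18, n_samples=135)
A = np.kron(B_u, B_v)   # maps flattened control points to flattened surface samples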
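
On the formula-derivation question above: in the standard vector form of Snell's law (a generic convention, not necessarily the paper's exact notation), with unit incident direction \mathbf{d}, unit normal \mathbf{n} on the incident side, and ratio \eta = n_1/n_2:

\cos\theta_i = -\mathbf{n}\cdot\mathbf{d}, \qquad
\cos\theta_t = \sqrt{1 - \eta^2\left(1 - \cos^2\theta_i\right)}, \qquad
\mathbf{t} = \eta\,\mathbf{d} + \left(\eta\cos\theta_i - \cos\theta_t\right)\mathbf{n}.

Expanding \lVert\mathbf{t}\rVert^2 with \mathbf{d}\cdot\mathbf{n} = -\cos\theta_i gives \eta^2\sin^2\theta_i + \cos^2\theta_t = 1, so the refracted direction is indeed a unit vector under this convention, which addresses the second part of the question.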