RGB-D Local Implicit Function for Depth Completion of Transparent Objects

[Project Page] [Paper]

Overview

This repository maintains the official implementation of our CVPR 2021 paper:

RGB-D Local Implicit Function for Depth Completion of Transparent Objects

By Luyang Zhu, Arsalan Mousavian, Yu Xiang, Hammad Mazhar, Jozef van Eenbergen, Shoubhik Debnath, Dieter Fox

Requirements

The code has been tested on the following system:

  • Ubuntu 18.04
  • NVIDIA GPU (4 Tesla V100 32GB GPUs) and CUDA 10.2
  • Python 3.7
  • PyTorch 1.6.0

Installation

Docker (Recommended)

We provide a Dockerfile for building a container to run our code. More details about GPU-accelerated Docker containers can be found here.
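
As a minimal sketch (the image tag lidf and the mount point are our choices, not fixed by the repo, and we assume the Dockerfile sits at the repo root), building and starting a GPU-enabled container might look like this:

cd ${REPO_ROOT_DIR}
docker build -t lidf .
docker run --gpus all -it -v ${REPO_ROOT_DIR}:/workspace lidf

The --gpus all flag requires the NVIDIA Container Toolkit to be installed on the host.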

Local Installation

We recommend creating a new conda environment for a clean installation of the dependencies.

conda create --name lidf python=3.7
conda activate lidf

Make sure CUDA 10.2 is your default CUDA toolkit. If CUDA 10.2 is installed in /usr/local/cuda-10.2, add the following lines to your ~/.bashrc and run source ~/.bashrc:

export PATH=$PATH:/usr/local/cuda-10.2/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-10.2/lib64
export CPATH=$CPATH:/usr/local/cuda-10.2/include
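
To confirm that CUDA 10.2 is now the default toolkit on your PATH, you can check the compiler version:

nvcc --version
# should report "release 10.2"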

Install libopenexr-dev

sudo apt-get update && sudo apt-get install libopenexr-dev

Install the dependencies. We use ${REPO_ROOT_DIR} to represent the working directory of this repo:

cd ${REPO_ROOT_DIR}
conda install pytorch==1.6.0 torchvision==0.7.0 cudatoolkit=10.2 -c pytorch
pip install -r requirements.txt
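
As a quick sanity check that PyTorch was installed against CUDA 10.2 and can see the GPU:

python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
# expected output: 1.6.0 10.2 True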

Dataset Preparation

ClearGrasp Dataset

The ClearGrasp dataset can be downloaded from its official website (both the training and testing sets are needed). After downloading and unzipping the files on your local machine, the folder structure should look like this:

${DATASET_ROOT_DIR}
├── cleargrasp
│   ├── cleargrasp-dataset-train
│   ├── cleargrasp-dataset-test-val

Omniverse Object Dataset

The Omniverse Object Dataset can be downloaded here. After downloading and unzipping the files on your local machine, the folder structure should look like this:

${DATASET_ROOT_DIR}
├── omniverse
│   ├── train
│   │   ├── 20200904
│   │   ├── 20200910

Soft-link the datasets

cd ${REPO_ROOT_DIR}
ln -s ${DATASET_ROOT_DIR}/cleargrasp datasets/cleargrasp
ln -s ${DATASET_ROOT_DIR}/omniverse datasets/omniverse
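
You can verify that the links resolve correctly before training or testing:

ls -l datasets/
# datasets/cleargrasp -> ${DATASET_ROOT_DIR}/cleargrasp
# datasets/omniverse -> ${DATASET_ROOT_DIR}/omniverse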

Testing

We provide pretrained checkpoints on Google Drive. After downloading the file, unzip it and copy the checkpoints folder to ${REPO_ROOT_DIR}.
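
For example, assuming the downloaded archive is named checkpoints.zip (the actual filename may differ):

cd ${REPO_ROOT_DIR}
unzip checkpoints.zip
# this should leave a checkpoints/ folder under ${REPO_ROOT_DIR}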

Change the following line in ${REPO_ROOT_DIR}/src/experiments/implicit_depth/run.sh:

# To test first stage model (LIDF), use the following line
cfg_paths=experiments/implicit_depth/test_lidf.yaml
# To test second stage model (refinement model), use the following line
cfg_paths=experiments/implicit_depth/test_refine.yaml
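
If you prefer editing from the command line, a sed one-liner can switch the config (assuming the cfg_paths= assignment starts the line and appears only once in run.sh):

cd ${REPO_ROOT_DIR}/src
sed -i 's|^cfg_paths=.*|cfg_paths=experiments/implicit_depth/test_lidf.yaml|' experiments/implicit_depth/run.sh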

After that, run the testing code:

cd src
bash experiments/implicit_depth/run.sh

Training

First stage model (LIDF)

Change the following line in ${REPO_ROOT_DIR}/src/experiments/implicit_depth/run.sh:

cfg_paths=experiments/implicit_depth/train_lidf.yaml

After that, run the training code:

cd src
bash experiments/implicit_depth/run.sh

Second stage model (refinement model)

In ${REPO_ROOT_DIR}/src/experiments/implicit_depth/train_refine.yaml, set lidf_ckpt_path to the path of the best checkpoint from the first-stage training (a sketch of this entry follows below). Then change the following line in ${REPO_ROOT_DIR}/src/experiments/implicit_depth/run.sh:

cfg_paths=experiments/implicit_depth/train_refine.yaml
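
For reference, the edited entry in train_refine.yaml might look like the line below; the checkpoint path is hypothetical and should point to your own best first-stage checkpoint:

lidf_ckpt_path: checkpoints/lidf/best_model.pth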

After that, run the training code:

cd src
bash experiments/implicit_depth/run.sh

Second stage model (refinement model) with hard negative mining

In ${REPO_ROOT_DIR}/src/experiments/implicit_depth/train_refine_hardneg.yaml, set lidf_ckpt_path to the path of the best checkpoint from the first-stage training, and set checkpoint_path to the path of the best checkpoint from the second-stage training (a sketch of these entries follows below). Then change the following line in ${REPO_ROOT_DIR}/src/experiments/implicit_depth/run.sh:

cfg_paths=experiments/implicit_depth/train_refine_hardneg.yaml
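
For reference, the two edited entries in train_refine_hardneg.yaml might look like the lines below; both paths are hypothetical and should point to your own checkpoints:

lidf_ckpt_path: checkpoints/lidf/best_model.pth
checkpoint_path: checkpoints/refine/best_model.pth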

After that, run the training code:

cd src
bash experiments/implicit_depth/run.sh

License

This work is licensed under NVIDIA Source Code License - Non-commercial.

Citation

If you use this code for your research, please cite our work:

@inproceedings{zhu2021rgbd,
  author    = {Luyang Zhu and Arsalan Mousavian and Yu Xiang and Hammad Mazhar and Jozef van Eenbergen and Shoubhik Debnath and Dieter Fox},
  title     = {RGB-D Local Implicit Function for Depth Completion of Transparent Objects},
  booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2021}
}