Regularizing Nighttime Weirdness: Efficient Self-supervised Monocular Depth Estimation in the Dark (ICCV 2021)


Kun Wang, Zhenyu Zhang, Zhiqiang Yan, Xiang Li, Baobei Xu, Jun Li and Jian Yang

PCA Lab, Nanjing University of Science and Technology; Tencent YouTu Lab; Hikvision Research Institute

Introduction

This is the official repository for Regularizing Nighttime Weirdness: Efficient Self-supervised Monocular Depth Estimation in the Dark. You can find our paper on arXiv. In this repository, we release the training and testing code, as well as the data split files of RobotCar-Night and nuScenes-Night.


Dependency

  • python>=3.6
  • torch>=1.7.1
  • torchvision>=0.8.2
  • mmcv>=1.3
  • pytorch-lightning>=1.4.5
  • opencv-python>=3.4
  • tqdm>=4.53
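
If you work in a pip-based environment, one way to satisfy this list is the following (the version pins are the stated minimums, not a tested combination):

pip install "torch>=1.7.1" "torchvision>=0.8.2" "mmcv>=1.3" "pytorch-lightning>=1.4.5" "opencv-python>=3.4" "tqdm>=4.53"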

Dataset

The dataset used in our work is based on RobotCar and nuScenes. Please visit their official websites to download the data. We only used part of these datasets; if you just want to run the code, the sequences 2014-12-16-18-44-24 and 2014-12-09-13-21-02 of RobotCar and Packages 01, 02, 05, 09 and 10 of nuScenes are enough. To produce the ground truth depth, you can use the official toolboxes; a generic sketch of the projection step involved is given after the directory layout below. After preparing the datasets, we strongly recommend organizing the directory structure as follows. The split files are provided in split_files/.

RobotCar-Night root directory
|__Package name (e.g. 2014-12-16-18-44-24)
   |__depth (to store the .npy ground truth depth maps)
      |__ground truth depth files
   |__rgb (to store the .png color images)
      |__color image files
   |__intrinsic.npy (to store the camera intrinsics)
   |__test_split.txt (to store the test samples)
   |__train_split.txt (to store the train samples)

nuScenes-Night root directory
|__sequences (to store sequence data)
   |__video clip number (e.g. 00590cbfa24a430a8c274b51e1c71231)
      |__file_list.txt (to store the image file names in this video clip)
      |__intrinsic.npy (to store the camera intrinsic of this video clip)
      |__image files described in file_list.txt
|__splits (to store split files)
   |__split files with name (day/night)_(train/test)_split.txt
|__test
   |__color (to store color images for testing)
   |__gt (to store ground truth depth maps corresponding to the color images)
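
Below is the minimal sketch referred to above: projecting a LiDAR point cloud already expressed in camera coordinates into a sparse depth map. It assumes a 3x3 pinhole intrinsic matrix and illustrates the geometry only; it is not the official toolboxes' code.

import numpy as np

def points_to_depth(points_cam, K, h, w):
    # points_cam: (N, 3) LiDAR points already in camera coordinates.
    z = points_cam[:, 2]
    pts = points_cam[z > 1e-3]               # keep points in front of the camera
    uv = (K @ pts.T).T                       # project with the 3x3 intrinsic matrix
    uv = uv[:, :2] / uv[:, 2:3]              # perspective divide
    u = np.round(uv[:, 0]).astype(np.int64)
    v = np.round(uv[:, 1]).astype(np.int64)
    z = pts[:, 2]
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    u, v, z = u[inside], v[inside], z[inside]
    depth = np.zeros((h, w), dtype=np.float32)
    order = np.argsort(-z)                   # write far-to-near so the nearest point wins
    depth[v[order], u[order]] = z[order]
    return depth

Saving the result with np.save, using the corresponding image's name, matches the depth/ layout above.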

Note: you also need to configure the dataset paths in datasets/common.py. The original resolution of nuScenes is too high, so we halve it during training.
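
For reference, halving a frame should go hand in hand with scaling the intrinsics. A minimal sketch, assuming intrinsic.npy holds a 3x3 pinhole matrix and using placeholder paths (the released code may handle this internally):

import cv2
import numpy as np

img = cv2.imread("sequences/<clip>/example.png")    # placeholder path
h, w = img.shape[:2]
half = cv2.resize(img, (w // 2, h // 2), interpolation=cv2.INTER_AREA)

K = np.load("sequences/<clip>/intrinsic.npy")       # assumed 3x3 matrix
K_half = K.copy()
K_half[:2, :] *= 0.5                                # fx, fy, cx, cy scale with image size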

Training

Our model is trained with Distributed Data Parallel (DDP) as supported by PyTorch Lightning. You can train an RNW model on one dataset in the following two steps:

  1. Train a self-supervised model on the daytime data:

    python train.py mono2_(rc/ns)_day number_of_your_gpus
  2. Train RNW:

    python train.py rnw_(rc/ns) number_of_your_gpus

Here rc and ns select the RobotCar-Night and nuScenes-Night configurations, respectively. Since there is no evaluation split, checkpoints are saved every two epochs.
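
For reference, the two-epoch checkpointing can be expressed in PyTorch Lightning roughly as follows (a sketch, not the repository's exact configuration; the callback API assumes pytorch-lightning>=1.4.5 as listed above):

from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint

# With no validation split there is no metric to monitor, so save
# unconditionally every two epochs and keep every checkpoint.
checkpoint_cb = ModelCheckpoint(every_n_epochs=2, save_top_k=-1)
trainer = Trainer(gpus=2, accelerator="ddp", callbacks=[checkpoint_cb])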

Testing

You can run the following commands to test on RobotCar-Night:

python test_robotcar_disp.py day/night config_name checkpoint_path
cd evaluation
python eval_robotcar.py day/night

To test on nuScenes-Night, you can run:

python test_nuscenes_disp.py day/night config_name checkpoint_path
cd evaluation
python eval_nuscenes.py day/night

Alternatively, you can use the scripts batch_eval_robotcar.py and batch_eval_nuscenes.py to run the above commands automatically.
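
The eval scripts report the standard monocular-depth metrics. The sketch below shows how such metrics are typically computed (a generic illustration, not the repository's exact evaluation code; the depth range and the median scaling are assumptions):

import numpy as np

def depth_metrics(gt, pred, min_depth=1e-3, max_depth=60.0):
    mask = (gt > min_depth) & (gt < max_depth)
    gt, pred = gt[mask], pred[mask]
    pred = pred * np.median(gt) / np.median(pred)  # median scaling for scale-ambiguous models
    pred = np.clip(pred, min_depth, max_depth)
    thresh = np.maximum(gt / pred, pred / gt)
    return {
        "abs_rel": float(np.mean(np.abs(gt - pred) / gt)),
        "rmse": float(np.sqrt(np.mean((gt - pred) ** 2))),
        "a1": float((thresh < 1.25).mean()),
    }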

Citation

If you find our work useful, please consider citing our paper:

@InProceedings{Wang_2021_ICCV,
    author    = {Wang, Kun and Zhang, Zhenyu and Yan, Zhiqiang and Li, Xiang and Xu, Baobei and Li, Jun and Yang, Jian},
    title     = {Regularizing Nighttime Weirdness: Efficient Self-Supervised Monocular Depth Estimation in the Dark},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {16055-16064}
}