Density-aware Single Image De-raining using a Multi-stream Dense Network (CVPR 2018)

Overview

DID-MDN

Density-aware Single Image De-raining using a Multi-stream Dense Network

He Zhang, Vishal M. Patel

[Paper Link] (CVPR'18)

We present a novel density-aware multi-stream densely connected convolutional neural network-based algorithm, called DID-MDN, for joint rain density estimation and de-raining. The proposed method enables the network itself to automatically determine the rain-density information and then efficiently remove the corresponding rain-streaks guided by the estimated rain-density label. To better characterize rain-streaks with different scales and shapes, a multi-stream densely connected de-raining network is proposed which efficiently leverages features from different scales. Furthermore, a new dataset containing images with rain-density labels is created and used to train the proposed density-aware network.
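
The pipeline described above can be pictured with a small PyTorch sketch: a rain-density classifier predicts a light/medium/heavy label, and a multi-stream network extracts features at several scales and fuses them with that label to remove the rain streaks. This is only an illustrative sketch under assumed layer sizes and names (DensityClassifier, MultiStreamDerain); it omits the dense connections within each stream and is not the DID-MDN implementation from this repository.

import torch
import torch.nn as nn
import torch.nn.functional as F


class DensityClassifier(nn.Module):
    """Tiny stand-in for the rain-density classifier (3 classes: light/medium/heavy)."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))


class MultiStreamDerain(nn.Module):
    """Three streams with different receptive fields, fused with the density label."""
    def __init__(self, num_classes=3):
        super().__init__()
        # Kernel sizes 3/5/7 stand in for "features from different scales".
        self.streams = nn.ModuleList([
            nn.Sequential(nn.Conv2d(3, 16, k, padding=k // 2), nn.ReLU(inplace=True))
            for k in (3, 5, 7)
        ])
        # The estimated density label is broadcast as extra channels before fusion.
        self.fuse = nn.Sequential(
            nn.Conv2d(16 * 3 + num_classes, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, rainy, density_logits):
        feats = [stream(rainy) for stream in self.streams]
        label = F.softmax(density_logits, dim=1)
        label_map = label[:, :, None, None].expand(-1, -1, *rainy.shape[2:])
        fused = torch.cat(feats + [label_map], dim=1)
        # Predict the rain-streak residual and subtract it from the rainy input.
        return rainy - self.fuse(fused)


if __name__ == "__main__":
    rainy = torch.rand(1, 3, 512, 512)
    logits = DensityClassifier()(rainy)
    derained = MultiStreamDerain()(rainy, logits)
    print(derained.shape)  # torch.Size([1, 3, 512, 512])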

@inproceedings{derain_zhang_2018,		
  title={Density-aware Single Image De-raining using a Multi-stream Dense Network},
  author={Zhang, He and Patel, Vishal M},
  booktitle={CVPR},
  year={2018}
} 

Prerequisites:

  1. Linux
  2. Python 2 or 3
  3. CPU or NVIDIA GPU + CUDA cuDNN (CUDA 8.0)

Installation:

  1. Install PyTorch and dependencies from http://pytorch.org (Ubuntu + Python 2.7): conda install pytorch torchvision -c pytorch

  2. Install torchvision from source: git clone https://github.com/pytorch/vision && cd vision && python setup.py install

  3. Install the Python packages numpy, scipy, and PIL (Pillow). (pdb is part of the Python standard library.)
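
After these steps, a quick environment sanity check (not part of this repository) can confirm that PyTorch, torchvision, and the CUDA runtime are visible before running the demo:

import torch
import torchvision
import numpy
import scipy
import PIL  # Pillow

# Print library versions and confirm the GPU is reachable.
print("PyTorch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("numpy:", numpy.__version__, "| scipy:", scipy.__version__)
print("CUDA available:", torch.cuda.is_available())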

Demo using pre-trained model

python test.py --dataroot ./facades/github --valDataroot ./facades/github --netG ./pre_trained/netG_epoch_9.pth   

The pre-trained de-raining model can be downloaded at the following link (put it in the folder 'pre_trained'): https://drive.google.com/drive/folders/1VRUkemynOwWH70bX9FXL4KMWa4s_PSg2?usp=sharing

The pre-trained density-aware model can be downloaded at the following link (put it in the folder 'classification'): https://drive.google.com/drive/folders/1-G86JTvv7o1iTyfB2YZAQTEHDtSlEUKk?usp=sharing

The pre-trained residual-aware model can be downloaded at the following link (put it in the folder 'residual_heavy'): https://drive.google.com/drive/folders/1bomrCJ66QVnh-WduLuGQhBC-aSWJxPmI?usp=sharing

Training (density-aware de-raining network using GT labels)

python derain_train_2018.py  --dataroot ./facades/DID-MDN-training/Rain_Medium/train2018new  --valDataroot ./facades/github --exp ./check --netG ./pre_trained/netG_epoch_9.pth
Make sure you download the training samples and put them in the right folder.

Density-estimation Training (rain-density classifier)

python train_rain_class.py  --dataroot ./facades/DID-MDN-training/Rain_Medium/train2018new  --exp ./check_class	

Testing

python demo.py --dataroot ./your_dataroot --valDataroot ./your_dataroot --netG ./pre_trained/netG_epoch_9.pth   

Reproduce

To reproduce the quantitative results reported in the paper, save both the generated and the target images from python demo.py in .png format, then evaluate them offline with a PSNR and SSIM implementation in Python or Matlab (a sketch follows below). In addition, keep netG.train() during testing, since the training batch size is 1.
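
A minimal sketch of this offline evaluation, assuming the generated and target images were saved as .png files with matching names in two placeholder folders (./results and ./targets) and using the scikit-image 0.19+ API:

import os
from skimage import io
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

result_dir, target_dir = "./results", "./targets"  # placeholder paths
psnr_vals, ssim_vals = [], []
for name in sorted(os.listdir(result_dir)):
    pred = io.imread(os.path.join(result_dir, name))
    gt = io.imread(os.path.join(target_dir, name))
    # Both images are uint8 RGB, so the data range is 255.
    psnr_vals.append(peak_signal_noise_ratio(gt, pred, data_range=255))
    ssim_vals.append(structural_similarity(gt, pred, channel_axis=-1, data_range=255))

print("PSNR: %.2f  SSIM: %.4f" % (sum(psnr_vals) / len(psnr_vals),
                                  sum(ssim_vals) / len(ssim_vals)))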

Dataset

Training (heavy, medium, light) and testing (Test A and Test B) data can be downloaded at the following link: https://drive.google.com/file/d/1cMXWICiblTsRl1zjN8FizF5hXOpVOJz4/view?usp=sharing

License

The code is released under the MIT license.

Acknowledgments

Many thanks to Vishwanath Sindagi for the insightful discussions and to Hang Zhang for his help.

Comments
  • Exception: Check dataroot

    When I load the dataset, it raises Exception: Check dataroot. I changed the path in pix2pix_class.py and the dataroot in the command, but it still didn't work.

    opened by Quant132 2
  • About test image size

    Hi, @hezhangsprinter. When I use the code to test my own dataset, it seems to resize the images to 512×512. I tried to modify the originalSize and imageSize inputs, but errors occurred. I want to know whether the code can only handle images of size 512×512, so that images of other sizes must be resized/cropped to 512×512. Because I want to compare your work with ours, I need to test on other images. I am looking forward to your answer, thanks a lot.

    opened by chenmeiya 2
  • File "train_rain_class.py", NameError: name 'optimizerD' is not defined

    When I run "python train_rain_class.py --dataroot ./facades/DID-MDN-training/Rain_Medium/train2018new --exp ./check_class" I get this error. I looked into train_rain_class.py and did not find a definition of 'optimizerD'.

    opened by penguinbing 2
  • RuntimeError: cuda runtime error (2) : out of memory at /pytorch/aten/src/THC/generic/THCStorage.cu:58

    Hello, thanks for your work. When I run test.py, I get RuntimeError: cuda runtime error (2) : out of memory at /pytorch/aten/src/THC/generic/THCStorage.cu:58 in File "test.py", line 215, in output = residue_net(val_inputv, label_cpu).

    How can I handle this problem?

    opened by SoarAnyway 2
  • Can't run the 'test.py' file with the pre-trained model

    Hi there. I followed the README to set up the environment, including Ubuntu + Python 2.7, etc., and downloaded all the pre-trained models into the corresponding directories. But when I tried to run test.py with the command python test.py --dataroot ./facades/github --valDataroot ./facades/github --netG ./pre_trained/netG_epoch_9.pth, an error occurred: TypeError: __init__() got an unexpected keyword argument 'pretrained'. Can you or anyone help me out? Really appreciate it!

    opened by AllenIrving 1
  • Can I use my dataset (without density labels) to train DID-MDN?

    Hi! I want to retrain DID-MDN using my own dataset (without density labels). I hope to retrain only the multi-stream dense network instead of the whole network. Thank you very much!

    opened by EvanLiu1 1
  • KeyError: 'unexpected key "conv_refin.weight" in state_dict'

    When I run test.py, the following error comes up:

    Traceback (most recent call last): File "test.py", line 156, in net_label.load_state_dict(torch.load('./classification/netG_epoch_9.pth')) File "E:\Anaconda3\envs\pytorch\lib\site-packages\torch\nn\modules\module.py", line 522, in load_state_dict.format(name)) KeyError: 'unexpected key "conv_refin.weight" in state_dict'.

    How can I solve the problem?

    opened by 351137353 1
  • How to train on light, medium, and heavy rain?

    Thanks for sharing your code. I have one question: the README says this method can handle light, medium, and heavy rain, but the training example only uses Rain_Medium. If I want to train on light, medium, and heavy rain, should I run python derain_train_2018.py --dataroot ./facades/DID-MDN-training/Rain_Light/train2018new --valDataroot ./facades/github --exp ./check --netG ./pre_trained/netG_epoch_9.pth, and then the same command with Rain_Heavy and with Rain_Medium? Waiting for your answer, thanks very much!

    opened by 351137353 1
  • One problem

    I am sorry, but I do not understand the code. In derain_dense.py, I find that the 'Dense_base_down0' code is:

    x11 = self.upsample(self.relu((self.conv11(x1))), size=shape_out)
    x21 = self.upsample(self.relu((self.conv11(x1))), size=shape_out)
    x31 = self.upsample(self.relu((self.conv11(x1))), size=shape_out)
    x41 = self.upsample(self.relu((self.conv11(x1))), size=shape_out)
    x51 = self.upsample(self.relu((self.conv11(x1))), size=shape_out)

    Why do you perform the same operation five times? Should the convolutions instead be applied to x1, x2, x3, x4, x5?

    cheers

    opened by qingsenyangit 1
  • Train residue extraction network from scratch

    Train residue extraction network from scratch

    Guidance from README.md

    python derain_train_2018.py  --dataroot ./facades/DID-MDN-training/Rain_Medium/train2018new  --valDataroot ./facades/github --exp ./check --netG ./pre_trained/netG_epoch_9.pth.
    Make sure you download the training sample and put in the right folder
    

    Code Snippet in derain_train_2018.py

    netG=net.Dense_rain()
    
    if opt.netG != '':
      netG.load_state_dict(torch.load(opt.netG))
    print(netG)
    

    Why fine-tune on top of a pre-trained model?

    opened by biubiubiiu 0
  • Inconsistency between code and paper

    From test.py, it seems that three sub-networks (Dense_rain_residual, vgg19ca, Dense_rain) are actually used, which are completely different architectures. This is never mentioned in the paper.

    opened by biubiubiiu 0
  • Missing necessary code for reproduction

    According to your paper, there seem to be three training stages:

    • Stage 1: training residual extraction network only
    • Stage 2: training density classification network based on residue output from the network in Stage 1
    • Stage 3: joint training of both networks

    quote: "To efficiently train the classifier, a two-stage training protocol is leveraged. A residual feature extraction network is firstly trained to estimate the residual part of the given rainy image, then a classification sub-network is trained using the estimated residual as the input and is optimized via the ground truth labels (rain-density). Finally, the two stages (feature extraction and classification) are jointly optimized."

    It seems that the code for Stage 1 corresponds to derain_residual.py, and Stage 3 corresponds to train_rain_class.py (which directly loads two pre-trained models). How is the density classifier of Stage 2 trained?

    opened by biubiubiiu 0
  • Residual extraction network is never updated in the joint training stage

    Code snippet from train_rain_class.py:

    netG = net1.vgg19ca()
    residue_net = net2.Dense_rain_residual()
    # ...
    optimizerG = optim.Adam(netG.parameters(), ...)
    # ...
    optimizerG.step()
    

    It seems that train_rain_class.py corresponds to the joint optimization stage in your paper, but only the density classifier network is updated here. Is this the expected behavior?

    BTW, there is an undefined variable optimizer_D in line 156. I'm not sure what it's for.

    opened by biubiubiiu 0
Owner
He Zhang, Research Scientist @ Adobe; PhD in Computer Vision and Deep Learning