Multi-Scale Progressive Fusion Network for Single Image Deraining

Related tags: Deep Learning, MSPFN
Overview

Multi-Scale Progressive Fusion Network for Single Image Deraining (MSPFN)

This is a TensorFlow implementation of the MSPFN model proposed in the paper "Multi-Scale Progressive Fusion Network for Single Image Deraining" (CVPR 2020).

Requirements

  • Python 3
  • TensorFlow 1.12.0
  • OpenCV
  • tqdm
  • glob and sys (Python standard library, no installation needed)

Motivation

Rain streaks recur within a rainy image and across its multi-scale versions (multi-scale pyramid images), and these repetitive samples may carry complementary information (e.g., similar appearance) for characterizing the target rain streaks. We explore multi-scale representations from both the input image scales and the deep neural network representations in a unified framework, and propose a multi-scale progressive fusion network (MSPFN) to exploit the correlated information of rain streaks across scales for single image deraining.

Usage

I. Train the MSPFN model

Dataset Organization Form

If you prepare your own dataset, please organize it in the following form:

|--train_data
    |--rainysamples
        |--file1
        |--file2
            :
        |--filen
    |--clean samples
        |--file1
        |--file2
            :
        |--filen

Then generate the corresponding '.npy' files in the '/train_data/npy' directory:

python preprocessing.py
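
The exact patch size, folder names, and output file names are defined in preprocessing.py. As a rough sketch only: the script pairs rainy and clean images, crops aligned patches, and stacks them into arrays. Below, the folder names follow the layout above, while the 96x96 patch size and the output file names are inferred from the released .npy data and may differ from the actual script.

import glob
import os

import cv2
import numpy as np

PATCH = 96  # patch size inferred from the released .npy arrays; may differ from preprocessing.py

def build_npy(rainy_dir, clean_dir, out_dir):
    # Pair rainy/clean images by filename, crop aligned patches, and stack them into arrays.
    os.makedirs(out_dir, exist_ok=True)
    rain_patches, clean_patches = [], []
    for rainy_path in sorted(glob.glob(os.path.join(rainy_dir, '*'))):
        clean_path = os.path.join(clean_dir, os.path.basename(rainy_path))
        rainy, clean = cv2.imread(rainy_path), cv2.imread(clean_path)
        if rainy is None or clean is None:
            continue  # skip unreadable or unpaired samples
        h, w = rainy.shape[:2]
        for y in range(0, h - PATCH + 1, PATCH):
            for x in range(0, w - PATCH + 1, PATCH):
                rain_patches.append(rainy[y:y + PATCH, x:x + PATCH])
                clean_patches.append(clean[y:y + PATCH, x:x + PATCH])
    np.save(os.path.join(out_dir, 'train_rain.npy'), np.stack(rain_patches))  # rainy patches
    np.save(os.path.join(out_dir, 'train.npy'), np.stack(clean_patches))      # clean patches (file name assumed)

build_npy('./train_data/rainysamples', './train_data/clean samples', './train_data/npy')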

Training

Download the training dataset (raw images: Baidu Cloud, password: 4qnh; .npy files: Baidu Cloud, password: gd2s), or prepare your own dataset in the form described above.

Run the following commands:

cd ./model
python train_MSPFN.py 
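
train_MSPFN.py reads the prepared .npy arrays; the traceback quoted in the comments below shows load_rain.load() loading '../model/train_data/npy/train_rain.npy' as an (N, 96, 96, 3) array. If training fails at that step, a quick check such as the following sketch can confirm that the arrays were downloaded or written completely (the clean-sample file name is an assumption):

import numpy as np

# Optional sanity check before training. The rainy-patch path follows the traceback in
# the comments section; 'train.npy' for the clean patches is an assumption.
rain = np.load('../model/train_data/npy/train_rain.npy')
clean = np.load('../model/train_data/npy/train.npy')
assert rain.shape == clean.shape, 'rainy and clean patch arrays must be aligned'
print('training patches:', rain.shape)  # expected shape (N, 96, 96, 3)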

II. Test the MSPFN model

Quick Test With the Raw Model (TEST_MSPFN_M17N1.PY)

Download the pretrained models (Baidu Cloud, password: u5v6; or Google Drive).

Download the commonly used rain test datasets (R100H, R100L, TEST100, TEST1200, TEST2800) from Google Drive, and the test samples and labels for the joint tasks (BDD350, COCO350, BDD150) from Baidu Cloud (password: 0e7o). In addition, the test results of other competing models can be downloaded here (TEST1200, TEST100, R100H, R100L).

Run the following commands:

cd ./model/test
python test_MSPFN.py

The deraining results will be saved in './test/test_data/MSPFN'. We only provide this baseline model for comparison. There is a gap (0.1-0.2 dB) between the provided model and the values reported in the paper, which stems from subsequent fine-tuning of the hyperparameters, training process, and constraints.
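
The evaluation scripts behind the reported PSNR/SSIM numbers are not included here. As an illustration only, the outputs could be compared against the ground truth with scikit-image on the luma channel; this is an assumed setup rather than the protocol used for the paper's numbers, and the ground-truth folder name below is hypothetical.

import glob
import os

import cv2
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

derained_dir = './test/test_data/MSPFN'  # results written by test_MSPFN.py
gt_dir = './test/test_data/gt'           # ground-truth folder (name is hypothetical)

psnr_vals, ssim_vals = [], []
for out_path in sorted(glob.glob(os.path.join(derained_dir, '*'))):
    gt_path = os.path.join(gt_dir, os.path.basename(out_path))
    out_img, gt_img = cv2.imread(out_path), cv2.imread(gt_path)
    if out_img is None or gt_img is None:
        continue  # skip files without a matching ground-truth image
    # Compare on the luma (Y) channel, a common convention in deraining papers.
    out_y = cv2.cvtColor(out_img, cv2.COLOR_BGR2YCrCb)[:, :, 0]
    gt_y = cv2.cvtColor(gt_img, cv2.COLOR_BGR2YCrCb)[:, :, 0]
    psnr_vals.append(peak_signal_noise_ratio(gt_y, out_y, data_range=255))
    ssim_vals.append(structural_similarity(gt_y, out_y, data_range=255))

print('PSNR: %.4f  SSIM: %.4f' % (sum(psnr_vals) / len(psnr_vals),
                                  sum(ssim_vals) / len(ssim_vals)))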

Test the Retrained Model With Your Own Dataset (TEST_MSPFN.PY)

Download the pre-trained models.

Put your dataset in './test/test_data/'.

Run the following commands:

cd ./model/test
python test_MSPFN.py

The deraining results will be in './test/test_data/MSPFN'.

Citation

@InProceedings{Kui_2020_CVPR,
	author = {Jiang, Kui and Wang, Zhongyuan and Yi, Peng and Chen, Chen and Huang, Baojin and Luo, Yimin and Ma, Jiayi and Jiang, Junjun},
	title = {Multi-Scale Progressive Fusion Network for Single Image Deraining},
	booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
	month = {June},
	year = {2020}
}
@ARTICLE{9294056,
  author={K. {Jiang} and Z. {Wang} and P. {Yi} and C. {Chen} and Z. {Han} and T. {Lu} and B. {Huang} and J. {Jiang}},
  journal={IEEE Transactions on Circuits and Systems for Video Technology}, 
  title={Decomposition Makes Better Rain Removal: An Improved Attention-guided Deraining Network}, 
  year={2020},
  volume={},
  number={},
  pages={1-1},
  doi={10.1109/TCSVT.2020.3044887}}
Comments
  • Hello! About Dataset...

    The original Rain1400 dataset consists of 12,600 training samples and 1,400 test samples. Referring to Table 1 in your paper, it appears that 1,400 images were taken out of the training samples and added to the test samples. What were the criteria for choosing those 1,400 images?

    opened by by5747 5
  • test model and train model are different?

    I carefully checked the generator function in MSPFN.py and the generator function in TEST_MSPFN.py, and found that they are different. Does this mean that the model trained on the training set is not the same as the model used for testing? For example, line 183 of TEST_MSPFN.py uses with tf.variable_scope('BCM_{}'.format(n)):, but there is no BCM_{} scope in MSPFN.py; the corresponding scope there is with tf.variable_scope('URAB_{}'.format(n)):.

    opened by Stonebobo 4
  • About Synthesis Rain

    Hello, authors! I would like to know how you synthesize the rain for the task-driven datasets. In the paper, you mention that you "create three new synthesis rain datasets ... through Photoshop". Could you provide a link describing how you made those rain streaks? Thanks in advance.

    opened by ShenZheng2000 3
  • How can I download the pretrained models for testing my dataset

    Hi, I want to ask some questions. First, I just want to test on my own dataset; how can I download the pretrained models? Second, is the TensorFlow version 1.12? Thanks a lot.

    opened by 617778859 2
  • downloading pretrained models

    Hi kuihua, firstly thanks for sharing your work!

    I was trying to download the pretrained model and other material through Baidu's cloud, but I wasn't able to succeed on either Linux or Mac. Is there any chance you could upload the models and datasets to a different platform such as Google Drive?

    Thank you.

    opened by ejhung 1
  • I have a few questions!

    Thank you for sharing your work. Regarding your paper:

    1. Why did you use ConvLSTM instead of a normal LSTM?
    2. At the 1/4 scale, the FFM module seems to just concatenate. What was the purpose of using it? Is it just for consistency?

    Thanks.

    opened by seonghyun0108 0
  • How can you retrain DIDMDN on your own dataset? It seems to be impossible.

    Thank you for sharing your work. In your paper, you train all methods on your own dataset to make a fair comparison. The DIDMDN method needs density labels, so I would like to know how you retrained DIDMDN on your own dataset. Thank you very much, looking forward to your reply.

    opened by MC-E 0
  • About evaluation metric code of COCO/BDD

    Dear authors, thank you for sharing the code and databases. I am not very familiar with the detection and segmentation areas, so I am wondering if you could share a link to, or the code for, the evaluation metrics on the detection/segmentation results used in the paper. I would appreciate your help! Best,

    opened by DuanHuiyu 1
  • Problem when training with the .npy data

    File "train_MSPFN.py", line 40, in train
        x_train, x_test, x_train_rain, x_test_rain = load_rain.load()
    File "/home/ubuntu/桌面/MSPFN-master/model/load_rain.py", line 6, in load
        x_train_rain = np.load('../model/train_data/npy/train_rain.npy')
    File "/home/ubuntu/anaconda3/envs/WeY/lib/python3.6/site-packages/numpy/lib/npyio.py", line 440, in load
        pickle_kwargs=pickle_kwargs)
    File "/home/ubuntu/anaconda3/envs/WeY/lib/python3.6/site-packages/numpy/lib/format.py", line 771, in read_array
        array.shape = shape
    ValueError: cannot reshape array of size 496291712 into shape (137609,96,96,3)

    How can this be solved, and what is causing it? Thanks, I look forward to your reply! I am using the .npy data provided at the link above.

    opened by PassionYezi 0
  • About the test results

    Hello, I recently read your paper with great interest. I tested with the pretrained model mentioned in the README, using the epoch 44 checkpoint. Using the metric-computation code provided by PReNet on Rain100H, your results are only PSNR = 28.2350 and SSIM = 0.8506, whereas PReNet's PSNR can reach above 29. May I ask what causes this difference? Thanks!

    opened by whyandbecause 2
  • About SSIM and PSNR

    Hello, thank you for sharing this wonderful work. I have a question: how are the PSNR and SSIM metrics computed, and what are the values of the corresponding parameters? Thanks.

    opened by yunhengzi 3
Owner
Kuijiang
I am a PhD and currently work at the National Engineering Research Center for Multimedia Software, School of Computer Science, Wuhan University.
SE-MSCNN: A Lightweight Multi-scaled Fusion Network for Sleep Apnea Detection Using Single-Lead ECG Signals

SE-MSCNN: A Lightweight Multi-scaled Fusion Network for Sleep Apnea Detection Using Single-Lead ECG Signals Abstract Sleep apnea (SA) is a common slee

null 9 Dec 21, 2022
Official repository for "Restormer: Efficient Transformer for High-Resolution Image Restoration". SOTA for motion deblurring, image deraining, denoising (Gaussian/real data), and defocus deblurring.

Restormer: Efficient Transformer for High-Resolution Image Restoration Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan,

Syed Waqas Zamir 906 Dec 30, 2022
Code for the CVPR2021 paper "Patch-NetVLAD: Multi-Scale Fusion of Locally-Global Descriptors for Place Recognition"

Patch-NetVLAD: Multi-Scale Fusion of Locally-Global Descriptors for Place Recognition This repository contains code for the CVPR2021 paper "Patch-NetV

QVPR 368 Jan 6, 2023
Multi-Stage Progressive Image Restoration

Multi-Stage Progressive Image Restoration Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Ming-Hsuan Yang, and Ling Sh

Syed Waqas Zamir 859 Dec 22, 2022
Fusion-DHL: WiFi, IMU, and Floorplan Fusion for Dense History of Locations in Indoor Environments

Fusion-DHL: WiFi, IMU, and Floorplan Fusion for Dense History of Locations in Indoor Environments Paper: arXiv (ICRA 2021) Video : https://youtu.be/CC

Sachini Herath 68 Jan 3, 2023
CCAFNet: Crossflow and Cross-scale Adaptive Fusion Network for Detecting Salient Objects in RGB-D Images

Code and result about CCAFNet(IEEE TMM) 'CCAFNet: Crossflow and Cross-scale Adaptive Fusion Network for Detecting Salient Objects in RGB-D Images' IEE

zyrant丶 14 Dec 29, 2021
Code for "FPS-Net: A convolutional fusion network for large-scale LiDAR point cloud segmentation".

FPS-Net Code for "FPS-Net: A convolutional fusion network for large-scale LiDAR point cloud segmentation", accepted by ISPRS journal of Photogrammetry

null 15 Nov 30, 2022
[AAAI 2021] MVFNet: Multi-View Fusion Network for Efficient Video Recognition

MVFNet: Multi-View Fusion Network for Efficient Video Recognition (AAAI 2021) Overview We release the code of the MVFNet (Multi-View Fusion Network).

Wenhao Wu 114 Nov 27, 2022
MVFNet: Multi-View Fusion Network for Efficient Video Recognition (AAAI 2021)

MVFNet: Multi-View Fusion Network for Efficient Video Recognition (AAAI 2021) Overview We release the code of the MVFNet (Multi-View Fusion Network).

null 2 Jan 29, 2022
Self-supervised Multi-modal Hybrid Fusion Network for Brain Tumor Segmentation

JBHI-Pytorch This repository contains a reference implementation of the algorithms described in our paper "Self-supervised Multi-modal Hybrid Fusion N

FeiyiFANG 5 Dec 13, 2021
Semi-supervised Video Deraining with Dynamical Rain Generator (CVPR, 2021, Pytorch)

S2VD Semi-supervised Video Deraining with Dynamical Rain Generator (CVPR, 2021) Requirements and Dependencies Ubuntu 16.04, cuda 10.0 Python 3.6.10, P

Zongsheng Yue 53 Nov 23, 2022
Official pytorch implementation of "DSPoint: Dual-scale Point Cloud Recognition with High-frequency Fusion"

DSPoint Official implementation of "DSPoint: Dual-scale Point Cloud Recognition with High-frequency Fusion". Paper link: https://arxiv.org/abs/2111.10

Ziyao Zeng 10 Nov 24, 2021
Official implement of Paper:A deeply supervised image fusion network for change detection in high resolution bi-temporal remote sening images

A deeply supervised image fusion network for change detection in high resolution bi-temporal remote sensing images 深度监督影像融合网络DSIFN用于高分辨率双时相遥感影像变化检测 Of

Chenxiao Zhang 135 Dec 19, 2022
Code for Referring Image Segmentation via Cross-Modal Progressive Comprehension, CVPR2020.

CMPC-Refseg Code of our CVPR 2020 paper Referring Image Segmentation via Cross-Modal Progressive Comprehension. Shaofei Huang*, Tianrui Hui*, Si Liu,

spyflying 55 Dec 1, 2022
PyTorch implementation of "A Full-Band and Sub-Band Fusion Model for Real-Time Single-Channel Speech Enhancement."

FullSubNet This Git repository for the official PyTorch implementation of "A Full-Band and Sub-Band Fusion Model for Real-Time Single-Channel Speech E

郝翔 357 Jan 4, 2023
PyTorch code for 'Efficient Single Image Super-Resolution Using Dual Path Connections with Multiple Scale Learning'

Efficient Single Image Super-Resolution Using Dual Path Connections with Multiple Scale Learning This repository is for EMSRDPN introduced in the foll

null 7 Feb 10, 2022
Multi-Scale Geometric Consistency Guided Multi-View Stereo

ACMM [News] The code for ACMH is released!!! [News] The code for ACMP is released!!! About ACMM is a multi-scale geometric consistency guided multi-vi

Qingshan Xu 118 Jan 4, 2023
(CVPR 2022 - oral) Multi-View Depth Estimation by Fusing Single-View Depth Probability with Multi-View Geometry

Multi-View Depth Estimation by Fusing Single-View Depth Probability with Multi-View Geometry Official implementation of the paper Multi-View Depth Est

Bae, Gwangbin 138 Dec 28, 2022