MIMO-UNet - Official PyTorch Implementation

Overview

This repository provides the official PyTorch implementation of the following paper:

Rethinking Coarse-to-Fine Approach in Single Image Deblurring

Sung-Jin Cho *, Seo-Won Ji *, Jun-Pyo Hong, Seung-Won Jung, Sung-Jea Ko

In ICCV 2021. (* indicates equal contribution)

Paper: https://arxiv.org/abs/2108.05054

Abstract: Coarse-to-fine strategies have been extensively used for the architecture design of single image deblurring networks. Conventional methods typically stack sub-networks with multi-scale input images and gradually improve sharpness of images from the bottom sub-network to the top sub-network, yielding inevitably high computational costs. Toward a fast and accurate deblurring network design, we revisit the coarse-to-fine strategy and present a multi-input multi-output U-net (MIMO-UNet). The MIMO-UNet has three distinct features. First, the single encoder of the MIMO-UNet takes multi-scale input images to ease the difficulty of training. Second, the single decoder of the MIMO-UNet outputs multiple deblurred images with different scales to mimic multi-cascaded U-nets using a single U-shaped network. Last, asymmetric feature fusion is introduced to merge multi-scale features in an efficient manner. Extensive experiments on the GoPro and RealBlur datasets demonstrate that the proposed network outperforms the state-of-the-art methods in terms of both accuracy and computational complexity.
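
A rough conceptual sketch of the multi-input multi-output idea in PyTorch is shown below. It is illustrative only: the module names and channel widths are placeholders and do not correspond to the layers used in this repository. A single U-shaped network takes the blurred image at three scales and produces a residual deblurred estimate at each scale.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def conv(in_ch, out_ch, stride=1):
        return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1),
                             nn.ReLU(inplace=True))

    class MIMOSketch(nn.Module):
        """Conceptual sketch only: one encoder/decoder, multi-scale inputs and outputs."""

        def __init__(self, ch=32):
            super().__init__()
            self.enc1 = conv(3, ch)                      # level 1 (full resolution)
            self.down1 = conv(ch, ch, stride=2)
            self.enc2 = conv(ch + 3, ch * 2)             # also sees the 1/2-scale input
            self.down2 = conv(ch * 2, ch * 2, stride=2)
            self.enc3 = conv(ch * 2 + 3, ch * 4)         # also sees the 1/4-scale input
            self.dec3 = conv(ch * 4, ch * 2)
            self.dec2 = conv(ch * 4, ch)                 # concatenated with the enc2 skip
            self.dec1 = conv(ch * 2, ch)                 # concatenated with the enc1 skip
            self.out3 = nn.Conv2d(ch * 2, 3, 3, padding=1)
            self.out2 = nn.Conv2d(ch, 3, 3, padding=1)
            self.out1 = nn.Conv2d(ch, 3, 3, padding=1)

        def forward(self, blur):
            # Multi-scale inputs: the same blurred image at 1x, 1/2x and 1/4x.
            blur2 = F.interpolate(blur, scale_factor=0.5, mode='bilinear', align_corners=False)
            blur4 = F.interpolate(blur, scale_factor=0.25, mode='bilinear', align_corners=False)

            f1 = self.enc1(blur)
            f2 = self.enc2(torch.cat([self.down1(f1), blur2], dim=1))
            f3 = self.enc3(torch.cat([self.down2(f2), blur4], dim=1))

            d3 = self.dec3(f3)
            d2 = self.dec2(torch.cat([F.interpolate(d3, scale_factor=2, mode='bilinear',
                                                    align_corners=False), f2], dim=1))
            d1 = self.dec1(torch.cat([F.interpolate(d2, scale_factor=2, mode='bilinear',
                                                    align_corners=False), f1], dim=1))

            # Multi-scale outputs: residual deblurred estimates at 1/4x, 1/2x and 1x.
            return [self.out3(d3) + blur4, self.out2(d2) + blur2, self.out1(d1) + blur]

In the actual MIMO-UNet, the simple concatenations above are replaced by shallow convolutional modules, feature attention, and the asymmetric feature fusion described in the paper.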


Contents

The contents of this repository are as follows:

  1. Dependencies
  2. Dataset
  3. Train
  4. Test
  5. Performance
  6. Model

Dependencies

  • Python
  • PyTorch (1.4)
    • Different versions may cause some errors.
  • scikit-image
  • opencv-python
  • Tensorboard

Dataset

  • Download the deblurring dataset from the GoPro dataset.

  • Unzip the files into the dataset folder.

  • Preprocess dataset by running the command below:

    python data/preprocessing.py

After preparing the dataset, the data folder should follow the format below:

GOPRO
├─ train
│ ├─ blur    % 2103 image pairs
│ │ ├─ xxxx.png
│ │ ├─ ......
│ │
│ ├─ sharp
│ │ ├─ xxxx.png
│ │ ├─ ......
│
├─ test    % 1111 image pairs
│ ├─ ...... (same as train)
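
For illustration, a minimal sketch of how this layout can be consumed in PyTorch is shown below. The GoProPairs class is a hypothetical helper, not the loader shipped in this repository; it simply pairs each blurred image with the sharp image of the same filename.

    import os
    from glob import glob

    import cv2
    from torch.utils.data import Dataset

    class GoProPairs(Dataset):
        """Hypothetical loader pairing GOPRO/<split>/blur/xxxx.png with sharp/xxxx.png."""

        def __init__(self, root, split='train'):
            self.blur_paths = sorted(glob(os.path.join(root, split, 'blur', '*.png')))

        def __len__(self):
            return len(self.blur_paths)

        def __getitem__(self, idx):
            blur_path = self.blur_paths[idx]
            sharp_path = blur_path.replace(os.sep + 'blur' + os.sep, os.sep + 'sharp' + os.sep)
            # Images are read with OpenCV (BGR) and converted to RGB; cropping,
            # normalization and augmentation are omitted in this sketch.
            blur = cv2.cvtColor(cv2.imread(blur_path), cv2.COLOR_BGR2RGB)
            sharp = cv2.cvtColor(cv2.imread(sharp_path), cv2.COLOR_BGR2RGB)
            return blur, sharp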


Train

To train MIMO-UNet+, run the command below:

python main.py --model_name "MIMO-UNetPlus" --mode "train" --data_dir "dataset/GOPRO"

or to train MIMO-UNet, run the command below:

python main.py --model_name "MIMO-UNet" --mode "train" --data_dir "dataset/GOPRO"

Model weights will be saved in the results/model_name/weights folder.


Test

To test MIMO-UNet+, run the command below:

python main.py --model_name "MIMO-UNetPlus" --mode "test" --data_dir "dataset/GOPRO" --test_model "MIMO-UNetPlus.pkl"

or to test MIMO-UNet, run the command below:

python main.py --model_name "MIMO-UNet" --mode "test" --data_dir "dataset/GOPRO" --test_model "MIMO-UNet.pkl"

Output images will be saved in the results/model_name/result_image folder.


Performance

Method      | MIMO-UNet | MIMO-UNet+ | MIMO-UNet++
------------|-----------|------------|------------
PSNR (dB)   | 31.73     | 32.45      | 32.68
SSIM        | 0.951     | 0.957      | 0.959
Runtime (s) | 0.008     | 0.017      | 0.040

Model

We provide our pre-trained models. You can test our network following the instructions above.

Comments
  • how to use multi-gpus for training

    When I use nn.DataParallel(model), I get the following error:

    Traceback (most recent call last):
      File "main.py", line 67, in <module>
        main(args)
      File "main.py", line 31, in main
        _train(model, args)
      File "MIMO-UNet/train.py", line 53, in _train
        pred_img = model(input_img)
      File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/data_parallel.py", line 154, in forward
        replicas = self.replicate(self.module, self.device_ids[:len(inputs)])
      File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/data_parallel.py", line 159, in replicate
        return replicate(module, device_ids, not torch.is_grad_enabled())
      File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/replicate.py", line 88, in replicate
        param_copies = _broadcast_coalesced_reshape(params, devices, detach)
      File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/replicate.py", line 71, in _broadcast_coalesced_reshape
        tensor_copies = Broadcast.apply(devices, *tensors)
      File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/_functions.py", line 21, in forward
        outputs = comm.broadcast_coalesced(inputs, ctx.target_gpus)
      File "/usr/local/lib/python3.6/dist-packages/torch/cuda/comm.py", line 39, in broadcast_coalesced
        return torch._C._broadcast_coalesced(tensors, devices, buffer_size)
    RuntimeError: inputs must be on unique devices
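
    For reference, a minimal sketch of the usual single-process nn.DataParallel setup is shown below (DummyNet is a stand-in for the model; the point is that the parameters and the input batch both start on one primary CUDA device before the wrapped forward call):

    import torch
    import torch.nn as nn

    class DummyNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.conv = nn.Conv2d(3, 3, 3, padding=1)

        def forward(self, x):
            return self.conv(x)

    if torch.cuda.is_available():
        device = torch.device('cuda:0')
        model = DummyNet().to(device)      # put all parameters on one device first
        model = nn.DataParallel(model)     # then wrap; one replica is created per GPU
        input_img = torch.randn(4, 3, 256, 256, device=device)
        pred_img = model(input_img)        # the batch is scattered across the GPUs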

    opened by StephanPan 13
  • When training the model, the error is as follows:

    When training the model, the error is as follows: "No such file or directory: 'dataset/GOPRO\\valid\\blur\\'"

    Hello, when I was training the MIMO-UNet model, I needed a validation set after 100 training epochs. Could you tell me how to make the validation set? The error is as follows: No such file or directory: 'dataset/GOPRO\valid\blur\'

    opened by BingY998 4
  • Training Problems

    I encountered some problems when training MIMO-UNet++ on the GoPro dataset. May I ask the author: during the training process, do unknown color blocks appear at certain positions in the generated images?

    opened by XiaoBuL 3
  • Testing code with geometric self-ensemble

    Thank you for your great work. I want to evaluate the images using your model with geometric self-ensemble. Is there testing code with self-ensemble?

    Thanks
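
    Geometric self-ensemble is usually implemented along the lines of the generic sketch below (not code from this repository; it assumes that when the model returns multiple scales, the full-resolution output comes last):

    import torch

    def self_ensemble(model, blur):
        """Sketch of x8 geometric self-ensemble: average predictions over flips/rotations."""
        preds = []
        for k in range(4):                              # 0, 90, 180, 270 degree rotations
            for flip in (False, True):                  # with and without horizontal flip
                x = torch.rot90(blur, k, dims=(-2, -1))
                if flip:
                    x = torch.flip(x, dims=(-1,))
                y = model(x)
                y = y[-1] if isinstance(y, (list, tuple)) else y
                if flip:
                    y = torch.flip(y, dims=(-1,))       # undo the flip
                preds.append(torch.rot90(y, -k, dims=(-2, -1)))  # undo the rotation
        return torch.stack(preds).mean(dim=0)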

    opened by ghost 3
  • A question about experiments on the RealBlur dataset

    Hello,

    I am a master's student working on image quality enhancement. Thank you for your excellent work. When I evaluated the released MIMO-UNet model on the RealBlur-J dataset, I obtained a PSNR of 28.99. Could you check whether there is a problem with my evaluation method? I implemented the dataset function myself, loading the data through the Test_list file of the RealBlur-J dataset. PSNR was computed in the same way as in your code, and the images were zero-padded so that their size becomes a multiple of 8 before the forward pass; the outputs were then cropped and compared against the ground truth.

    Thank you.
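
    (For illustration, the padding/cropping step described above could look like the generic sketch below; this is not this repository's evaluation code, and it assumes the full-resolution output comes last when multiple scales are returned.)

    import torch
    import torch.nn.functional as F

    def forward_with_pad(model, blur, multiple=8):
        """Zero-pad H and W up to a multiple of `multiple`, run the model, crop back."""
        _, _, h, w = blur.shape
        pad_h = (multiple - h % multiple) % multiple
        pad_w = (multiple - w % multiple) % multiple
        padded = F.pad(blur, (0, pad_w, 0, pad_h))     # pad the right and bottom borders
        with torch.no_grad():
            pred = model(padded)
        if isinstance(pred, (list, tuple)):            # multi-scale outputs
            pred = pred[-1]
        return pred[..., :h, :w]                       # crop back to the original size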

    opened by jhkim0759 2
  • loss implementation different from paper

    Hi,

    Thanks for sharing this great work~ I found that your implementation of the loss differs from the paper: the "1/t_k" normalization weight is not used in your code. Could you help clarify this?
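
    For reference, the paper's multi-scale content loss with the 1/t_k term can be sketched as below (a minimal illustration, not this repository's training code). If t_k denotes the number of elements at scale k, an L1 loss with mean reduction already applies that normalization:

    import torch.nn.functional as F

    def multi_scale_content_loss(preds, target):
        """Sketch of L_cont = sum_k (1/t_k) * ||pred_k - target_k||_1 over the K output scales."""
        loss = 0.0
        for pred in preds:                              # e.g. 1/4x, 1/2x and 1x outputs
            scaled_target = F.interpolate(target, size=pred.shape[-2:],
                                          mode='bilinear', align_corners=False)
            # reduction='mean' divides by the number of elements, i.e. the 1/t_k weight.
            loss = loss + F.l1_loss(pred, scaled_target, reduction='mean')
        return loss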

    opened by vztu 2
  • about the training time

    Hello. Thanks for your great work and code. I trained it on a Tesla V100 16G, and it takes about 4 hours for 50 epochs. Could you please tell me how long it took you to train for 3000 epochs? Thanks in advance.

    opened by c-yn 1
  • the image shown in the paper

    On page 5 of the paper (Figure 5, "Several examples on the GoPro test dataset"), I found the corresponding data in the GoPro dataset; the image is "test/blur/GOPR0854_11_00/001653_3.png", right? The original image does not look as blurry as the one shown in the paper. Were you testing with the original GoPro dataset?

    opened by mmpmmpmmpjosh 1
  • save deblur image in eval phase

    In line 53 of eval.py (shown below), the output tensor is converted to a PIL image after adding 0.5/255 and then saved. Why is this? The tensor has already been clipped to 0-1, and I cannot find the same processing in the official torchvision documentation.

    if args.save_image:
        save_name = os.path.join(args.result_dir, name[0])
        pred_clip += 0.5 / 255
        pred = F.to_pil_image(pred_clip.squeeze(0).cpu(), 'RGB')
        pred.save(save_name)
    
    opened by YellowOrz 1
  • RealBlur data for training

    Hello,

    How do you train the model for RealBlur? Do you fine-tune the GoPro pre-trained weights, or train from scratch with the GoPro+BSD+RealBlur-R datasets as MPRNet does? Also, do you plan to release the deblurring results for RealBlur_J and RealBlur_R?

    opened by pp00704831 1
  • SSIM

    When you test SSIM, did you use structural_similarity from skimage.metrics, and did you call it as structural_similarity(p_numpy, label_numpy, data_range=1, multichannel=True)?

    opened by EKChloe 0
  • Could you please share the data and figure code for the algorithm comparison plot in the README?

    Hi~ @chosj95 Thanks for sharing your nice work and conducting a careful comparison of runtime performance. I want to compare my algorithm with the existing algorithms shown in the figure. Could you please share the code and data for drawing that figure? Thanks a lot!

    opened by dawnlh 0
  • What is the actual effect of the model?

    Hello, is your model suitable for restoring images with mixed motion blur and focus blur? After training with motion-blur data only, I tested normal and focus-blurred images, and the results were extremely poor.

    opened by Bigtuo 0
  • A question about results on RealBlur dataset

    Hello, thanks for your excellent work. I have some confusion about the dataset you used for RealBlur. Could you please tell me which dataset(s) you used to obtain the results in Table 2: both GoPro and RealBlur, or only RealBlur itself? MPRNet only used RealBlur to get the RealBlur results.

    Thanks.

    opened by c-yn 0
  • Line 53 in eval.py

    Thank you for your great work!

    I have a question about the evaluation code. In line 53 of eval.py, pred_clip += 0.5 / 255 adds 0.5/255 to the output. What does that mean? Also, the code calculates PSNR with the unmodified tensor (pred_numpy). Why is that?

    Thank you.

    opened by dkmv0623 0
  • About the license for this model

    Thank you for sharing your great code. :smiley_cat:

    What is the license for this model? I'd like to reference it in the repository I'm working on if possible, but I want to post the license correctly. https://github.com/PINTO0309/PINTO_model_zoo

    Thank you.

    opened by PINTO0309 0
Owner
Sungjin Cho
Ph.D. Student at Korea University