[CVPR 2021] Official PyTorch Implementation for "Iterative Filter Adaptive Network for Single Image Defocus Deblurring"

Overview

IFAN: Iterative Filter Adaptive Network for Single Image Defocus Deblurring

License CC BY-NC

Check out the demo (GUI / Google Colab)!
The GUI version may occasionally be offline.

This repository contains the official PyTorch implementation of the following paper:

Iterative Filter Adaptive Network for Single Image Defocus Deblurring
Junyong Lee, Hyeongseok Son, Jaesung Rim, Sunghyun Cho, Seungyong Lee, CVPR 2021

About the Research

Click here

Iterative Filter Adaptive Network (IFAN)

Our deblurring network is built upon a simple encoder-decoder architecture consisting of a feature extractor, a reconstructor, and an IFAN module in the middle. The feature extractor extracts defocused features and feeds them to IFAN. IFAN removes blur in the feature domain by predicting spatially-varying deblurring filters and applying them to the defocused features using IAC. The deblurred features from IFAN are then passed to the reconstructor, which restores an all-in-focus image.
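As a structural sketch only (a minimal PyTorch skeleton of the pipeline described above; the class and module names here are illustrative, not the ones used in this repository):

    import torch.nn as nn

    class DeblurNet(nn.Module):
        # Hypothetical skeleton mirroring the description above.
        def __init__(self, extractor, ifan, reconstructor):
            super().__init__()
            self.extractor = extractor          # encoder: defocused image -> features
            self.ifan = ifan                    # predicts per-pixel deblurring filters, applies IAC
            self.reconstructor = reconstructor  # decoder: deblurred features -> all-in-focus image

        def forward(self, blurry):
            feat = self.extractor(blurry)       # defocused features
            feat = self.ifan(feat)              # deblurring happens in the feature domain
            return self.reconstructor(feat)     # restored all-in-focus image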

Iterative Adaptive Convolution Layer

The IAC layer iteratively computes feature maps as follows (refer to Eq. 1 in the main paper):
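A paraphrased form of the iteration (the notation below approximates the paper's; see Eq. 1 for the exact symbols):

    \hat{F}^0 = F, \qquad
    \tilde{F}^n = f_h^n \,\hat{\ast}\, \hat{F}^{n-1}, \qquad
    \hat{F}^n = f_v^n \,\hat{\ast}\, \tilde{F}^n + b^n, \qquad n = 1, \dots, N,

where F is the input feature map, f_h^n and f_v^n are the predicted horizontal and vertical 1-dim filters of the n-th iteration, b^n is a bias, \hat{\ast} denotes spatially-adaptive (per-pixel) convolution, and the deblurred features are the final \hat{F}^N.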

Separable filters in our IAC layer play a key role in resolving the limitations of the FAC layer. Our IAC layer secures larger receptive fields at much lower memory and computational costs than the FAC layer by utilizing 1-dim filters instead of 2-dim convolutions. However, compared to the dense 2-dim convolution filters of the FAC layer, our separable filters alone may not be accurate enough as deblurring filters. We handle this problem by iteratively applying separable filters to fully exploit the non-linear nature of a deep network. Our iterative scheme also enables small-sized separable filters to establish large receptive fields.
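To make the iteration concrete, here is a minimal PyTorch sketch of an IAC-style layer. The tensor shapes, the per-channel filter layout, and the bias handling are assumptions for illustration; the official implementation lives in models/IAC.py.

    import torch.nn.functional as F

    def sep_conv_1d(feat, kernel, horizontal):
        # feat:   (B, C, H, W) feature map
        # kernel: (B, C, k, H, W), one k-tap 1-dim filter per channel
        #         and per pixel (k assumed odd)
        B, C, k, H, W = kernel.shape
        pad = k // 2
        if horizontal:                          # filter taps run along the width
            feat = F.pad(feat, (pad, pad, 0, 0))
            patches = feat.unfold(3, k, 1)      # (B, C, H, W, k)
        else:                                   # filter taps run along the height
            feat = F.pad(feat, (0, 0, pad, pad))
            patches = feat.unfold(2, k, 1)      # (B, C, H, W, k)
        kernel = kernel.permute(0, 1, 3, 4, 2)  # (B, C, H, W, k)
        return (patches * kernel).sum(dim=-1)   # (B, C, H, W)

    def iac(feat, filters_h, filters_v, biases):
        # N iterations of per-pixel separable filtering (cf. Eq. 1):
        # horizontal pass, then vertical pass, then a bias.
        for f_h, f_v, b in zip(filters_h, filters_v, biases):
            feat = sep_conv_1d(feat, f_h, horizontal=True)
            feat = sep_conv_1d(feat, f_v, horizontal=False) + b
        return feat

For N iterations with k-tap filters, filters_h and filters_v would each hold N tensors of shape (B, C, k, H, W), and biases N tensors of shape (B, C, H, W), all predicted by the filter-prediction branch of IFAN.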

Disparity Map Estimation & Reblurring

To further improve the single image deblurring quality, we train our network with novel defocus-specific tasks: defocus disparity estimation and reblurring.

Disparity Map Estimation exploits dual-pixel data, which provides stereo images with a tiny baseline, whose disparities are proportional to defocus blur magnitudes. Leveraging dual-pixel stereo images, we train IFAN to predict the disparity map from a single image so that it can also learn to more accurately predict blur magnitudes.

Reblurring, motivated by the reblur-to-deblur scheme, utilizes the deblurring filters predicted by IFAN for reblurring all-in-focus images. For accurate reblurring, IFAN needs to predict deblurring filters that contain accurate information about the shapes and sizes of defocus blur. Based on this, during training, we introduce an additional network that inverts the predicted deblurring filters into reblurring filters and uses them to reblur an all-in-focus image.
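A hedged sketch of how the three training signals could be combined into one objective (the loss types, names, and weights below are illustrative assumptions, not the exact ones from the paper):

    import torch.nn.functional as F

    def total_loss(pred_sharp, gt_sharp,        # deblurring task
                   pred_disp, gt_disp,          # dual-pixel disparity task
                   reblurred, blurry,           # reblurring task
                   w_disp=1.0, w_reblur=1.0):   # illustrative weights
        l_deblur = F.mse_loss(pred_sharp, gt_sharp)
        l_disp   = F.mse_loss(pred_disp, gt_disp)
        l_reblur = F.mse_loss(reblurred, blurry)
        return l_deblur + w_disp * l_disp + w_reblur * l_reblur

Both auxiliary tasks are used only during training; at test time the network takes a single defocused image.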

The Real Depth of Field (RealDOF) test set

We present the Real Depth of Field (RealDOF) test set for quantitative and qualitative evaluations of single image defocus deblurring. Our RealDOF test set contains 50 image pairs, each consisting of a defocused image and its corresponding all-in-focus image, concurrently captured for the same scene with a dual-camera system. Refer to Sec. 1 in the supplementary material for more details.

Getting Started

Prerequisites

Tested environment: Ubuntu, Python 3.8, PyTorch, and CUDA 10.2 / 11.1 (see the install scripts below).

  1. Environment setup

    $ git clone https://github.com/codeslake/IFAN.git
    $ cd IFAN
    
    $ conda create -y --name IFAN python=3.8 && conda activate IFAN
    # for CUDA10.2
    $ sh install_CUDA10.2.sh
    # for CUDA11.1
    $ sh install_CUDA11.1.sh
  2. Datasets

    • Download and unzip test sets (DPDD, PixelDP, CUHK and RealDOF) under [DATASET_ROOT]:

      ├── [DATASET_ROOT]
      │   ├── DPDD
      │   ├── PixelDP
      │   ├── CUHK
      │   ├── RealDOF
      

      Note:

      • [DATASET_ROOT] is currently set to ./datasets/defocus_deblur/, which can be modified via config.data_offset in ./configs/config.py (see the snippet after this list).
  3. Pre-trained models

    • Download and unzip pretrained weights under ./ckpt/:

      ├── ./ckpt
      │   ├── IFAN.pytorch
      │   ├── ...
      │   ├── IFAN_dual.pytorch
      

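For example, pointing the code at a custom dataset root requires only a one-line change in ./configs/config.py (the path below is illustrative):

    # ./configs/config.py
    config.data_offset = '/path/to/my/datasets'  # default: './datasets/defocus_deblur/'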
Testing models of CVPR2021

## Table 2 in the main paper
# Our final model used for comparison
CUDA_VISIBLE_DEVICES=0 python run.py --mode IFAN --network IFAN --config config_IFAN --data DPDD --ckpt_abs_name ckpt/IFAN.pytorch

## Table 4 in the main paper
# Our final model with N=8
CUDA_VISIBLE_DEVICES=0 python run.py --mode IFAN_8 --network IFAN --config config_IFAN_8 --data DPDD --ckpt_abs_name ckpt/IFAN_8.pytorch

# Our final model with N=26
CUDA_VISIBLE_DEVICES=0 python run.py --mode IFAN_26 --network IFAN --config config_IFAN_26 --data DPDD --ckpt_abs_name ckpt/IFAN_26.pytorch

# Our final model with N=35
CUDA_VISIBLE_DEVICES=0 python run.py --mode IFAN_35 --network IFAN --config config_IFAN_35 --data DPDD --ckpt_abs_name ckpt/IFAN_35.pytorch

# Our final model with N=44
CUDA_VISIBLE_DEVICES=0 python run.py --mode IFAN_44 --network IFAN --config config_IFAN_44 --data DPDD --ckpt_abs_name ckpt/IFAN_44.pytorch

## Table 1 in the supplementary material
# Our model trained with 16 bit images
CUDA_VISIBLE_DEVICES=0 python run.py --mode IFAN_16bit --network IFAN --config config_IFAN_16bit --data DPDD --ckpt_abs_name ckpt/IFAN_16bit.pytorch

## Table 2 in the supplementary material
# Our model taking dual-pixel stereo images as an input
CUDA_VISIBLE_DEVICES=0 python run.py --mode IFAN_dual --network IFAN_dual --config config_IFAN --data DPDD --ckpt_abs_name ckpt/IFAN_dual.pytorch

Note:

  • Testing results will be saved in [LOG_ROOT]/IFAN_CVPR2021/[mode]/result/quanti_quali/[mode]_[epoch]/[data]/.
  • [LOG_ROOT] is set to ./logs/ by default. Refer here for more details about the logging.
  • Options
    • --data: The name of a dataset to evaluate. DPDD | RealDOF | CUHK | PixelDP | random. Default: DPDD
      • The folder structure can be modified in the function set_eval_path(..) in ./configs/config.py.
      • random is for testing models with any images, which should be placed as [DATASET_ROOT]/random/*.[jpg|png].
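For example, to test the final model on your own images placed under [DATASET_ROOT]/random/, the command mirrors the ones above; only --data changes:

    CUDA_VISIBLE_DEVICES=0 python run.py --mode IFAN --network IFAN --config config_IFAN --data random --ckpt_abs_name ckpt/IFAN.pytorch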

Wiki

Citation

If you find this code useful, please consider citing:

@InProceedings{Lee_2021_CVPR,
    author = {Lee, Junyong and Son, Hyeongseok and Rim, Jaesung and Cho, Sunghyun and Lee, Seungyong},
    title = {Iterative Filter Adaptive Network for Single Image Defocus Deblurring},
    booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    month = {June},
    year = {2021}
}

Contact

Open an issue for any inquiries. You may also contact [email protected].

Resources

All materials related to our paper are available via the following links:

  • The main paper
  • Supplementary material
  • Checkpoint files
  • The DPDD dataset (reference)
  • The PixelDP test set (reference)
  • The CUHK dataset (reference)
  • The RealDOF test set

License

This software is being made available under the terms in the LICENSE file.

Any exemptions to these terms require a license from the Pohang University of Science and Technology.

About Coupe Project

Project ‘COUPE’ aims to develop software that evaluates and improves the quality of images and videos based on big visual data. To achieve this goal, we extract sharpness, color, and composition features from images and develop technologies for restoring and improving image quality based on them. In addition, personalization technology through user preference analysis is under study.

Please check out other Coupe repositories in our Posgraph GitHub organization.


Comments
  • Using blurred images only, how to train your code?

    Hello sir, I am interested in image deblurring and I found your code.

    I checked your code and found that your network needs paired images (a blurred image, a sharp image, etc.). I want to know about the input-data format.

    This is from your config.py:

        # data dir
        # config.data_offset = '/data1/junyonglee/defocus_deblur'
        config.data_offset = 'datasets/defocus_deblur'
        config.c_path = os.path.join(config.data_offset, 'DPDD/train_c')
        config.l_path = os.path.join(config.data_offset, 'DPDD/train_l')
        config.r_path = os.path.join(config.data_offset, 'DPDD/train_r')
    

    I have blurred images only. Can I still train your code?

    Thanks, Edward Cho.

    opened by edwardcho 20
  • How to train my own dataset

    Hi, dear author! I currently have some 1280x960 sharp industrial product images, and I need to solve the problem of defocus blur during equipment operation. Do you know how to build a dataset and train it with your network? Thanks!

    opened by hust-lidelong 15
  • Missing configuration file

        Traceback (most recent call last):
          File "D:/python/SCI/IFAN-main/IFAN-main/run.py", line 308
            with open('{}/config.txt'.format(config.LOG_DIR.config)) as json_file:
        FileNotFoundError: [Errno 2] No such file or directory: '/Bean/logs/junyonglee\IFAN_CVPR2021\IFAN\config/config.txt'
        Loading Config for evaluation

    opened by radical-diligent 4
  • ZeroDivisionError

    Hi @HyeongseokSon1,

    I have an issue running the latest code with this command: python run.py --mode IFAN --network IFAN --config config_IFAN --data DPDD --ckpt_abs_name ckpt/IFAN.pytorch --cpu --data_offset ./test --output_offset ./test_out

    Basically, I would like to test a single .jpg image of mine stored in ./test.

    The error is shown like this: [screenshot]

    Here is my directory list: [screenshot]

    Do you know what the problem is? Or maybe I am doing something wrong?

    opened by ikhwan12 4
  • ResolutionCard issue

    Hi Lee, thanks for kindly sharing the code. I trained your model on my own data and it works well in most cases, but on a resolution card it may introduce artifacts, as shown in the attached image. Can you help me?

    Thanks

    opened by excllent123 4
  • I have a question.

    Hello Junyong Lee, thanks for your great work! I have a question about lines 35 to 36 in models/IAC.py:

        kernel2 = kernel2.permute(0, 2, 3, 1).view(N, H, W, channels, ksize)
        feat_in = torch.sum(torch.mul(feat_in, kernel1), -1)

    Is it supposed to be this way?

        kernel2 = kernel2.permute(0, 2, 3, 1).view(N, H, W, channels, ksize)
        feat_in = torch.sum(torch.mul(feat_in, kernel2), -1)

    Thank you for your help! @codeslake

    opened by huzippm 3
  • Any chance to make the demo script more user friendly?

    Thank you for this very interesting project. I would like to ask you to add an easier way to run the scripts - something like python run.py --input_path --output_path, i.e. the ability to specify the folder with the input images and the folder for the results. I think once there is a full demo, more people will pay attention to this project; unfortunately, not everyone uses Google Colab.

    opened by netrunner-exe 3
  • Possible syntax error in trainer.py

    Hi,

    I got a 'config is undefined' error while trying to train the model. It looks like line 298 in https://github.com/codeslake/IFAN/blob/main/models/trainers/trainer.py should be self.config.mode instead of config.mode.

    edit: 278->298

    opened by sreeragiyer 3
  • Run Inference with CPU

    Hi @codeslake, thank you for sharing this great and interesting project. I am testing your code with some blurred image inputs on Google Colab and it outputs great results. One quick question: currently it runs on a GPU with fast performance, around 30-40 ms. Just curious, is it possible to run your model inference on a CPU? Thank you very much.

    opened by ikhwan12 2
  • No module named 'models.archs.'

    When I test it, the following error is reported. Can you tell me the reason?

        No module named 'models.archs.'
          File "C:\Program Files\Python3\Lib\importlib\__init__.py", line 127, in import_module
            return _bootstrap._gcd_import(name[level:], package, level)
          File "D:\virtualenv\pytorch19\code\defocus\IFAN-main\models\trainers\trainer.py", line 39, in init
            lib = importlib.import_module('models.archs.{}'.format(config.network))
          File "D:\virtualenv\pytorch19\code\defocus\IFAN-main\models\__init__.py", line 5, in create_model
            model = lib.Model(config)
          File "D:\virtualenv\pytorch19\code\defocus\IFAN-main\eval.py", line 44, in init
            model = create_model(config)
          File "D:\virtualenv\pytorch19\code\defocus\IFAN-main\eval.py", line 75, in eval_quan_qual
            input_c_file_path_list, input_l_file_path_list, input_r_file_path_list, gt_file_path_list = init(config, mode)
          File "D:\virtualenv\pytorch19\code\defocus\IFAN-main\eval.py", line 187, in eval
            eval_quan_qual(config)
          File "D:\virtualenv\pytorch19\code\defocus\IFAN-main\run.py", line 338
            eval(config)
          File "C:\Program Files\Python3\Lib\runpy.py", line 85, in _run_code
            exec(code, run_globals)
          File "C:\Program Files\Python3\Lib\runpy.py", line 96, in _run_module_code
            mod_name, mod_spec, pkg_name, script_name)
          File "C:\Program Files\Python3\Lib\runpy.py", line 263, in run_path
            pkg_name=pkg_name, script_name=fname)
          File "C:\Program Files\Python3\Lib\runpy.py", line 85, in _run_code
            exec(code, run_globals)
          File "C:\Program Files\Python3\Lib\runpy.py", line 193, in _run_module_as_main (current frame)
            "__main__", mod_spec)

    opened by TimZhang001 2
  • About the evaluation on DPDD

    Hi, I notice that in your paper the test results of DPDNet are different from the original paper: in the original paper the PSNR is 25.13, while in your paper the PSNR is 25.23. I wonder what causes the difference?

    opened by notorious-eric 2
Owner
Junyong Lee
Ph.D. candidate at POSTECH