📖
Deep Attentional Guided Image Filtering
[Paper] Zhiwei Zhong, Xianming Liu, Junjun Jiang, Debin Zhao, Xiangyang Ji
Harbin Institute of Technology, Tsinghua University
Abstract
The guided filter is a fundamental tool in computer vision and computer graphics that aims to transfer structural information from a guidance image to a target image. Most existing methods construct filter kernels from the guidance alone without considering the mutual dependency between the guidance and the target. However, since the two images typically contain significantly different edges, simply transferring all structural information from the guidance to the target results in various artifacts. To cope with this problem, we propose an effective framework named deep attentional guided image filtering, whose filtering process fully integrates the complementary information contained in both images. Specifically, we propose an attentional kernel learning module that generates dual sets of filter kernels from the guidance and the target, respectively, and then adaptively combines them by modeling the pixel-wise dependency between the two images. Meanwhile, we propose a multi-scale guided image filtering module that progressively generates the filtering result with the constructed kernels in a coarse-to-fine manner. Correspondingly, a multi-scale fusion strategy is introduced to reuse the intermediate results in the coarse-to-fine process. Extensive experiments show that the proposed framework compares favorably with state-of-the-art methods in a wide range of guided image filtering applications, such as guided super-resolution, cross-modality restoration, texture removal, and semantic segmentation.
This repository is an official PyTorch implementation of the paper "Deep Attentional Guided Image Filtering".
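The core idea of the attentional kernel learning module (predicting one set of per-pixel kernels from the guidance, another from the target, and blending them with a pixel-wise attention map) can be illustrated with a minimal PyTorch sketch. Layer choices and names below are our own simplified assumptions, not the released implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionalKernelCombination(nn.Module):
    """Minimal sketch: per-pixel kernels from guidance and target features,
    blended with a learned pixel-wise attention map (illustrative only)."""

    def __init__(self, channels=32, kernel_size=3):
        super().__init__()
        self.kernel_size = kernel_size
        k2 = kernel_size * kernel_size
        # Hypothetical kernel predictors for the two branches.
        self.guidance_kernels = nn.Conv2d(channels, k2, 3, padding=1)
        self.target_kernels = nn.Conv2d(channels, k2, 3, padding=1)
        # Pixel-wise attention modeling the dependency between the two branches.
        self.attention = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, guidance_feat, target_feat, target):
        # Dual sets of spatially varying filter kernels, one per modality.
        kg = self.guidance_kernels(guidance_feat)            # (B, k*k, H, W)
        kt = self.target_kernels(target_feat)                # (B, k*k, H, W)
        # Attention decides, per pixel, how much guidance structure to transfer.
        alpha = self.attention(torch.cat([guidance_feat, target_feat], dim=1))
        kernels = F.softmax(alpha * kg + (1 - alpha) * kt, dim=1)
        # Apply the per-pixel kernels to the target image.
        b, c, h, w = target.shape
        patches = F.unfold(target, self.kernel_size, padding=self.kernel_size // 2)
        patches = patches.view(b, c, self.kernel_size ** 2, h * w)
        kernels = kernels.view(b, 1, self.kernel_size ** 2, h * w)
        return (patches * kernels).sum(dim=2).view(b, c, h, w)


if __name__ == '__main__':
    # Toy shapes only: 32-channel features and a single-channel target (e.g. a depth map).
    g = torch.randn(1, 32, 64, 64)
    t = torch.randn(1, 32, 64, 64)
    depth = torch.randn(1, 1, 64, 64)
    out = AttentionalKernelCombination(channels=32)(g, t, depth)
    print(out.shape)  # torch.Size([1, 1, 64, 64])
```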
🔧
Dependencies and Installation
- Python >= 3.5 (Anaconda or Miniconda is recommended)
- [PyTorch >= 1.2](https://pytorch.org/)
- NVIDIA GPU + CUDA
Installation
- Clone the repo

  git clone https://github.com/zhwzhong/DAGF.git
  cd DAGF
- Install dependent packages

  pip install -r requirements.txt
Dataset
Trained Models
You can directly download the trained models and put them in 'checkpoints':
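If you want to sanity-check a downloaded checkpoint before running anything, a quick inspection along these lines works; the file name below is illustrative and depends on which model you downloaded:

```python
import torch

# Hypothetical example: peek at a downloaded checkpoint (file name is illustrative).
state_dict = torch.load('checkpoints/DAGF_NYU_x16.pth', map_location='cpu')
print(list(state_dict.keys())[:5])  # first few stored parameter names
```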
Train
You can also train the model yourself:
python main.py --scale=16 --save_real --dataset_name='NYU' --model_name='DAGF'
Pay attention to the settings in the options (e.g., GPU id, model_name).
Test
We provide the processed test data in 'test_data' and pre-trained models in 'pre_trained'. With a trained model, you can run the test and save the predicted depth images:
python quick_test.py
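To evaluate the saved depth images yourself, a simple RMSE check along these lines can be used; the file paths and the .npy format are assumptions for illustration, not the actual output format of quick_test.py:

```python
import numpy as np

# Illustrative evaluation: compare a saved prediction against the ground-truth depth.
# Paths and the .npy format are placeholders; adapt them to the files quick_test.py writes.
pred = np.load('results/pred_depth.npy').astype(np.float64)
gt = np.load('test_data/gt_depth.npy').astype(np.float64)

rmse = np.sqrt(np.mean((pred - gt) ** 2))
print(f'RMSE: {rmse:.4f}')
```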
Acknowledgments
- Thanks to the NYU, Lu, Middlebury, Sintel, and DUT-OMRON datasets.
- Thanks to the authors of GF, DJFR, DKN, PacNet, DSRN, JBU, Yang, DGDIE, DMSG, TGV, SDF, and FBS for sharing their code.
TO DO
- Release the trained models of the compared methods.
- Release the experimental results of the compared methods.
🏅
Our method won the Real DSR Challenge at ICMR 2021.
Detailed information can be found here.
If you have any questions, please email [email protected].