Deep Constrained Least Squares for Blind Image Super-Resolution

Official PyTorch implementation of the paper "Deep Constrained Least Squares for Blind Image Super-Resolution", CVPR 2022.

[Paper]

Updates

[2022.03.09] We released the code and provided the pretrained model weights here.
[2022.03.02] Our paper has been accepted by CVPR 2022.

Overview

[Figure: overview of the DCLS framework]

Dependencies

  • OS: Ubuntu 18.04
  • NVIDIA:
    • CUDA: 10.1
    • cuDNN: 7.6.1
  • Python 3
  • PyTorch >= 1.6
  • Python packages: numpy, opencv-python, lmdb, pyyaml

Dataset Preparation

We use DIV2K and Flickr2K as our training datasets (3,450 images in total).

To transform datasets to binary files for efficient IO, run:

python3 codes/scripts/create_lmdb.py
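
If you want a sense of what the conversion involves, the sketch below packs a folder of PNG images into an LMDB file. The folder and output paths are illustrative assumptions; codes/scripts/create_lmdb.py manages keys and meta information in its own way, so treat this only as a rough outline.

# Rough sketch of packing an image folder into LMDB (illustrative paths, not the script's exact logic)
import glob
import os

import cv2
import lmdb

img_folder = "/datasets/DF2K/HR"       # hypothetical input folder of HR PNGs
lmdb_path = "/datasets/DF2K/HR.lmdb"   # hypothetical output LMDB path

img_paths = sorted(glob.glob(os.path.join(img_folder, "*.png")))
env = lmdb.open(lmdb_path, map_size=1 << 40)  # generous 1 TB map size

with env.begin(write=True) as txn:
    for path in img_paths:
        img = cv2.imread(path, cv2.IMREAD_UNCHANGED)
        key = os.path.splitext(os.path.basename(path))[0].encode("ascii")
        txn.put(key, cv2.imencode(".png", img)[1].tobytes())  # store PNG-encoded bytes
env.close()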

For evaluation on isotropic Gaussian kernels (Gaussian8), we use five datasets: Set5, Set14, Urban100, BSD100, and Manga109.

To generate the LRblur/LR/HR/Bicubic dataset paths, run:

python3 codes/scripts/generate_mod_blur_LR_bic.py
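
As a rough illustration of the isotropic (Gaussian8-style) degradation this script applies, the sketch below blurs an HR image with an isotropic Gaussian kernel and bicubically downsamples it by the scale factor. The kernel size, sigma, and file names are assumptions for illustration; the exact ranges are set in the option files.

# Sketch of isotropic Gaussian blur + bicubic downsampling (illustrative settings)
import cv2
import numpy as np

def isotropic_gaussian_kernel(ksize=21, sigma=2.6):
    ax = np.arange(ksize) - (ksize - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return kernel / kernel.sum()  # normalize so the kernel sums to 1

scale = 4
hr = cv2.imread("Set5/HR/baby.png")  # hypothetical HR image path
kernel = isotropic_gaussian_kernel(ksize=21, sigma=2.6)
blurred = cv2.filter2D(hr, -1, kernel, borderType=cv2.BORDER_REFLECT)
lr_blur = cv2.resize(blurred, None, fx=1 / scale, fy=1 / scale, interpolation=cv2.INTER_CUBIC)
cv2.imwrite("Set5/LRblur/x4/baby.png", lr_blur)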

For evaluation on anisotropic Gaussian kernels, we use DIV2KRK.

(You need to modify the file paths in the scripts to match your local setup.)

Train

  1. The core algorithm is in codes/config/DCLS.
  2. Modify the option files in codes/config/DCLS/options to set the data paths, number of iterations, and other parameters.
  3. To train the model(s) in the paper, run the commands below.

For single GPU:

cd codes/config/DCLS
python3 train.py -opt=options/setting1/train_setting1_x4.yml

For distributed training:

cd codes/config/DCLS
python3 -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 train.py -opt=options/setting1/train_setting1_x4.yml --launcher pytorch

Alternatively, choose the training options in demo.sh and run:

cd codes/config/DCLS
sh demo.sh

Evaluation

To evaluate our method, modify the benchmark path and model path in the option file, then run:

cd codes/config/DCLS
python3 test.py -opt=options/setting1/test_setting1_x4.yml
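
The option files control which metrics are reported; as a rough reference, the sketch below computes PSNR on the Y channel with border cropping, a common convention in SR benchmarks. It is an assumption for illustration, so check test.py and its utility functions for the exact protocol used in the paper.

# Sketch of PSNR on the Y channel with border cropping (common SR convention, not necessarily the repo's exact code)
import cv2
import numpy as np

def psnr_y(sr_path, hr_path, scale=4):
    sr = cv2.imread(sr_path)
    hr = cv2.imread(hr_path)
    # BGR -> YCrCb, keep only the luma (Y) channel
    sr_y = cv2.cvtColor(sr, cv2.COLOR_BGR2YCrCb)[..., 0].astype(np.float64)
    hr_y = cv2.cvtColor(hr, cv2.COLOR_BGR2YCrCb)[..., 0].astype(np.float64)
    # crop `scale` pixels from each border before measuring
    sr_y = sr_y[scale:-scale, scale:-scale]
    hr_y = hr_y[scale:-scale, scale:-scale]
    mse = np.mean((sr_y - hr_y) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

print(psnr_y("results/Set5/baby.png", "datasets/Set5/HR/baby.png"))  # hypothetical paths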

Results

Comparison on Isotropic Gaussian kernels (Gaussian8)

[Results table: Gaussian8 kernels]

Comparison on Anisotropic Gaussian kernels (DIV2KRK)

[Results table: DIV2KRK]

Citations

If our code helps your research or work, please consider citing our paper. The following is a BibTeX reference.

@inproceedings{luo2022deep,
  title={Deep Constrained Least Squares for Blind Image Super-Resolution},
  author={Luo, Ziwei and Huang, Haibin and Yu, Lei and Li, Youwei and Fan, Haoqiang and Liu, Shuaicheng},
  booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2022}
}

Contact

email: [[email protected]]

Acknowledgement

This project is based on [DAN], [MMSR] and [BasicSR].

Comments
  • issue about x2 SR training process

    Hi, when I train the model under setting1_x2, I cannot obtain a good result. Maybe I missed something in my training. Could you give some advice on it?

    #### general settings
    name: DCLSx2_setting1
    use_tb_logger: true
    model: blind
    distortion: sr
    scale: 2
    gpu_ids: [0, 1, 2, 3]
    pca_matrix_path: ../../../pca_matrix/DCLS/pca_matrix.pth
    
    degradation:
      random_kernel: True
      ksize: 21
      code_length: 10
      sig_min: 0.2
      sig_max: 2.0
      rate_iso: 1.0
      random_disturb: false
    
    #### datasets
    datasets:
      train:
        name: DIV2K
        mode: GT
        dataroot_GT: /datasets/DF2K/HR/x2HR.lmdb
    
        use_shuffle: true
        n_workers: 4  # per GPU
        batch_size: 64
        GT_size: 128
        LR_size: 64
        use_flip: true
        use_rot: true
        color: RGB
      val:
        name: Set5
        mode: LQGT
        dataroot_GT: /datasets/Set5/x2HR.lmdb
        dataroot_LQ: /datasets/Set5/x2LRblur.lmdb
    
    #### network structures
    network_G:
      which_model_G: DCLS
      setting:
        nf: 64
        nb: 10
        ng: 5
        input_para: 256
        kernel_size: 21
    
    #### path
    path:
      pretrain_model_G: ~
      strict_load: true
      resume_state: ~
    
    #### training settings: learning rate scheme, loss
    train:
      lr_G: !!float 4e-4
      lr_E: !!float 4e-4
      lr_scheme: MultiStepLR
      beta1: 0.9
      beta2: 0.99
      niter: 500000
      warmup_iter: -1  # no warm up
      lr_steps: [200000, 400000]
      lr_gamma: 0.5
      eta_min: !!float 1e-7
    
      pixel_criterion: l1
      pixel_weight: 1.0
    
      manual_seed: 0
      val_freq: !!float 100
    
    #### logger
    logger:
      print_freq: 20
      save_checkpoint_freq: !!float 1000
    
    

    These are the config settings. Thank you for your reply!

    opened by wwlCape 5
  • sos

    Hello, in the forward() method of CLS there is a loop that runs 16 times:

        for i in range(feature_pad.shape[1]):  # operate on each of the 16 channels
            feature_ch = feature_pad[:, i:i+1, :, :]
            clear_feature_ch = get_uperleft_denominator(feature_ch, kernel, kernel_P[:, i:i+1, :, :])
            clear_features[:, i:i+1, :, :] = clear_feature_ch[:, :, ks:-ks, ks:-ks]

    I have read get_uperleft_denominator(feature_ch, kernel, kernel_P[:, i:i+1, :, :]) many times and still cannot understand it. I only know that it applies some Fourier-transform operations to the padded input features, the predicted smoothing filter, and the blur kernel. Could you explain what lines 4-8 below are doing? Many thanks! PS: the circular-shift operation performed inside the function called on line 4 also confuses me; I can follow the code, but I do not understand why it is done this way.

    1. # ------------------------------------------------------
    2. # -----------Constraint Least Square Filter-------------
    3. def get_uperleft_denominator(img, kernel, grad_kernel):
    4.     ker_f = convert_psf2otf(kernel, img.size())       # discrete Fourier transform of the blur kernel
    5.     ker_p = convert_psf2otf(grad_kernel, img.size())  # discrete Fourier transform of the smoothing filter
    6.     denominator = inv_fft_kernel_est(ker_f, ker_p)
    7.     numerator = torch.rfft(img, 3, onesided=False)
    8.     deblur = deconv(denominator, numerator)
    9.     return deblur
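
    For reference, lines 4-8 appear to implement classical constrained least squares (CLS) deconvolution in the frequency domain. Below is a minimal NumPy sketch of that computation with simplified names, without the learned per-channel filters and without the exact padding/shift handling, so it is an illustrative assumption rather than the repo's code.

        # Minimal NumPy sketch of classical CLS deconvolution:
        #   X_hat = IFFT( conj(K) * FFT(y) / (|K|^2 + |P|^2) )
        # The small epsilon is illustrative; convert_psf2otf additionally
        # circular-shifts the padded PSF so its center sits at the origin before the FFT.
        import numpy as np

        def cls_deconv(blurred, kernel, smooth_filter):
            h, w = blurred.shape
            K = np.fft.fft2(kernel, s=(h, w))         # OTF of the blur kernel
            P = np.fft.fft2(smooth_filter, s=(h, w))  # OTF of the smoothing/regularization filter
            Y = np.fft.fft2(blurred)
            X = np.conj(K) * Y / (np.abs(K) ** 2 + np.abs(P) ** 2 + 1e-8)
            return np.real(np.fft.ifft2(X))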

    opened by fenghao195 3
  • Question about training data preparation

    Hello, the "Dataset Preparation" section of your README says "To transform datasets to binary files for efficient IO, run:", but when I run create_lmdb.py I do not know which training-data path img_folder should point to. In other words, when generating the training data, should I set img_folder to the DIV2K HR images, or to the LR images under DIV2Kx4?

    opened by EchoXu98 4
  • How can we use this model to train on real low resolution images without super-resolution images?

    I am new to super-resolution research. For blind image super-resolution, I think we cannot obtain the ground-truth SR image or the true kernel, but your work uses this information, so perhaps this is not blind image super-resolution?

    opened by Jiaxin-lucky 1
  • New Super-Resolution Benchmarks

    Hello,

    MSU Graphics & Media Lab Video Group has recently launched two new Super-Resolution Benchmarks.

    If you are interested in participating, you can add your algorithm by following the submission steps.

    We would be grateful for your feedback on our work!

    opened by EvgeneyBogatyrev 0
  • about paper

    Hello, I have a few questions about Section 3.3 of your paper:

    1. How is Eq. (10) derived? Shouldn't the quantity to be minimized be ||G_i X↓s − R_i||^2, as in Eq. (4a) of Deep Wiener Deconvolution?
    2. Why might the smoothing filter P and the Lagrange multiplier be inconsistent in the feature space? How should this be understood?
    3. Why can a neural network (self.grad_filter in the code) predict a set of smoothing filters with implicit Lagrange multipliers? How is this reflected, and what is the underlying principle?

    opened by xanxuso 3