MASA-SR: Matching Acceleration and Spatial Adaptation for Reference-Based Image Super-Resolution (CVPR2021)

Overview

Official PyTorch implementation of our CVPR 2021 paper "MASA-SR: Matching Acceleration and Spatial Adaptation for Reference-Based Image Super-Resolution".

Dependencies

  • python 3
  • pytorch >= 1.1.0
  • torchvision >= 0.4.0

Prepare Dataset

  1. Download CUFED train set and CUFED test set
  2. Place the datasets in this structure:
    CUFED
    ├── train
    │   ├── input
    │   └── ref 
    └── test
        └── CUFED5  
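
A quick way to confirm this layout, if it helps (a minimal sketch; data_root below is a placeholder for wherever you extracted the archives):

    from pathlib import Path

    # Expected CUFED layout, taken from the tree above
    data_root = Path('./CUFED')  # adjust to your data path (see --data_root below)
    for sub in ('train/input', 'train/ref', 'test/CUFED5'):
        path = data_root / sub
        print(f'{path}: {"found" if path.is_dir() else "MISSING"}')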
    

Get Started

  1. Clone this repo
    git clone https://github.com/Jia-Research-Lab/MASA-SR.git
    cd MASA-SR
    
  2. Download the dataset. Modify the argument --data_root in test.py and train.py according to your data path.

Evaluation

  1. Download the pre-trained models and place them into the pretrained_weights/ folder

    • Pre-trained models can be downloaded from Google Drive
      • masa_rec.pth: trained with only reconstruction loss
      • masa.pth: trained with all losses
  2. Run test.sh. See test.sh for more details (if you are using the CPU, add --gpu_ids -1 to the command)

    sh test.sh
    
  3. The testing results are in the test_results/ folder
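
For a quick quantitative sanity check of the outputs, PSNR against the ground-truth HR images can be computed as below (a minimal sketch; the file paths are hypothetical, since the exact layout under test_results/ depends on the --name and --testset arguments used):

    import numpy as np
    from PIL import Image

    def psnr(a, b, peak=255.0):
        # Peak signal-to-noise ratio between two equally sized uint8 images
        mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
        return 10.0 * np.log10(peak ** 2 / mse)

    # Hypothetical file names; replace with an actual output / ground-truth pair
    sr = np.array(Image.open('test_results/masa_TestSet/000_0.png').convert('RGB'))
    hr = np.array(Image.open('CUFED/test/CUFED5/000_0.png').convert('RGB'))
    print(f'PSNR: {psnr(sr, hr):.2f} dB')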

Training

  1. First, train masa-rec with only the reconstruction loss.
    python train.py --use_tb_logger --data_augmentation --max_iter 160 --loss_l1 --name train_masa_rec
    
  2. After obtaining masa-rec, train masa with all losses, initializing from the pretrained masa-rec (a sketch of the perceptual loss term is given after this list).
    python train.py --use_tb_logger --max_iter 50 --loss_l1 --loss_adv --loss_perceptual --name train_masa_gan --resume ./weights/train_masa_rec/snapshot/net_best.pth --resume_optim ./weights/train_masa_rec/snapshot/optimizer_G_best.pth --resume_scheduler ./weights/train_masa_rec/snapshot/scheduler_best.pth
    
  3. The training results are in the weights/ folder
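
For reference, the sketch below shows a VGG-based perceptual loss of the kind the --loss_perceptual flag in step 2 usually refers to; the choice of VGG-19 features, the truncation point, and the use of an L1 distance are assumptions for illustration, not necessarily what this repository implements:

    import torch.nn as nn
    from torchvision.models import vgg19

    class PerceptualLoss(nn.Module):
        # Distance between VGG-19 feature maps of the SR output and the ground truth.
        # The truncation index (most of the VGG-19 feature stack) is an assumption.
        def __init__(self, cutoff=35):
            super().__init__()
            features = vgg19(pretrained=True).features[:cutoff].eval()
            for p in features.parameters():
                p.requires_grad = False  # VGG acts as a fixed feature extractor
            self.features = features
            self.l1 = nn.L1Loss()

        def forward(self, sr, hr):
            # Inputs are assumed to be normalized the way the VGG weights expect
            return self.l1(self.features(sr), self.features(hr))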

Comments
  • SR_scale=2 has problems

    Hi, when I set the sr_scale from 4 to 2, I get an error: RuntimeError: Caught RuntimeError in replica 0 on device 4. RuntimeError: cuDNN error: CUDNN_STATUS_NOT_INITIALIZED

    It occurs in MASA_arch.py, line 122, at fea_L1 = self.blk_L1(self.act(self.conv_L1(x))). I wonder what is happening.

    opened by notorious-eric 2
  • Question of the downsample of encoder

    Dear author, the paper says that "The encoder consists of three building blocks – the second and third blocks halve the size of the feature maps with stride 2". However, the parameter n_blks = [4, 4, 4] in class MASA indicates that the second and third blocks apply 2^4 times downsampling by stride 2, which seems inconsistent with the paper. How should this be explained?

    opened by tiger990111 1
  • Code Optimization

    Hi, I found that this part of the code can be optimized. When idx_x1.size(0) is large, the loop takes a lot of time. https://github.com/dvlab-research/MASA-SR/blob/9f0cccb71beafa764dadc5984fc8751cc0740c79/models/archs/MASA_arch.py#L343-L350 Here is my reimplementation (a self-contained sketch of this repeat/tile pattern is included after this comments list):

    # repeat each y index diameter_x * s times (replaces the element-wise Python loop)
    ind_y = idx_y1.view(-1, 1).repeat(1, diameter_x * s).view(-1)
    # tile each row of x indices diameter_y * s times
    ind_x = idx_x1.unsqueeze(-1).repeat(1, 1, diameter_y * s).permute(0, 2, 1).contiguous().view(-1)
    
    opened by Wardwarf-Li 1
  • Quite bad results on the linked test data ?

    Hi!

    First of all, I have to say that I am quite impressed by the potential your work has shown. However, after downloading the test data linked in this repo (CUFED5) and running the test shell script with both of the pre-trained models made available, I was quite disappointed by the results:

    [attached screenshot of the SR results]

    (the reference images have been scaled down just to fit into the presentation, but I think they are of 500x332 resolution)

    I have not looked too much into the test script you provided, but I did look at the architecture proposed in the paper, and here are some observations/comments/questions:

    1. Shouldn't the reference image resolution be 4 times that of the LR input? (This does not seem to be the case with the data provided.)

    2. Currently, upon running the test shell script, the output resolution seems to be the same as the input, so is the shared implementation more like detail enhancement than SR?

    3. The input and reference look to be of the same quality, so technically I would not expect massive improvements in the SR result; however, if you could do something like a DSLR-based reference for a phone camera, it would be quite a novel and interesting experiment. (This is the goal I hope to achieve using your methodology.)

    4. More theoretically, what kind of results would you expect when the input and reference are completely mismatched?

    Extremely sorry for the many naive questions that I might have asked but I hope to hear your views on the same!

    opened by mylifeasazucchini 1
  • KeyError: 'RANK'

    Hello, author. When I test with the code, I get the following error message:

    PS D:\pythonProject\7_4\MASA-SR-main\MASA-SR-main> python test.py --resume './pretrained_weights/masa.pth' --testset TestSet_multi --name masa_TestSet_multi
    Traceback (most recent call last):
      File "test.py", line 95, in <module>
        main()
      File "test.py", line 76, in main
        init_dist()
      File "D:\pythonProject\7_4\MASA-SR-main\MASA-SR-main\train.py", line 22, in init_dist
        rank = int(os.environ['RANK'])
      File "D:\python3.7\lib\os.py", line 681, in __getitem__
        raise KeyError(key) from None
    KeyError: 'RANK'

    I hope you can reply to me, thanks.

    opened by CodeMadUser 0
  • Question about variables in code

    Thanks for your great work!

    I have some questions about the definition of variables in MASA_arch.py, in the forward of class MASA:

    1. What do k_x and k_y mean, and how are they different from self.lr_block_size?
    2. Does diameter_x mean the horizontal block size in Ref_LR?
    3. If k_x and k_y mean the block size, why use a kernel_size of (k_x+2, k_y+2) when unfolding the LR features in line 374?

    Looking forward to your reply and thanks in advance!

    opened by mrluin 0
  • the result is abnormal

    Thank you very much for your code and contribution, but I have some confusion. I used the data and weights you provided, and the result looks like the attached image (000_0); I want to ask whether there is a problem with it.

    opened by Winnie202 2
  • Error : in MASA_arch.py

    File "RefSR/MASA-SR/models/archs/MASA_arch.py", line 435, in forward
      warp_ref_patches_x1 = warp_ref_patches_x1.view(N, C, H, W)
    RuntimeError: shape '[1, 64, 228, 228]' is invalid for input of size 3211264

    It keeps raising this error.

    Do you know why this occurs?

    opened by wjdghakswp0866 1
  • Where to get Sun80 Dataset?

    Absolutely amazing work, thanks for sharing. Could you please tell me where you got the Sun80 dataset? I am having trouble finding it. Thanks in advance.

    opened by umutsuluhan 1
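
Regarding the code-optimization comment above, here is a self-contained sketch of the repeat/tile pattern it proposes, reduced to generic 1-D index tensors (the batch and diameter_* dimensions of the original code are omitted; this only illustrates that the vectorized construction matches the Python loop it replaces):

    import torch

    def index_pairs_loop(idx_y, idx_x):
        # Pair every y index with every x index using nested Python loops (slow for large inputs)
        ys, xs = [], []
        for y in idx_y.tolist():
            for x in idx_x.tolist():
                ys.append(y)
                xs.append(x)
        return torch.tensor(ys), torch.tensor(xs)

    def index_pairs_vectorized(idx_y, idx_x):
        # Same pairing built with repeat/view, without any Python loop
        ny, nx = idx_y.numel(), idx_x.numel()
        ind_y = idx_y.view(-1, 1).repeat(1, nx).view(-1)  # each y repeated nx times
        ind_x = idx_x.view(1, -1).repeat(ny, 1).view(-1)  # the x row tiled ny times
        return ind_y, ind_x

    idx_y, idx_x = torch.arange(8), torch.arange(6)
    y1, x1 = index_pairs_loop(idx_y, idx_x)
    y2, x2 = index_pairs_vectorized(idx_y, idx_x)
    assert torch.equal(y1, y2) and torch.equal(x1, x2)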