[ICCV 2021] Learning A Single Network for Scale-Arbitrary Super-Resolution


ArbSR

PyTorch implementation of "Learning A Single Network for Scale-Arbitrary Super-Resolution", ICCV 2021

[Project] [arXiv]

Highlights

  • A plug-in module that extends a baseline SR network (e.g., EDSR or RCAN) to a scale-arbitrary SR network at a small additional computational and memory cost.
  • Promising results for scale-arbitrary SR (both non-integer and asymmetric scale factors) while maintaining state-of-the-art performance for SR with integer scale factors.

Demo

[demo GIF]

Motivation

Although recent CNN-based single-image SR networks (e.g., EDSR, RDN, and RCAN) have achieved promising performance, they are developed for image SR with a single specific integer scale factor (e.g., x2, x3, x4). In real-world applications, non-integer SR (e.g., from 100x100 to 220x220) and asymmetric SR (e.g., from 100x100 to 220x420) are also needed so that users can zoom in on an image arbitrarily for a better view of its details.

Overview

[overview figure]

Requirements

  • Python 3.6
  • PyTorch == 1.1.0
  • numpy
  • scikit-image (skimage)
  • imageio
  • opencv-python (cv2)

Train

1. Prepare training data

1.1 Download DIV2K training data (800 training images) from DIV2K dataset or SNU_CVLab.

1.2 Change to the ./utils directory and run gen_training_data.m in MATLAB to prepare HR/LR images in your_data_path with the following layout:

your_data_path
└── DIV2K
	├── HR
		├── 0001.png
		├── ...
		└── 0800.png
	└── LR_bicubic
		├── X1.10
			├── 0001.png
			├── ...
			└── 0800.png
		├── ...
		└── X4.00_X3.50
			├── 0001.png
			├── ...
			└── 0800.png

2. Begin to train

Run ./main.sh to train on the DIV2K dataset. Please set dir_data in the bash file to your_data_path.
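
Before launching training, it can help to confirm that your_data_path matches the layout above, since a misconfigured dir_data is an easy source of errors. Below is a minimal sanity-check sketch; the check_div2k_layout helper is not part of the repository, only an illustration of the expected structure:

from pathlib import Path

def check_div2k_layout(data_root):
    # Hypothetical helper: verify the prepared DIV2K folders before training.
    hr_dir = Path(data_root) / "DIV2K" / "HR"
    lr_root = Path(data_root) / "DIV2K" / "LR_bicubic"
    assert hr_dir.is_dir(), f"missing HR folder: {hr_dir}"
    assert lr_root.is_dir(), f"missing LR_bicubic folder: {lr_root}"
    hr_names = sorted(p.name for p in hr_dir.glob("*.png"))
    assert len(hr_names) == 800, f"expected 800 HR images, found {len(hr_names)}"
    # Every scale folder (X1.10, ..., X4.00_X3.50) should mirror the HR file names.
    for scale_dir in sorted(d for d in lr_root.iterdir() if d.is_dir()):
        lr_names = sorted(p.name for p in scale_dir.glob("*.png"))
        assert lr_names == hr_names, f"HR/LR file mismatch in {scale_dir.name}"

if __name__ == "__main__":
    check_div2k_layout("your_data_path")  # replace with the path used as dir_data
    print("DIV2K layout looks OK")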

Test

1. Prepare test data

1.1 Download benchmark datasets (e.g., Set5, Set14 and other test sets).

1.2 Change to the ./utils directory and run gen_test_data.m in MATLAB to prepare HR/LR images in your_data_path with the following layout:

your_data_path
└── benchmark
	├── Set5
		├── HR
			├── baby.png
			├── ...
			└── woman.png
		└── LR_bicubic
			├── X1.10
				├── baby.png
				├── ...
				└── woman.png
			├── ...
			└── X4.00_X3.50
				├── baby.png
				├── ...
				└── woman.png
	├── Set14
	├── B100
	├── Urban100
	└── Manga109
		├── HR
			├── AisazuNihalrarenai.png
			├── ...
			└── YouchienBoueigumi.png
		└── LR_bicubic
			├── X1.10
				├── AisazuNihalrarenai.png
				├── ...
				└── YouchienBoueigumi.png
			├── ...
			└── X4.00_X3.50
				├── AisazuNihalrarenai.png
				├── ...
				└── YouchienBoueigumi.png

2. Begin to test

Run ./test.sh to test on the benchmark datasets. Please set dir_data in the bash file to your_data_path.

Quick Test on an LR Image

Run ./quick_test.sh to enlarge an LR image to an arbitrary size. Please set dir_img in the bash file to your_img_path.
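
The prepared scale range above runs from X1.10 up to X4.00_X3.50, and quick_test.py asserts that the requested scale (at least along the height) lies in (1, 4] (see the Assertion Error comment below). A quick pre-check like the following can avoid that assertion; the variable names and imageio usage here are illustrative, not the repository's interface:

import imageio

lr = imageio.imread("your_img_path")      # H x W x C low-resolution input
target_h, target_w = 220, 420             # desired output size (example values)

# quick_test.py rejects scale factors outside (1, 4], so check both axes first.
scale_h = target_h / lr.shape[0]
scale_w = target_w / lr.shape[1]
for axis, s in (("height", scale_h), ("width", scale_w)):
    assert 1 < s <= 4, f"{axis} scale x{s:.2f} is outside the supported range (1, 4]"

print(f"requested scales: x{scale_h:.2f} (H), x{scale_w:.2f} (W) -- within range")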

Visual Results

1. SR with Symmetric Scale Factors

[visual results for non-integer symmetric scale factors]

2. SR with Asymmetric Scale Factors

[visual results for asymmetric scale factors]

3. SR with Continuous Scale Factors

Please try our interactive viewer.

Citation

@InProceedings{Wang2020Learning,
  title={Learning A Single Network for Scale-Arbitrary Super-Resolution},
  author={Longguang Wang and Yingqian Wang and Zaiping Lin and Jungang Yang and Wei An and Yulan Guo},
  booktitle={ICCV},
  year={2021}
}

Acknowledgements

This code is built on EDSR (PyTorch) and Meta-SR. We thank the authors for sharing their code.

Comments
  • about train

    I don't know what's going on. I trained exactly as described in the README, but the results I get are particularly bad: the PSNR is only about 10.

    opened by CCjiahao 2
  • Add Docker environment & web demo

    Hey @LongguangWang ! 👋

    This pull request makes it possible to run your model inside a Docker environment, which makes it easier for other people to run it. We're using an open source tool called Cog to make this process easier.

    This also means we can make a web page where other people can try out your model! View the model demo here: https://replicate.ai/longguangwang/arbsr (While implementing the model for Replicate, we realised that when the width or height of the input image is an odd number, nn.AvgPool2d(2) in class SA_adapt (line 162 in model/arbrcan.py) brings the size to an even number, resulting in an error caused by a size mismatch between the mask and the input, so we cropped the input to the closest even size; a sketch of this crop appears after the comments below.)

    Do claim your page here so you can own the page, customise the Example gallery as you like, and we'll feature it on our website and tweet about it too.

    In case you're wondering who I am, I'm from Replicate, where we're trying to make machine learning reproducible. We got frustrated that we couldn't run all the really interesting ML work being done. So, we're going round implementing models we like. 😊

    opened by chenxwh 1
  • Assertion Error

    File "quick_test.py", line 28:
        assert args.sr_size[0] / lr.size(2) > 1 and args.sr_size[0] / lr.size(2) <= 4
    AssertionError

    Can you help me fix the above issue, which I get when I try quick_test.py? Thank you!

    opened by anandhiray 0
  • Cannot get the objective quality (PSNR/SSIM)

    Hi, Longguang,

    Thank you for providing the source code and the pre-trained model. It is wonderful and elegant work. However, when I use the pre-trained model, or a model retrained with your training code, to test on the Set5 dataset, there are some problems as follows:

    1. Compared with the bicubic method, the pre-trained or retrained model does not achieve good objective quality in terms of PSNR and SSIM. This is for X2 super-resolution; the related data is attached. I don't know why this result is so strange.

    [PSNR/SSIM comparison table]

    2. It should be mentioned that the subjective quality of the super-resolved images from the pre-trained or retrained model is obviously better than that of bicubic.

    3. In addition, I also tested these models on some other (non-public) images, and the results are similar to the above: the traditional quality metrics perform poorly, while the subjective quality is better than bicubic.

    As a result, I am confused about why this happens. I sincerely hope I can get your help at your convenience.

    Regards, Bolin

    opened by Berlin0610 1
  • doubt regarding computation of LR space and HR space

    Hi, thanks for such interesting work. Can you please tell me why we need to add and subtract 0.5 while calculating the coordinates in LR space? (A worked sketch follows this list.)

    coor_h = ((coor_hr[0] + 0.5) / scale) - (torch.floor((coor_hr[0] + 0.5) / scale + 1e-3)) - 0.5

    opened by nan-rock 2
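
On the last comment above: in coor_h, the leading +0.5 converts an HR index into a pixel-center coordinate before dividing by the scale, the floor(... + 1e-3) picks the LR pixel containing that center (the 1e-3 guards against floating-point round-down), and the trailing -0.5 re-centers the fractional remainder so that an HR pixel whose center coincides with an LR pixel center gets a relative offset of 0 rather than 0.5. A small standalone sketch of the same computation (not the repository code):

import torch

scale = 2.0
coor_hr = torch.arange(8, dtype=torch.float32)             # HR row indices 0..7

# Map HR pixel centers into LR coordinates, then keep only the offset
# relative to the center of the LR pixel that contains each of them.
center_lr = (coor_hr + 0.5) / scale                        # HR pixel centers in LR space
coor_h = center_lr - torch.floor(center_lr + 1e-3) - 0.5   # relative offsets in [-0.5, 0.5)

print(coor_h)  # for x2: alternating -0.25 and +0.25, the two HR rows inside each LR pixel

Without the trailing -0.5 the offsets would fall in [0, 1) and an HR pixel aligned with an LR pixel center would map to 0.5 instead of 0; without the leading +0.5 the comparison would be between pixel indices rather than pixel centers, shifting everything by half a pixel.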
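
On the Docker/web-demo comment: when the LR input has an odd height or width, nn.AvgPool2d(2) inside SA_adapt produces a mask whose size no longer matches the input, so the Replicate demo crops the input to the nearest even size first. A minimal sketch of that workaround (the crop_to_even helper is illustrative, not part of the repository):

import torch

def crop_to_even(lr: torch.Tensor) -> torch.Tensor:
    # Crop a [B, C, H, W] tensor so that H and W are both even.
    _, _, h, w = lr.shape
    return lr[:, :, : h - (h % 2), : w - (w % 2)]

lr = torch.rand(1, 3, 101, 75)    # odd-sized LR input
lr_even = crop_to_even(lr)
print(lr_even.shape)              # torch.Size([1, 3, 100, 74])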