The implementation of the ICASSP 2020 paper "Pixel-level Self-Paced Learning for Super-Resolution"

Overview

Pixel-level Self-Paced Learning for Super-Resolution

This is an official implementation of the paper Pixel-level Self-Paced Learning for Super-Resolution, which has been accepted by ICASSP 2020.

[arxiv][PDF]

Trained model files: Baidu Pan (code: v0be)

Requirements

This code is forked from thstkdgus35/EDSR-PyTorch. In light of its README, the following libraries are required:

  • Python 3.6+ (Python 3.7.0 in my experiments)
  • PyTorch >= 1.0.0 (1.1.0 in my experiments)
  • numpy
  • skimage
  • imageio
  • matplotlib
  • tqdm

Core Parts

PSPL framework

The detailed code can be found in Loss.forward, which can be simplified as follows:

# Take the L1 loss as an example.

import torch
import torch.nn as nn
from . import pytorch_ssim

class Loss(nn.modules.loss._Loss):
    def __init__(self, spl_alpha, spl_beta, spl_maxVal):
        super(Loss, self).__init__()
        self.loss = nn.L1Loss()
        self.alpha = spl_alpha    # controls how fast sigma grows with the step
        self.beta = spl_beta      # initial sigma offset
        self.maxVal = spl_maxVal  # upper bound of the attention weight

    def forward(self, sr, hr, step):
        # sigma grows linearly with the training step
        sigma = self.alpha * step + self.beta
        # Gaussian weighting function over the SSIM index
        gauss = lambda x: torch.exp(-((x + 1) / sigma) ** 2) * self.maxVal
        # per-pixel SSIM map between the HR target and the SR prediction
        ssim = pytorch_ssim.ssim(hr, sr, reduction='none').detach()
        # pixel-level attention weights (no gradient flows through them)
        weight = gauss(ssim).detach()
        nsr, nhr = sr * weight, hr * weight
        # L1 loss on the re-weighted images
        lossval = self.loss(nsr, nhr)
        return lossval

The library pytorch_ssim is forked from Po-Hsun-Su/pytorch-ssim, with some details rewritten to adapt it to our requirements.
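Below is a minimal, hedged sketch (not the repository's actual training code, which lives in main.py) of how the Loss class above could be driven by a global step counter. The stand-in model and random tensors are placeholders for illustration only; the hyper-parameter values mirror splalpha=0.3, splbeta=0, splval=2 from the argument dump quoted in the comments below.

import torch
import torch.nn as nn

# Assumes the Loss class shown above is importable (e.g. from the repo's loss module).
model = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1),
                      nn.Upsample(scale_factor=4))      # stand-in SR model, not EDSR
criterion = Loss(spl_alpha=0.3, spl_beta=0.0, spl_maxVal=2.0)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for step in range(1, 11):                               # start at 1 so sigma > 0 when beta = 0
    lr_img = torch.rand(1, 3, 48, 48)                   # fake low-resolution batch
    hr_img = torch.rand(1, 3, 192, 192)                 # fake high-resolution target
    sr_img = model(lr_img)
    loss = criterion(sr_img, hr_img, step)              # step drives sigma = alpha * step + beta
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()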

Attention weight values change according to the SSIM index and the training step (figure: attention values); the small sketch after this paragraph illustrates the trend.
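As a rough illustration (not part of the repository), the following sketch evaluates the same Gaussian weighting used in Loss.forward for a few SSIM values and step counts; the hyper-parameters again mirror splalpha=0.3, splbeta=0, splval=2 from the argument dump quoted in the comments below.

# Illustrative sketch only: evaluate the Gaussian attention weight from Loss.forward
# for a few SSIM values and training steps.
import torch

alpha, beta, max_val = 0.3, 0.0, 2.0

def weight(ssim, step):
    sigma = alpha * step + beta
    return torch.exp(-((ssim + 1) / sigma) ** 2) * max_val

ssim_values = torch.tensor([0.2, 0.5, 0.8, 0.95])
for step in (1, 10, 100):
    print(step, [round(v, 4) for v in weight(ssim_values, step).tolist()])

# For a fixed step, pixels with lower SSIM (harder pixels) receive larger weights;
# as the step grows, sigma grows and every weight approaches max_val.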

Citation

If you find this project useful for your research, please cite:

@inproceedings{lin2020pixel,
  title={Pixel-Level Self-Paced Learning For Super-Resolution},
  author={Lin, Wei and Gao, Junyu and Wang, Qi and Li, Xuelong},
  booktitle={ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  year={2020},
  pages={2538-2542}
}

Comments
  • Could you please specify the usage of your code? THX

    Hi Elin,

    Thanks for your work. I have recently been trying to reproduce the results of your method. Could you describe the workflow of your paper, so that it is more convenient for others to use?

    Thanks!

    opened by JustinAsdz 4
  • Question regarding the effect of the similarity map

    I was curious as to what the effect of the similarity map was, so I added a few lines of code to the forward function of the Loss class, to write out sr[0] and hr[0] patches before and after multiplication by weight=gauss(ssim).detach(), for batch=1 of each epoch. My training command was:

    python main.py --model EDSR --scale 4 --data_test Set5+Set14+B100+Urban100+DIV2K --n_GPUs 1 --epochs 300
    

    For clarification, all arguments are:

    Namespace(G0=64, RDNconfig='B', RDNkSize=3, act='relu', batch_size=16, betas=(0.9, 0.999), chop=False, cpu=False, data_range='1-800/801-900', data_test=['Set5', 'Set14', 'B100', 'Urban100', 'DIV2K'], data_train=['DIV2K'], debug=False, decay='200', dilation=False, dir_data='../x_imagedata', dir_demo='../test', disable_PSPL=False, epochs=300, epsilon=1e-08, ext='sep', extend='.', gamma=0.5, gan_k=1, gclip=0, load='', loss='1*L1', lr=0.0001, model='EDSR', momentum=0.9, n_GPUs=1, n_colors=3, n_feats=64, n_layers=8, n_resblocks=16, n_resgroups=10, n_threads=6, negative_slope=0.2, no_augment=False, optimizer='ADAM', patch_size=192, pre_train='', precision='single', print_every=250, reduction=16, res_scale=1, reset=False, resume=0, rgb_range=255, save='EDSR_04-08_22-15-40', save_gt=False, save_models=False, save_results=False, scale=[4], seed=1, self_ensemble=False, shift_mean=True, skip_threshold=100000000.0, splalpha=0.3, splbeta=0, split_batch=1, splval=2, template='.', test_every=1000, test_only=False, weight_decay=0)
    

    My evaluation results were:

      [Set5 x4]     PSNR: 32.076 (Best: 32.134 @epoch 268)  ssim=0.896102
      [Set14 x4]    PSNR: 28.535 (Best: 28.568 @epoch 267)  ssim=0.785463
      [B100 x4]     PSNR: 27.539 (Best: 27.547 @epoch 257)  ssim=0.743243
      [Urban100 x4] PSNR: 25.956 (Best: 25.961 @epoch 293)  ssim=0.785183
      [DIV2K x4]    PSNR: 28.897 (Best: 28.903 @epoch 257)  ssim=0.837567
    

    Here is what the images look like, as the epochs change, for just a few epochs. From left to right, these are sr[0], hr[0], sr[0]*weight, hr[0]*weight.

    (image: Picture1)

    Is this about what you'd expect? They just seemed a little noisier to me than, e.g., Figure 2 in the paper. I can also try the training commands that you used in #1; the x2 case is running now, and it looks like it will take about 2.5 days...

    opened by drcdr 1