edge-SR: Super-Resolution For The Masses

Overview

Citation

Pablo Navarrete Michelini, Yunhua Lu, and Xingqun Jiang. "edge-SR: Super-Resolution For The Masses", in IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2022.

BibTeX

@inproceedings{eSR,
    title     = {edge--{SR}: Super--Resolution For The Masses},
    author    = {Navarrete~Michelini, Pablo and Lu, Yunhua and Jiang, Xingqun},
    booktitle = {Proceedings of the {IEEE/CVF} Winter Conference on Applications of Computer Vision ({WACV})},
    month     = {January},
    year      = {2022},
    pages     = {1078--1087},
    url       = {https://arxiv.org/abs/2108.10335}
}

Instructions:

  • Place input images in the input directory (provided as an empty directory). Color images will be converted to grayscale.

  • To upscale images, run: python run.py.

    Output images will be written to the output directory.

  • The GPU number and model file can be changed in run.py (look for the comment "CHANGE HERE"). A minimal sketch of the upscaling flow is shown after this list.
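The following is a minimal sketch of that upscaling flow, not the exact contents of run.py. It assumes the pretrained model is a serialized PyTorch module that maps a 1x1xHxW grayscale tensor in [0, 1] to an upscaled tensor of the same layout; the model file name used here is hypothetical and only illustrates the naming scheme.

# Minimal sketch of the upscaling flow (not the exact contents of run.py).
# Assumptions: the pretrained model is a serialized PyTorch module taking a
# 1x1xHxW grayscale tensor in [0, 1]; the model file name below is hypothetical.
import numpy as np
import torch
from PIL import Image

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
model = torch.load('model-files/eSR-TR_s2_K3_C4.model', map_location=device)
model.eval()

img = Image.open('input/example.png').convert('L')       # color -> grayscale
x = torch.from_numpy(np.array(img)).float().div(255.)    # HxW in [0, 1]
x = x.unsqueeze(0).unsqueeze(0).to(device)                # -> 1x1xHxW

with torch.no_grad():
    y = model(x).clamp(0., 1.)

out = Image.fromarray((y[0, 0].cpu().numpy() * 255.).round().astype(np.uint8))
out.save('output/example.png')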

Requirements:

  • Python 3, PyTorch, NumPy, Pillow, OpenCV

Experiment results

  • The data directory contains the file tests.pkl, a Python dictionary with all of our test results on different devices. The following sample code shows how to read the file:
>>> import pickle
>>> test = pickle.load(open('tests.pkl', 'rb'))
>>> test['Bicubic_s2']
    {'psnr_Set5': 33.72849620514912,
     'ssim_Set5': 0.9283912810369976,
     'lpips_Set5': 0.14221979230642318,
     'psnr_Set14': 30.286027790636204,
     'ssim_Set14': 0.8694934108301432,
     'lpips_Set14': 0.19383049915943826,
     'psnr_BSDS100': 29.571233006609656,
     'ssim_BSDS100': 0.8418117904964167,
     'lpips_BSDS100': 0.26246454380452633,
     'psnr_Urban100': 26.89378248655882,
     'ssim_Urban100': 0.8407461069831571,
     'lpips_Urban100': 0.21186692919582129,
     'psnr_Manga109': 30.850672809780587,
     'ssim_Manga109': 0.9340133711400112,
     'lpips_Manga109': 0.102985977955641,
     'parameters': 104,
     'speed_AGX': 18.72132628065749,
     'power_AGX': 1550,
     'speed_MaxQ': 632.5429857814075,
     'power_MaxQ': 50,
     'temperature_MaxQ': 76,
     'memory_MaxQ': 2961,
     'speed_RPI': 11.361346064182795,
     'usage_RPI': 372.8714285714285}

The keys of the dictionary identify the name of each model and its hyper-parameters using the following format:

  • Bicubic_s#,
  • eSR-MAX_s#_K#_C#,
  • eSR-TM_s#_K#_C#,
  • eSR-TR_s#_K#_C#,
  • eSR-CNN_s#_C#_D#_S#,
  • ESPCN_s#_D#_S#, or
  • FSRCNN_s#_D#_S#_M#,

where # represents an integer giving the value of the corresponding hyper-parameter. For each model, the dictionary contains a second dictionary with the information shown above: the number of model parameters; the image-quality metrics PSNR, SSIM, and LPIPS measured on 5 different datasets; and the power, speed, CPU usage, temperature, and memory usage for the devices AGX (Jetson AGX Xavier), MaxQ (GTX 1080 MaxQ), and RPI (Raspberry Pi 400).
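As a hedged example of working with these keys, the snippet below parses the scale factor from each model name, keeps the x2 models, and ranks them by PSNR on Set5. It only assumes the key layout shown in the sample above; whether every field is present for every model is an assumption, hence the .get() fallbacks.

# Sketch: rank all x2 models in tests.pkl by PSNR on Set5 and print a few of
# the other fields. Assumes the key layout shown above; the presence of every
# field for every model is an assumption, hence the .get() fallbacks.
import pickle

with open('tests.pkl', 'rb') as f:
    tests = pickle.load(f)

def scale_of(name):
    # The upscaling factor is encoded as the lowercase token 's#'.
    for token in name.split('_'):
        if token.startswith('s') and token[1:].isdigit():
            return int(token[1:])
    return None

x2 = {name: res for name, res in tests.items() if scale_of(name) == 2}
for name, res in sorted(x2.items(), key=lambda kv: kv[1]['psnr_Set5'], reverse=True):
    print(f"{name:24s}  PSNR(Set5)={res['psnr_Set5']:.2f}  "
          f"params={res.get('parameters', 'n/a')}  "
          f"speed_RPI={res.get('speed_RPI', 'n/a')}")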


Comments
  • How to output color images

    Firstly, thank you for the great work!

    I have tried the demo and it produces results very quickly, but the output is grayscale.

    If I want color output, do I need new model files?

    If yes, do you plan to release them? Or can you tell me how to obtain them?

    Thank you in advance.

    opened by Zhaoxuyu 2
  • Question about the training dataset

    Thanks for the wonderful paper. I'd like to ask the following questions to better understand your algorithm.

    1. Do we need a validation set during training? If yes, where does it come from?
    2. Do all of the training and measurement processes use grayscale images?

    Thank you so much in advance.

    opened by haodongyu 1