Investigating Loss Functions for Extreme Super-Resolution (CVPR 2020)

Overview

Investigating Loss Functions for Extreme Super-Resolution

NTIRE 2020 Perceptual Extreme Super-Resolution Submission.

Our method ranked first and second in the PI and LPIPS measures, respectively.

[Paper]

Dependencies

  • Python 3.6
  • PyTorch 1.2
  • numpy
  • pillow
  • tqdm

Test

  1. Clone this repo.

git clone https://github.com/kingsj0405/ciplab-NTIRE-2020

  2. Download the pre-trained model and place it at ./model.pth.

  3. Place low-resolution input images in ./input.

  4. Run.

python test.py

If you run out of GPU memory, try the option -n 3 or a larger number.

  5. Check your results in ./output.
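When memory is tight, the standard workaround in super-resolution inference is to split the input into tiles, upscale each tile, and stitch the results back together; the -n option presumably does something along these lines. A toy sketch of the technique (this is not the repo's actual code: upscale is a nearest-neighbour stand-in for the real network, and the real -n behaviour may differ):

```python
# Tile-based inference for limited GPU memory: split the input into an
# n-by-n grid, upscale each tile independently, stitch the results.
import numpy as np

def upscale(tile, scale=4):
    # Placeholder for the super-resolution network.
    return tile.repeat(scale, axis=0).repeat(scale, axis=1)

def tiled_upscale(img, n=3, scale=4):
    h, w = img.shape
    rows = []
    for ys in np.array_split(np.arange(h), n):
        row = [upscale(img[np.ix_(ys, xs)], scale)
               for xs in np.array_split(np.arange(w), n)]
        rows.append(np.concatenate(row, axis=1))
    return np.concatenate(rows, axis=0)

img = np.arange(36.0).reshape(6, 6)
out = tiled_upscale(img, n=3)   # same result as upscaling in one shot,
assert out.shape == (24, 24)    # but each network call sees a small tile
```

Because this toy upscaler is purely local, the stitched output matches the one-shot output exactly; a real network needs overlapping tiles to avoid seams at tile borders, which is why larger -n values trade memory for a bit of extra compute.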

Train

  1. Clone this repo.

git clone https://github.com/kingsj0405/ciplab-NTIRE-2020

  2. Prepare training PNG images in ./train.

  3. Prepare validation PNG images in ./val.

  4. Open train.py and modify the user parameters at line 22.

  5. Run.

python train.py

If you run out of GPU memory, try a smaller batch size or patch size.
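Per-step memory grows with both the batch size and the patch size, because each training step only sees random aligned LR/HR crops rather than whole images. A minimal sketch of paired cropping (pure Python; the function and variable names are illustrative, not the repo's actual loader):

```python
# Paired random cropping for SR training: a smaller lr_patch (or fewer
# patches per batch) directly shrinks the per-step memory footprint.
import random

def paired_crop(lr, hr, lr_patch=48, scale=4):
    """lr/hr are 2D lists of pixels; hr is scale-times larger than lr."""
    h, w = len(lr), len(lr[0])
    y = random.randrange(h - lr_patch + 1)
    x = random.randrange(w - lr_patch + 1)
    lr_crop = [row[x:x + lr_patch] for row in lr[y:y + lr_patch]]
    # The HR crop starts at scale-times the LR coordinates, so the two
    # crops cover exactly the same image region.
    Y, X, P = y * scale, x * scale, lr_patch * scale
    hr_crop = [row[X:X + P] for row in hr[Y:Y + P]]
    return lr_crop, hr_crop

# Toy images whose "pixels" record their own coordinates.
lr = [[(r, c) for c in range(64)] for r in range(64)]
hr = [[(r, c) for c in range(256)] for r in range(256)]
lr_crop, hr_crop = paired_crop(lr, hr)
assert len(lr_crop) == 48 and len(hr_crop) == 192
# Alignment check: the HR crop starts at 4x the LR crop's coordinates.
assert hr_crop[0][0] == (lr_crop[0][0][0] * 4, lr_crop[0][0][1] * 4)
```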

BibTeX

@InProceedings{jo2020investigating,
   author = {Jo, Younghyun and Yang, Sejong and Kim, Seon Joo},
   title = {Investigating Loss Functions for Extreme Super-Resolution},
   booktitle = {The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
   month = {June},
   year = {2020}
}

External codes from

Comments
  • question about code

    In LPIPS/network_basic.py, at line 96, why is it:

        if(self.pnet_tune):
            self.net = net_type(pretrained=not self.pnet_rand, requires_grad=True)
        else:
            self.net = [net_type(pretrained=not self.pnet_rand, requires_grad=True),]

    rather than:

        if(self.pnet_tune):
            self.net = net_type(pretrained=not self.pnet_rand, requires_grad=True)
        else:
            self.net = [net_type(pretrained=not self.pnet_rand, requires_grad=False),]

    opened by scutlrr 3
  • Question about Discriminator Architecture

    Nice implementation! Why did you comment out all the lines related to skip connections in both DBlock and GBlock? Do plain conv layers perform better than residual conv blocks?

    opened by ferric123 1
  • Training weight problem

    Hello, we used your project code to test on our own data, and several problems occurred during training. Is the model fixed in test.py? We also have a question about how train.py stores data: do we need to swap the data locations and run train.py twice?

    opened by xiaoyi-st 1
  • Dataset load question

    I am using the DIV2K training set (800 images) to train the model, but the code cannot load the dataset normally; it always stops in the state below. Why? (Windows 10)

    0%| | 0/150000 [00:00<?, ?it/s] Found 800 training images ===> Training start

    opened by duweihua 1
  • Controlling the training direction

    I'm extremely impressed with your results; they're much better than anything I've used before. Do you think this project could also suit other restoration tasks, such as deblurring or noise removal? The readme says the training procedure requires putting all pictures into one folder (i.e., ./train) and running train.py, but the original ESRGAN required at least two folders (LR and HR) to guide the training in the desired direction (A-to-B / LR-to-HR logic). From experience I learned that it was possible to guide it to fix other kinds of problems simply by putting the problematic pictures in the LR folder and the desired outcome in the HR folder; even though it wasn't originally made for many of those very specific problems (e.g., artifacts), it did a pretty good job on them. So I was wondering whether this project could potentially do an even better job here? Any opinions?

    opened by AndroYD84 1
  • LPips loss lambda value

    Hello,

    Really enjoyed your paper, a very interesting loss, guys!

    Your paper states that the LPIPS loss scale parameter is 1e-6, while in the source code, on line 41 of train.py, it is set to 1e-3.

    I wondered whether this code was used for continued training, or whether this is an error in the code or the paper?

    best regards

    opened by pepinu 0
  • About the loss_LPIPSs

    Thank you for your excellent work on super-resolution. I am very interested in the novel perceptual loss. In the project, loss_LPIPSs is defined as follows: loss_LPIPSs, _ = model_LPIPS.forward_pair(batch_H * 2 - 1, batch_S * 2 - 1). I am a little puzzled by the "batch_* * 2 - 1"; why is it used like this? Thank you very much for your help.

    opened by lixinghpu 0
  • Loss Curves

    Could you please share your training loss curves?

    I am trying to replicate the results but with no success so far. Below are my training loss curves from two different runs; could you please comment on whether something is off with them?

    Cutmix MSE loss: [image]

    l_d_fake = loss_D_Enc_S + loss_D_Dec_S: [image]

    l_d_real = loss_D_Enc_H + loss_D_Dec_H: [image]

    Feature loss: [image] [image]

    opened by sarvghotra 1
  • About train parameters

    Thanks, author! This is great work! I have some problems with the original code. First: in training step 4, "Open train.py and modify user parameters in L22", what does L22 mean? Second: with the original code I obtained model_G_i150000.pth, but the test result (Set5) is far from the paper. [screenshot] Is something wrong with L22?

    opened by flybiubiu 6
  • Abort Validate

    This is a nice project, thanks! But when I train it with the original code (the validation images are 0150.png, 0300.png, 0450.png, ..., not 1501.png, 1502.png, ...):

    return torch._C._nn.upsample_nearest2d(input, output_size, sfl[0], sfl[1])
    RuntimeError: CUDA out of memory. Tried to allocate 7.08 GiB (GPU 0; 11.93 GiB total capacity; 4.87 GiB already allocated; 5.78 GiB free; 5.57 GiB reserved in total by PyTorch)
    0%| | 99/150000 [01:31<38:27:38, 1.08it/s]

    The paper used a TITAN Xp (12 GB), and I also have 12 GB. Is something wrong, or do I really need a GPU with more memory? (This only happens in the validation stage.) Thanks again!

    opened by flybiubiu 1