[CVPR'20] TTSR: Learning Texture Transformer Network for Image Super-Resolution

TTSR

Official PyTorch implementation of the paper Learning Texture Transformer Network for Image Super-Resolution accepted in CVPR 2020.

Contents

  • Introduction
  • Approach overview
  • Main results
  • Requirements and dependencies
  • Model
  • Quick test
  • Dataset prepare
  • Evaluation
  • Train
  • Citation
  • Contact

Introduction

We propose an approach named TTSR for the reference-based super-resolution (RefSR) task. Compared with single image super-resolution (SISR), RefSR has an extra high-resolution reference image whose textures can be utilized to help super-resolve the low-resolution input.

Contribution

  1. We are among the first to introduce the transformer architecture into image generation tasks. More specifically, we propose a texture transformer with four closely related modules for image SR, which achieves significant improvements over SOTA approaches.
  2. We propose a novel cross-scale feature integration module for image generation tasks, which enables our approach to learn a more powerful feature representation by stacking multiple texture transformers.

Approach overview

Main results

Requirements and dependencies

  • python 3.7 (Anaconda is recommended)
  • python packages: pip install opencv-python imageio
  • pytorch >= 1.1.0
  • torchvision >= 0.4.0
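
A quick way to confirm the environment satisfies these requirements (a small sketch, not part of the repo):

    import sys
    import cv2, imageio          # provided by the opencv-python and imageio packages above
    import torch, torchvision

    # The repo expects python 3.7, pytorch >= 1.1.0 and torchvision >= 0.4.0.
    print("python     :", sys.version.split()[0])
    print("torch      :", torch.__version__)
    print("torchvision:", torchvision.__version__)
    print("CUDA available:", torch.cuda.is_available())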

Model

Pre-trained models can be downloaded from OneDrive, Baidu Cloud (code: 0u6i), or Google Drive.

  • TTSR-rec.pt: trained with only reconstruction loss
  • TTSR.pt: trained with all losses
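
The checkpoints appear to be plain state dicts (parameter-name to tensor mappings), so a downloaded model can be sanity-checked before editing test.sh (a minimal sketch; the local path is an assumption):

    import torch

    # Assumes TTSR.pt (or TTSR-rec.pt) has been downloaded into the repo root.
    state_dict = torch.load("./TTSR.pt", map_location="cpu")

    print("parameter tensors:", len(state_dict))
    for name, tensor in list(state_dict.items())[:5]:
        print(name, tuple(tensor.shape))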

Quick test

  1. Clone this github repo
     git clone https://github.com/FuzhiYang/TTSR.git
     cd TTSR
  2. Download pre-trained models and modify "model_path" in test.sh
  3. Run test
     sh test.sh
  4. The results are in "save_dir" (default: ./test/demo/output)
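
Once test.sh finishes, the super-resolved result can be loaded back for a quick check (a sketch assuming the default save_dir; output filenames depend on the input):

    import os
    import imageio

    save_dir = "./test/demo/output"
    for fname in sorted(os.listdir(save_dir)):
        img = imageio.imread(os.path.join(save_dir, fname))
        print(fname, img.shape, img.dtype)   # a 4x SR result should be 4x the LR height and width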

Dataset prepare

  1. Download the CUFED train set and the CUFED test set
  2. Arrange the dataset structure as follows:
  • CUFED
    • train
      • input
      • ref
    • test
      • CUFED5
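
A small sketch to verify this layout before running evaluation or training (the dataset location is an assumption; point dataset_dir at wherever CUFED is stored):

    import os

    dataset_dir = "./CUFED"   # assumed location; must match "dataset_dir" in eval.sh / train.sh
    for sub in ["train/input", "train/ref", "test/CUFED5"]:
        path = os.path.join(dataset_dir, sub)
        n = len(os.listdir(path)) if os.path.isdir(path) else 0
        print(f"{path}: {'OK' if n else 'MISSING'} ({n} files)")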

Evaluation

  1. Prepare CUFED dataset and modify "dataset_dir" in eval.sh
  2. Download pre-trained models and modify "model_path" in eval.sh
  3. Run evaluation
     sh eval.sh
  4. The results are in "save_dir" (default: ./eval/CUFED/TTSR)
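
For reference, the PSNR numbers reported by eval.sh can be reproduced approximately from image pairs as below (a plain-numpy sketch, not the repo's evaluation code, which may differ in border cropping or color conversion):

    import numpy as np

    def psnr(sr: np.ndarray, hr: np.ndarray, max_val: float = 255.0) -> float:
        """Peak signal-to-noise ratio between two uint8 images of identical shape."""
        mse = np.mean((sr.astype(np.float64) - hr.astype(np.float64)) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

    # usage: psnr(imageio.imread(sr_path), imageio.imread(hr_path))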

Train

  1. Prepare CUFED dataset and modify "dataset_dir" in train.sh
  2. Run training
     sh train.sh
  3. The training results are in "save_dir" (default: ./train/CUFED/TTSR)
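
TTSR.pt is trained with all losses (reconstruction, perceptual, transferal perceptual, and adversarial), which are combined into a single weighted objective. The sketch below only illustrates the weighted sum; the weights and values are placeholders, not the repo's settings (those are defined in train.sh):

    import torch

    # Placeholder scalars standing in for the four loss terms computed by the trainer.
    rec_loss = torch.tensor(0.09, requires_grad=True)   # L1 reconstruction loss
    per_loss = torch.tensor(47.0, requires_grad=True)   # VGG perceptual loss
    tpl_loss = torch.tensor(1.19, requires_grad=True)   # transferal perceptual loss
    adv_loss = torch.tensor(0.05, requires_grad=True)   # adversarial (generator) loss

    # Illustrative weights only; use the values set in train.sh for real training.
    w_rec, w_per, w_tpl, w_adv = 1.0, 1e-2, 1e-2, 1e-3
    total_loss = w_rec * rec_loss + w_per * per_loss + w_tpl * tpl_loss + w_adv * adv_loss
    total_loss.backward()
    print(float(total_loss))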

Citation

@InProceedings{yang2020learning,
  author    = {Yang, Fuzhi and Yang, Huan and Fu, Jianlong and Lu, Hongtao and Guo, Baining},
  title     = {Learning Texture Transformer Network for Image Super-Resolution},
  booktitle = {CVPR},
  year      = {2020},
  month     = {June}
}

Contact

If you meet any problems, please describe them in the issues or contact the authors.

Comments
  • Reference images in Sun80 dataset

    Hi, thanks for sharing your great work! Your paper says that "Sun80 contains 80 natural images, each paired with several reference images". May I ask how you choose reference images for each input image when testing on the Sun80 dataset? I cannot tell which image is another image's reference from the file names. Also, since there are several reference images per image, do you randomly sample one reference or use all of them?

    opened by SkyeLu 11
  • Memory requirement details

    Running the test with 600x600 images for both LR and Ref, I get memory issues in CPU mode. It works successfully up to 500x500. Any idea what the issue is, or can you provide more details on the memory requirements?

    torch version 1.3.1. System available memory: 60 GB.

    RuntimeError: [enforce fail at CPUAllocator.cpp:64] . DefaultCPUAllocator: can't allocate memory: you tried to allocate 13271040000 bytes. Error code 12 (Cannot allocate memory)

    opened by trideeprath 10
  • Fail to replicate PSNR of CUFED on paper

    Hello, thanks for sharing your great work. I am trying to replicate the PSNR (27.09) in Table 1 of the paper. As shown in Table 4, the PSNR with the L1 reference of CUFED5 (*_1.png) is 26.99, so I assumed that 27.09 uses multiple references (*_1.png, *_2.png, *_3.png, *_4.png, *_5.png). Thus, I pad all the refs to size [500, 500] and vertically concatenate them into one big ref of [2500, 500]. However, no matter which padding type is used, the PSNR is around 26.4 and does not reach 27.09. Could you share some details to help?

    opened by wdmwhh 6
  • The eval results are a little lower than the paper's, why?

    [root@A01-R04-I220-17 TTSR]# sh eval.sh
    [2020-06-23 15:19:04,069] - [trainer.py file line:48] - INFO: load_model_path: ./TTSR-rec.pt
    [2020-06-23 15:19:04,143] - [trainer.py file line:121] - INFO: Epoch 0 evaluation process...
    [2020-06-23 15:20:40,578] - [trainer.py file line:150] - INFO: Ref PSNR (now): 26.991 SSIM (now): 0.8003
    [2020-06-23 15:20:40,580] - [trainer.py file line:158] - INFO: Ref PSNR (max): 26.991 (0) SSIM (max): 0.8003 (0)
    [2020-06-23 15:20:40,580] - [trainer.py file line:160] - INFO: Evaluation over.
    [root@A01-R04-I220-17 TTSR]# sh eval.sh
    [2020-06-23 15:16:24,885] - [trainer.py file line:48] - INFO: load_model_path: ./TTSR.pt
    [2020-06-23 15:16:24,948] - [trainer.py file line:121] - INFO: Epoch 0 evaluation process...
    [2020-06-23 15:17:21,531] - [trainer.py file line:150] - INFO: Ref PSNR (now): 25.402 SSIM (now): 0.7600
    [2020-06-23 15:17:21,532] - [trainer.py file line:158] - INFO: Ref PSNR (max): 25.402 (0) SSIM (max): 0.7600 (0)
    [2020-06-23 15:17:21,532] - [trainer.py file line:160] - INFO: Evaluation over.

    opened by robotzheng 6
  • Can you provide training log?

    Hi,

    First of all, I want to thank you for your work; the code is clean, easy to read, and well structured, just like the paper. Good job! I am trying to reimplement this model in TensorFlow, and I would like to have the training log file (train.log) to make sure the losses I get are consistent. I used the loss weights you specified in the repository, and after 20 epochs I have: reconstruction_loss: 0.0897 - transferal_perceptual_loss: 1.1900 - d_loss: -30.2180 - psnr: 25.3971 - ssim: 0.6679 - perceptual_loss: 47.2544 - adversarial_loss: 0.0534 - total_loss: 48.5875. As you can see, the losses are not at all in the same value range; since the perceptual loss represents more than 97% of the total loss, I doubt the other losses have a real impact during training, especially the adversarial loss, which is about 1e3 times smaller than the perceptual loss. It would be great if you could share your training log so that I can adjust the loss weights accordingly and run the same training with TensorFlow.

    Thank you very much,

    opened by oubathAI 5
  • Relevance embedding

    https://github.com/researchmm/TTSR/blob/2836600b20fd8f38e0f1550ab0b87c8d2a2bd276/model/SearchTransfer.py#L32-L33

    As I understand Equation 4 in the main paper, your relevance matrix computes a normalized inner product, r_{i,j} = <q_i, k_j>, where the query is from the up-sampled low-resolution image and the key is from the down/up-sampled reference image.

    My understanding matches the code below. Q1. Can you explain why your code does the opposite? (Usually, a transformer computes the scores as Q K^T.) A sketch of this normalized inner product appears after the comments list.

    R_lv3 = torch.bmm(lrsr_lv3_unfold, refsr_lv3_unfold) #[N, H*W, Hr*Wr] 
    R_lv3_star, R_lv3_star_arg = torch.max(R_lv3, dim=2) #[N, H*W]
    
    • Reference code from attention-is-all-you-need-PyTorch https://github.com/jadore801120/attention-is-all-you-need-pytorch/blob/132907dd272e2cc92e3c10e6c4e783a87ff8893d/transformer/Modules.py#L17
    opened by taeyeop-lee 4
  • question about using detach()

    Hi, I am confused: why do you use detach here? Since you have disabled gradients for self.vgg19, it shouldn't make any difference if you remove detach. https://github.com/researchmm/TTSR/blob/a3c618d011ef40b0f83004bf9bdbd545e1735ca7/trainer.py#L91 And here you do not use detach; what is the difference? https://github.com/researchmm/TTSR/blob/a3c618d011ef40b0f83004bf9bdbd545e1735ca7/trainer.py#L89 Thanks!

    opened by btwbtm 3
  • The PSNR on Urban100 is different from the paper?

    Thank you for sharing! I have trained RCAN on the CUFED5 training set, and the PSNR I got on Urban100 is 26.15 dB. In the paper, the PSNR of RCAN on Urban100 is 25.42 dB. When I trained RCAN, I just followed the settings in SRNTT. Maybe there are still some differences between our settings. Could you please tell me your detailed settings for training RCAN?

    opened by yangfan97 3
  • How to propagate relevance embedding?

    Thank you for the nice paper!

    In the paper, I saw

    To reduce the consumption of both time and GPU memory, the relevance embedding is only applied to the smallest scale and further propagated to other scales.

    I wonder how the relevance embedding can be propagated to other scales. Are Q and K upsampled to a higher resolution for computing r_{i,j}? Or is the similarity matrix r_{i,j} itself upsampled to a higher resolution in some way?

    Thanks

    opened by htzheng 3
  • run with multi-GPU

    Can this code run with multi-GPU?

    I got an error:

    [2021-05-28 17:07:27,724] - [trainer.py file line:53] - INFO: load_model_path: ./model_00048.pt
    Traceback (most recent call last):
      File "main.py", line 43, in <module>
        t.load(model_path=args.model_path)
      File "/home/10301007/TTSR_noref_3scale_knn5_rec/trainer.py", line 58, in load
        self.model.load_state_dict(model_state_dict)
      File "/home/10101011/anaconda3/envs/SRflow/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1052, in load_state_dict
        self.__class__.__name__, "\n\t".join(error_msgs)))
    RuntimeError: Error(s) in loading state_dict for DataParallel:
        Unexpected key(s) in state_dict: "MainNet.SFE.conv_head.weight", "MainNet.SFE.conv_head.bias", "MainNet.SFE.RBs.0.conv1.weight", ... (every MainNet.* and LTE.* parameter in the checkpoint is reported as an unexpected key).

    opened by xuboming8 2
  • Regarding low-resolution image restoration without using high-resolution images (ref)

    Hello, author! I would like to ask about testing: if I do not use the reference image (ref) and only input the low-resolution image (LR), can the image restoration still be completed? If so, I hope you can explain the specific method. Thanks!

    opened by 2805413893 2
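
Regarding the relevance-embedding question above, a minimal sketch of the normalized inner product in Eq. 4 of the paper (an illustration only, not the repo's exact implementation, whose tensor layout after unfolding may differ):

    import torch
    import torch.nn.functional as F

    # Eq. 4: r_{i,j} = < q_i/||q_i||, k_j/||k_j|| >, with q from the up-sampled LR image
    # and k from the down/up-sampled reference. Shapes below are assumptions (C*k*k = 27).
    q = F.normalize(torch.randn(1, 64, 27), dim=2)    # unfolded query patches, [N, H*W, C*k*k]
    k = F.normalize(torch.randn(1, 100, 27), dim=2)   # unfolded key patches,   [N, Hr*Wr, C*k*k]

    r = torch.bmm(q, k.transpose(1, 2))               # relevance matrix, [N, H*W, Hr*Wr]
    r_star, r_star_arg = torch.max(r, dim=2)          # hard attention: best-matching key per query
    print(r.shape, r_star.shape)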
Owner

Multimedia Research at Microsoft Research Asia