SRNTT: Image Super-Resolution by Neural Texture Transfer

TensorFlow implementation of the paper Image Super-Resolution by Neural Texture Transfer, accepted to CVPR 2019. This is a simplified version in which the reference images are used without augmentation (e.g., rotation and scaling).

Project Page

Pytorch Implementation

Pre-requisites

  • Python 3.6
  • TensorFlow 1.13.1
  • requests 2.21.0
  • pillow 5.4.1
  • matplotlib 3.0.2

Tested on macOS (Mojave).

Dataset

This repo only provides a small training set of ten input-reference pairs for demo purposes. The input images and reference images are stored in data/train/CUFED/input and data/train/CUFED/ref, respectively. Corresponding input and reference images share the same file name (a quick pairing check is sketched at the end of this section). To speed up the training process, patch matching and swapping are performed offline, and the swapped feature maps are saved to data/train/CUFED/map_321 (see offline_patchMatch_textureSwap.py for more details). If you want to train your own model, please prepare your own training set or download either of the following demo training sets:

11,485 input-reference pairs (size 320x320) extracted from DIV2K.

Each pair is extracted from the same image without overlap, with scaling and rotation taken into account.

$ python download_dataset.py --dataset_name DIV2K
11,871 input-reference pairs (size 160x160) extracted from CUFED.

Each pair is extracted from similar images, spanning five degrees of similarity.

$ python download_dataset.py --dataset_name CUFED

This repo includes one group of samples from the CUFED5 dataset, where each input image corresponds to five reference images (different from the paper) with different degrees of similarity to the input image. Please download the full dataset with

$ python download_dataset.py --dataset_name CUFED5
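
Since corresponding input and reference images must share the same file name, a custom training set can be sanity-checked before training. Below is a minimal sketch, assuming the demo layout described above (adjust the folder paths to your own data):

    import os

    def check_pairs(input_dir, ref_dir):
        # Verify that every input image has a reference image with the same file name.
        inputs = set(os.listdir(input_dir))
        refs = set(os.listdir(ref_dir))
        missing_refs = sorted(inputs - refs)
        missing_inputs = sorted(refs - inputs)
        if missing_refs:
            print('Inputs without a matching reference:', missing_refs)
        if missing_inputs:
            print('References without a matching input:', missing_inputs)
        return not (missing_refs or missing_inputs)

    # Paths follow the small demo set shipped with this repo.
    print(check_pairs('data/train/CUFED/input', 'data/train/CUFED/ref'))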

Easy Testing

$ sh test.sh

The results will be saved to the folder demo_testing_srntt, including the following six images:

  • [1/6] HR.png, the original image.

    Original image

  • [2/6] LR.png, the low-resolution (LR) image, downscaling factor 4x.

    LR image

  • [3/6] Bicubic.png, the image upscaled by bicubic interpolation, upscaling factor 4x (see the sketch after this list).

    Bicubic image

  • [4/6] Ref_XX.png, the reference images, indexed by XX.

    Reference image

  • [5/6] Upscale.png, the image upscaled by a pre-trained SR network, upscaling factor 4x.

    Upscaled image

  • [6/6] SRNTT.png, the SR result by SRNTT, upscaling factor 4x.

    Upscaled image
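
For reference, the LR and Bicubic images above can be reproduced from the HR image with a few lines of pillow. This is only a sketch of the 4x relationship described in the list; the file paths are examples, and the repo's own preprocessing may differ in details such as cropping:

    from PIL import Image

    factor = 4
    hr = Image.open('HR.png')  # example path; any HR image works

    # Crop so both sides are divisible by the scaling factor, then downscale 4x.
    w, h = hr.size
    hr = hr.crop((0, 0, w - w % factor, h - h % factor))
    lr = hr.resize((hr.width // factor, hr.height // factor), Image.BICUBIC)
    lr.save('LR.png')

    # Upscale the LR image back to the HR size with bicubic interpolation.
    bicubic = lr.resize(hr.size, Image.BICUBIC)
    bicubic.save('Bicubic.png')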

Custom Testing

$ python main.py 
    --is_train              False 
    --input_dir             path/to/input/image/file
    --ref_dir               path/to/ref/image/file
    --result_dir            path/to/result/folder
    --ref_scale             default 1, expected_ref_scale divided by original_ref_scale (see the worked example below)
    --is_original_image     default True, whether the input is the original image
    --use_init_model_only   default False, whether to use the init model, trained with reconstruction loss only
    --use_weight_map        default False, whether to use the weighted model, trained with the weight map
    --save_dir              path/to/a/specified/model if it exists, otherwise ignore this parameter
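
The --ref_scale flag is the least self-explanatory one. Based only on its description above (expected_ref_scale divided by original_ref_scale), and not verified against main.py, a worked reading is:

    # Hedged reading of --ref_scale (an assumption from the flag description, not the code):
    # if the reference image you provide is twice as large as the scale at which it is
    # expected to be used, pass --ref_scale 0.5.
    expected_ref_scale = 1.0   # scale at which the reference is expected (assumption)
    original_ref_scale = 2.0   # scale of the reference image you actually have (assumption)
    ref_scale = expected_ref_scale / original_ref_scale  # 0.5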

Please note that this repo provides two types of pre-trained SRNTT models in SRNTT/models/SRNTT:

  • srntt.npz is trained by all losses, i.e., reconstruction loss, perceptual loss, texture loss, and adversarial loss.
  • srntt_init.npz is trained by only the reconstruction loss, corresponding to SRNTT-l2 in the paper.

To switch between the demo models, set --use_init_model_only to decide whether to use srntt_init.npz.

Easy Training

$ sh train.sh

The CUFED training set will be downloaded automatically. To speed up the training process, patch matching and swapping are conducted offline to produce the swapped feature maps. The models will be saved to demo_training_srntt/model, and intermediate samples will be saved to demo_training_srntt/sample. Parameter settings are saved to demo_training_srntt/arguments.txt.

Custom Training

Please first prepare the input and reference images, which are square patches of the same size. Input and reference images should be stored in separate folders, and corresponding input and reference images must share the same file name (a layout sketch follows below). Please refer to the data/train/CUFED folder for examples. Then, use offline_patchMatch_textureSwap.py to generate the feature maps in advance.
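
As a loose illustration of the expected layout (a sketch only: the source folder, patch size, and cropping strategy are assumptions, and the provided DIV2K/CUFED pairs were extracted differently, as described in the Dataset section), square patches with matching file names could be written like this:

    import os
    from PIL import Image

    src_dir = 'my_images'          # assumption: a folder of your own large photos
    input_dir = 'my_train/input'   # corresponding patches must share the same file name
    ref_dir = 'my_train/ref'
    patch = 160                    # square patch size; the CUFED demo pairs are 160x160

    os.makedirs(input_dir, exist_ok=True)
    os.makedirs(ref_dir, exist_ok=True)

    for name in sorted(os.listdir(src_dir)):
        img = Image.open(os.path.join(src_dir, name)).convert('RGB')
        w, h = img.size
        if w < 2 * patch or h < patch:
            continue
        # Two non-overlapping square crops from the same image:
        # one used as the input patch, the other as its reference.
        inp = img.crop((0, 0, patch, patch))
        ref = img.crop((w - patch, 0, w, patch))
        base = os.path.splitext(name)[0] + '.png'
        inp.save(os.path.join(input_dir, base))
        ref.save(os.path.join(ref_dir, base))

After the patches are in place, use offline_patchMatch_textureSwap.py to generate the feature maps referenced by --map_dir below.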

$ python main.py
    --is_train True
    --save_dir folder/to/save/models
    --input_dir path/to/input/image/folder
    --ref_dir path/to/ref/image/folder
    --map_dir path/to/feature_map/folder
    --batch_size default 9
    --num_epochs default 100
    --input_size default 40, the size of the LR patch, i.e., 1/4 of the HR patch; set to 80 for the DIV2K dataset
    --use_weight_map default False, whether to use the weight map, which reduces negative effects
                     from the reference image but may also decrease sharpness.

Please refer to main.py for more parameter settings for training.

Test on the custom training model

$ python main.py 
    --is_train              False 
    --input_dir             path/to/input/image/file
    --ref_dir               path/to/ref/image/file
    --result_dir            path/to/result/folder
    --ref_scale             default 1, expected_ref_scale divided by original_ref_scale
    --is_original_image     default True, whether input is original 
    --save_dir              the same as save_dir in training

Acknowledgement

Thanks to TensorLayer for facilitating the implementation of this demo code. We have included TensorLayer 1.5.0 in SRNTT/tensorlayer.

Contact

Zhifei Zhang

Comments
  • How was the upscale.npz file created before running offline_patchMatch_textureSwap.py?

    In your code, the SRNTT net can only be trained after running offline_patchMatch_textureSwap.py to create the corresponding maps, so I am confused about how the upscale.npz file can be created before running offline_patchMatch_textureSwap.py. Could you explain? Thanks.

    opened by xd17 3
  • map_123

    Hello, thank you for open-sourcing the code. I am very interested in your research, but I ran into an error during training: the map_123 file is missing. How can I get the "map_123" file? Could you share a link? Looking forward to your reply!

    opened by LoveSimons 3
  • [bug report] SRNTT/SRNTT/bicubic_kernel.py: a bug in kernel(in_length, out_length)

    As the required Python version is 3.6, note that in Python 3 the '/' operator performs floating-point division regardless of whether the operands are integers or floats; for integer division, '//' should be used.

    So, in SRNTT/SRNTT/bicubic_kernel.py, the assert statement in kernel(in_length, out_length) does not work as intended. I think it should be written as

    assert in_length >= out_length and in_length // out_length == in_length / out_length

    rather than

    assert in_length >= out_length and in_length / out_length == 1.0 * in_length / out_length

    The original code is shown below:

    def kernel(in_length, out_length):
        # assume in_length is larger scale
        # decide whether a convolution kernel can be constructed
        assert in_length >= out_length and in_length / out_length == 1.0 * in_length / out_length
        # decide kernel width
        scale = 1.0 * out_length / in_length
        kernel_length = 4.0 / scale
        ...

    opened by Jam-G 0
  • How to make the code run faster

    It takes more than 60 seconds to process an image with resolution 480*270, which is so slow as to be unusable in real applications.

    How can the test process be made to run faster?

    Thanks!

    opened by Jiakui 0
  • Would you like to provide the original CUFED5 training dataset for us to follow your work?

    Hi, we want to follow your work and wonder if you could provide us with the original CUFED5 training dataset instead of patches. Thanks in advance! Best wishes.

    opened by zwb0 0
  • PSNR measurement

    Hello.

    First, thank you for your great work!

    I would like to know the details of how you measured PSNR and SSIM for Tables 1 and 2 in your paper. On which channel (RGB or the Y/luminance channel) did you measure those metrics?

    Also, could you tell me which reference image you used when measuring PSNR/SSIM for Table 1's RefSR methods? In Table 2 there are five PSNR measurements, one per reference image (L1~L5), but I see a different number (26.24 for SRNTT-l2) on the CUFED dataset.

    Looking forward to your reply. Thank you!

    opened by sgm0526 2
  • Multi-Scale in Feature Swapping

    How do you get the multi-scale feature maps? Is it through the different pooling layers ('pool1', 'pool2') in VGG? Also, I notice that offline_patchMatch_textureSwap.py only operates on the 'relu3_1' feature maps; is that true? Looking forward to your reply.

    opened by Flaick 0
  • How to validate whether the feature swapping is going well?

    I'm trying to reproduce your great work in another framework.

    I think generating the feature-swapped maps is more critical for reproduction than the network architecture, loss functions, etc. From that perspective, I want to validate my feature swapping results.

    Do you have any suggestions? Note: I think a direct comparison between your results and mine is not reasonable because of differences in VGG weights, value ranges, etc.

    opened by S-aiueo32 0