Torch code for our CVPR 2018 paper "Residual Dense Network for Image Super-Resolution" (Spotlight)

Overview

Residual Dense Network for Image Super-Resolution

This repository is for RDN, introduced in the following papers:

Yulun Zhang, Yapeng Tian, Yu Kong, Bineng Zhong, and Yun Fu, "Residual Dense Network for Image Super-Resolution", CVPR 2018 (spotlight), [arXiv] [Results@BaiduDrive], [Results@GoodleDrive]

Yulun Zhang, Yapeng Tian, Yu Kong, Bineng Zhong, and Yun Fu, "Residual Dense Network for Image Restoration", arXiv 2018, [arXiv]

The code is built on EDSR (Torch) and tested on Ubuntu 14.04 (Torch7, CUDA 8.0, cuDNN 5.1) with Titan X/1080Ti/Xp GPUs.

Other implementations: a PyTorch_version, implemented by Nguyễn Trần Toàn ([email protected]) and merged into EDSR_PyTorch, and a TensorFlow_version by hengchuan.

Contents

  1. Introduction
  2. Train
  3. Test
  4. Results
  5. Citation
  6. Acknowledgements

Introduction

A very deep convolutional neural network (CNN) has recently achieved great success for image super-resolution (SR) and offered hierarchical features as well. However, most deep CNN based SR models do not make full use of the hierarchical features from the original low-resolution (LR) images, thereby achieving relatively low performance. In this paper, we propose a novel residual dense network (RDN) to address this problem in image SR. We fully exploit the hierarchical features from all the convolutional layers. Specifically, we propose the residual dense block (RDB) to extract abundant local features via densely connected convolutional layers. The RDB further allows direct connections from the state of the preceding RDB to all the layers of the current RDB, leading to a contiguous memory (CM) mechanism. Local feature fusion in the RDB is then used to adaptively learn more effective features from preceding and current local features, and it stabilizes the training of the wider network. After fully obtaining dense local features, we use global feature fusion to jointly and adaptively learn global hierarchical features in a holistic way. Experiments on benchmark datasets with different degradation models show that our RDN achieves favorable performance against state-of-the-art methods.

Figure 1. Residual dense block (RDB) architecture.

Figure 2. The architecture of our proposed residual dense network (RDN).
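
For intuition, below is a minimal Torch7 (nn) sketch of one RDB built from these ideas: each 3x3 convolution's output is concatenated with its input (dense connections), a 1x1 convolution fuses the concatenated features back to nFeat channels (local feature fusion), and the block input is added to the result (local residual learning). This is our illustration under the paper's definitions, not the repository's actual model code; the helper names are ours.

    require 'nn'

    -- One densely connected layer: its G output channels are
    -- concatenated with its nIn input channels.
    local function denseLayer(nIn, G)
       local conv = nn.Sequential()
          :add(nn.SpatialConvolution(nIn, G, 3, 3, 1, 1, 1, 1))
          :add(nn.ReLU(true))
       return nn.Sequential()
          :add(nn.ConcatTable():add(nn.Identity()):add(conv))
          :add(nn.JoinTable(2))  -- join along the channel dimension (4D input)
    end

    -- One residual dense block with C conv layers and growth rate G.
    local function RDB(nFeat, G, C)
       local body = nn.Sequential()
       local nIn = nFeat
       for _ = 1, C do
          body:add(denseLayer(nIn, G))
          nIn = nIn + G
       end
       -- local feature fusion: 1x1 conv back down to nFeat channels
       body:add(nn.SpatialConvolution(nIn, nFeat, 1, 1))
       -- local residual learning: add the block input back
       return nn.Sequential()
          :add(nn.ConcatTable():add(nn.Identity()):add(body))
          :add(nn.CAddTable())
    end

    -- e.g., C = 8 conv layers and growth rate G = 64, matching the
    -- -nDenseConv and -growthRate flags in the training commands below
    local block = RDB(64, 64, 8)
    print(block:forward(torch.randn(1, 64, 24, 24)):size())  -- 1x64x24x24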

Train

Prepare training data

  1. Download DIV2K training data (800 training + 100 validation images) from the DIV2K dataset or SNU_CVLab.

  2. Place all the HR images in 'Prepare_TrainData/DIV2K/DIV2K_HR'.

  3. Run 'Prepare_TrainData_HR_LR_BI/BD/DN.m' in MATLAB to generate LR images for the BI, BD, and DN models, respectively.

  4. Run 'th png_to_t7.lua' to convert each .png image to a .t7 file in the new folder 'DIV2K_decoded'.

  5. Set '-datadir' in 'RDN_TrainCode/code/opts.lua' to the path of 'DIV2K_decoded' (see the sketch after this list).

For more information, please refer to EDSR (Torch).
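
As an illustration of step 5, the option is declared with torch.CmdLine; below is a sketch of the relevant pattern with a placeholder path (the actual default and help text in opts.lua may differ):

    require 'torch'

    local cmd = torch.CmdLine()
    -- point this at your prepared data, e.g. the folder holding DIV2K_decoded
    cmd:option('-datadir', '/home/user/datasets', 'dataset directory')
    local opt = cmd:parse(arg or {})
    print(opt.datadir)

Equivalently, you can override the option on the command line, e.g. 'th main.lua -datadir /home/user/datasets ...'.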

Begin to train

  1. (optional) Download models for our paper and place them in '/RDN_TrainCode/experiment/model'.

    All the models can be downloaded from Dropbox or Baidu.

  2. cd to 'RDN_TrainCode/code' and run the following scripts to train the models.

    You can use the scripts in 'TrainRDN_scripts' to train the models from our paper.

    # BI, scale 2, 3, 4
    # BIX2F64D18C6G64P48, input=48x48, output=96x96
    th main.lua -scale 2 -netType RDN -nFeat 64 -nFeaSDB 64 -nDenseBlock 16 -nDenseConv 8 -growthRate 64 -patchSize 96 -dataset div2k -datatype t7  -DownKernel BI -splitBatch 4 -trainOnly true
    
    # BIX3F64D18C6G64P32, input=32x32, output=96x96, fine-tune on RDN_BIX2.t7
    th main.lua -scale 3 -netType resnet_cu -nFeat 64 -nFeaSDB 64 -nDenseBlock 16 -nDenseConv 8 -growthRate 64 -patchSize 96 -dataset div2k -datatype t7  -DownKernel BI -splitBatch 4 -trainOnly true  -preTrained ../experiment/model/RDN_BIX2.t7
    
    # BIX4F64D18C6G64P32, input=32x32, output=128x128, fine-tune on RDN_BIX2.t7
    th main.lua -scale 4 -nGPU 1 -netType resnet_cu -nFeat 64 -nFeaSDB 64 -nDenseBlock 16 -nDenseConv 8 -growthRate 64 -patchSize 128 -dataset div2k -datatype t7  -DownKernel BI -splitBatch 4 -trainOnly true -nEpochs 1000 -preTrained ../experiment/model/RDN_BIX2.t7 
    
    # BD, scale 3
    # BDX3F64D18C6G64P32, input=32x32, output=96x96, fine-tune on RDN_BIX3.t7
    th main.lua -scale 3 -nGPU 1 -netType resnet_cu -nFeat 64 -nFeaSDB 64 -nDenseBlock 16 -nDenseConv 8 -growthRate 64 -patchSize 96 -dataset div2k -datatype t7  -DownKernel BD -splitBatch 4 -trainOnly true -nEpochs 200 -preTrained ../experiment/model/RDN_BIX3.t7
    
    # DN, scale 3
    # DNX3F64D18C6G64P32, input=32x32, output=96x96, fine-tune on RDN_BIX3.t7
    th main.lua -scale 3 -nGPU 1 -netType resnet_cu -nFeat 64 -nFeaSDB 64 -nDenseBlock 16 -nDenseConv 8 -growthRate 64 -patchSize 96 -dataset div2k -datatype t7  -DownKernel DN -splitBatch 4 -trainOnly true  -nEpochs 200 -preTrained ../experiment/model/RDN_BIX3.t7

    Only RDN_BIX2.t7 was trained using 48x48 input patches; all other models were trained using 32x32 input patches to save training time, although a smaller training patch size lowers performance to some degree. We also set '-trainOnly true' to save GPU memory.

Test

Quick start

  1. Download models for our paper and place them in '/RDN_TestCode/model'.

    All the models can be downloaded from Dropbox or Baidu.

  2. Run 'TestRDN.lua'

    You can use the scripts in 'TestRDN_scripts' to reproduce the results in our paper.

    # No self-ensemble: RDN
    # BI degradation model, X2, X3, X4
    th TestRDN.lua -model RDN_BIX2 -degradation BI -scale 2 -selfEnsemble false -dataset Set5
    th TestRDN.lua -model RDN_BIX3 -degradation BI -scale 3 -selfEnsemble false -dataset Set5
    th TestRDN.lua -model RDN_BIX4 -degradation BI -scale 4 -selfEnsemble false -dataset Set5
    # BD degradation model, X3
    th TestRDN.lua -model RDN_BDX3 -degradation BD -scale 3 -selfEnsemble false -dataset Set5
    # DN degradation model, X3
    th TestRDN.lua -model RDN_DNX3 -degradation DN -scale 3 -selfEnsemble false -dataset Set5
    
    
    # With self-ensemble: RDN+
    # BI degradation model, X2, X3, X4
    th TestRDN.lua -model RDN_BIX2 -degradation BI -scale 2 -selfEnsemble true -dataset Set5
    th TestRDN.lua -model RDN_BIX3 -degradation BI -scale 3 -selfEnsemble true -dataset Set5
    th TestRDN.lua -model RDN_BIX4 -degradation BI -scale 4 -selfEnsemble true -dataset Set5
    # BD degradation model, X3
    th TestRDN.lua -model RDN_BDX3 -degradation BD -scale 3 -selfEnsemble true -dataset Set5
    # DN degradation model, X3
    th TestRDN.lua -model RDN_DNX3 -degradation DN -scale 3 -selfEnsemble true -dataset Set5
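
    For reference, self-ensemble (the "+" in RDN+) averages the network outputs over the eight geometric variants of each input (horizontal/vertical flips and transposition), inverting each transform on the output before averaging. Below is our minimal Torch7 sketch of the idea, assuming 'model' maps a CxHxW LR tensor to an SR tensor; it is not the repository's exact implementation.

    require 'nn'
    require 'image'

    -- x8 self-ensemble sketch: average over flip/transpose variants.
    local function selfEnsemble(model, lr)
       local sum, n = nil, 0
       for _, tr in ipairs({false, true}) do         -- transpose H and W
          for _, hf in ipairs({false, true}) do      -- horizontal flip
             for _, vf in ipairs({false, true}) do   -- vertical flip
                local x = lr
                if tr then x = x:transpose(2, 3):contiguous() end
                if hf then x = image.hflip(x) end
                if vf then x = image.vflip(x) end
                local y = model:forward(x):clone()
                -- invert the transforms on the output, in reverse order
                if vf then y = image.vflip(y) end
                if hf then y = image.hflip(y) end
                if tr then y = y:transpose(2, 3):contiguous() end
                if sum then sum:add(y) else sum = y end
                n = n + 1
             end
          end
       end
       return sum:div(n)
    end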

The whole test pipeline

  1. Prepare test data.

    Place the original test sets (e.g., Set5; other test sets are available from GoogleDrive or Baidu) in 'OriginalTestData'.

    Run 'Prepare_TestData_HR_LR.m' in MATLAB to generate HR/LR images with different degradation models.

  2. Conduct image SR.

    See Quick start above.

  3. Evaluate the results.

    Run 'Evaluate_PSNR_SSIM.m' to obtain the PSNR/SSIM values reported in the paper.
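
    For reference, PSNR between two images with pixel values in [0, 255] follows the standard definition sketched below; note that, as is common in SR evaluation, the MATLAB script may additionally crop borders and evaluate on the Y channel of YCbCr. This Lua sketch is ours, not the repository's code.

    -- PSNR in dB between two tensors with values in [0, 255]
    local function psnr(sr, hr)
       local mse = (sr - hr):pow(2):mean()
       return 10 * math.log10(255 * 255 / mse)
    end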

Results

Table 1. Benchmark results with the BI degradation model. Average PSNR/SSIM values for scaling factors ×2, ×3, and ×4.

Table 2. Benchmark results with the BD and DN degradation models. Average PSNR/SSIM values for scaling factor ×3.

Citation

If you find the code helpful in your research or work, please cite the following papers.

@InProceedings{Lim_2017_CVPR_Workshops,
  author = {Lim, Bee and Son, Sanghyun and Kim, Heewon and Nah, Seungjun and Lee, Kyoung Mu},
  title = {Enhanced Deep Residual Networks for Single Image Super-Resolution},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month = {July},
  year = {2017}
}

@inproceedings{zhang2018residual,
    title={Residual Dense Network for Image Super-Resolution},
    author={Zhang, Yulun and Tian, Yapeng and Kong, Yu and Zhong, Bineng and Fu, Yun},
    booktitle={CVPR},
    year={2018}
}

@article{zhang2020rdnir,
    title={Residual Dense Network for Image Restoration},
    author={Zhang, Yulun and Tian, Yapeng and Kong, Yu and Zhong, Bineng and Fu, Yun},
    journal={TPAMI},
    year={2020}
}

Acknowledgements

This code is built on EDSR (Torch). We thank the authors for sharing the code of the EDSR Torch and PyTorch versions.

Comments
  • epochs in training

    Hi, thanks for your great work. I see that nEpochs is set to 10000 in opts.lua; does that mean we need 10000 epochs to get the results in the paper?

    opened by sdlpkxd 5
  • About the validation result when I train just 1 epoch

    Hello. I have a question.

    I want to train the network using the SR291 dataset. I tried changing the parameters and the code. However, when I train for just 1 epoch, the validation result is too good (e.g., 37.0581). It seems that some other model, which I didn't train, is being used. So, when I want to build a new model with another dataset, what should I change?

    opened by matsuneA212 3
  • Do you have RDN models for the PyTorch framework?

    Hi, I cannot install Torch7 because it is too old to install the corresponding software packages. So do you have RDN models for the PyTorch framework? Or could you share some SR results produced by RDN? I just want to compare visual quality. Looking forward to your reply.

    opened by Zysty 2
  • model

    Hi Yulun, thanks for your excellent work.

    I am wondering how to build the model you use. There are many files in "./models", like "RDN" and "baseline". If I would like to construct the basic network you use as "Residual Dense Network", which steps should I follow?

    Thank you. :)

    opened by bragilee 2
  • Could you please provide the result image of the degradation model?

    Hi, thank you so much for the source code. Also, could you please provide the resulting images of the degradation models? I am not familiar with Torch. I tested the TensorFlow version and found that it does not reach the best performance.

    opened by wxywhu 1
  • model does not converge

    Hi, sorry to bother you, but I have a problem: the model does not converge. What should I do? Can you give me some suggestions? Thank you so much. The loss function I used is the L1 loss.

    opened by echolijinghui 1
  • super-resolution results for each dataset

    The PyTorch version of the code reproduces results that differ from those in your paper, and Lua is rarely used nowadays. Could you please provide the super-resolved results for each dataset?

    opened by lala-tree 1
  • Regarding intuition compared to MemNet

    Hi, I really loved reading the paper!

    I just had one small confusion. What is MemNet lacking that you are trying to resolve?

    Your point from the paper: MemNet interpolates the original LR image to the desired size to form the input. This preprocessing step not only increases computation complexity quadratically but also loses some details of the original LR image.

    I have read MemNet; they also have the same initial feature-extraction network as yours, followed by memblocks and reconnet.

    So what does the above point mean?

    Thank you!

    opened by jaytimbadia 1
  • When I use RDN as the generator for training, R, G, or B color spots appear in the details of the generated image.

    Hello, I have a problem. When I use RDN as the generator for training, R, G, or B color spots appear in the details of the generated image. Have you encountered similar problems? BI downsampling is used for the low-resolution images, and this problem does not occur with a residual network.

    opened by lux-hub 0
  • How many iterations do we need when we train this model?

    I ran 'th main.lua -scale 3 -netType resnet_cu -nFeat 64 -nFeaSDB 64 -nDenseBlock 16 -nDenseConv 8 -growthRate 64 -patchSize 96 -dataset div2k -datatype t7 -DownKernel BI -splitBatch 4 -trainOnly true', but it did not stop even after 160000 iterations. How should I set this, and how many iterations do we need?

    opened by countingstarsmer 0
  • Training

    Hi, thank you very much for your code. I have reviewed the model. However, I am not familiar with Torch, and I get a result of 36.2648. When I read all the issues, I found that you skipped some batches during training. Could you tell me where that is in the code, or what the rule is? Waiting for your reply. Thank you!

    opened by ch135 0