subpixel: A subpixel convnet for super resolution with Tensorflow

Overview

subpixel: A subpixel convolutional neural network implementation with Tensorflow

Left: input images / Right: output images with 4x super-resolution after 6 epochs:

See more examples inside the images folder.

In CVPR 2016, Shi et al. from Twitter VX (previously Magic Pony) published a paper called Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network [1]. Here we propose a reimplementation of their method and discuss future applications of the technology.

But first let us discuss some background.

Convolutions, transposed convolutions and subpixel convolutions

Convolutional neural networks (CNNs) are now standard neural network layers for computer vision. Transposed convolutions (sometimes referred to as deconvolutions) are the GRADIENTS of a convolutional layer. Transposed convolutions were, as far as we know, first used by Zeiler and Fergus [2] for visualization purposes while improving their AlexNet model.

For visualization purposes, note that the convolutions discussed here are a sequence of inner products between a given filter (or kernel) and patches of a larger image. This operation is highly parallelizable, since the kernel is the same throughout the image. People used to refer to convolutions as locally connected layers with shared parameters. Check out the figure below by Dumoulin and Visin [3]:

source
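
To make the inner-product view concrete, here is a minimal numpy sketch of our own (not part of this repository); note that, following the deep learning convention, it computes a cross-correlation, i.e. the kernel is not flipped:

import numpy as np

def conv2d_valid(image, kernel):
  # Slide the shared kernel over the image and take an inner product
  # with each patch ("locally connected layer with shared parameters").
  kh, kw = kernel.shape
  oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
  out = np.zeros((oh, ow))
  for i in range(oh):
    for j in range(ow):
      out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
  return out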

Note though that convolutional neural networks can be defined with strides, or we can follow the convolution with maxpooling to downsample the input image. The equivalent backward operation of a convolution with strides, in other words its gradient, is an upsampling operation where zeros are filled in between non-zero pixels, followed by a convolution with the kernel rotated 180 degrees. See the representation copied from Dumoulin and Visin again:

source
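
To sketch the picture above in code (our illustration, assuming a square kernel and ignoring exact border handling), the gradient of a stride-r convolution can be emulated by stuffing zeros between pixels and then running a full convolution with the kernel rotated 180 degrees, reusing conv2d_valid from the previous snippet:

def conv_grad_upsample(image, kernel, r=2):
  # Fill zeros in between the non-zero pixels, exactly as the gradient
  # of a stride-r convolution does.
  h, w = image.shape
  stuffed = np.zeros((h * r, w * r))
  stuffed[::r, ::r] = image
  # Full convolution with the kernel rotated 180 degrees: pad by k - 1
  # so the valid convolution above covers every overlap position.
  k = kernel.shape[0]
  padded = np.pad(stuffed, k - 1, mode="constant")
  return conv2d_valid(padded, kernel[::-1, ::-1])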

For classification purposes, all we need is the feedforward pass of a convolutional neural network to extract features at different scales. But for applications such as image super resolution and autoencoders, both downsampling and upsampling operations are necessary in the feedforward pass. The community took inspiration from how the gradients are implemented in CNNs and applied them as a feedforward layer instead.

But as one may have observed, the upsampling operation implemented above with strided convolution gradients adds zero values to upscale the image, which later have to be filled in with meaningful values. Maybe even worse, these zero values carry no gradient information that can be backpropagated through.

To cope with that problem, Shi et al. [1] proposed what we argue to be one of the most useful recent convnet tricks (at least in my opinion as a generative model researcher!). They proposed a subpixel convolutional neural network layer for upscaling. This layer essentially uses regular convolutional layers followed by a specific type of image reshaping called a phase shift. In other words, instead of putting zeros in between pixels and having to do extra computation, they calculate more convolutions in lower resolution and resize the resulting map into an upscaled image. This way, no meaningless zeros are necessary. Check out the figure below from their paper. Follow the colors to get an intuition for how they do the image resizing, and check the paper [1] for further details.

source

Next we will discuss our implementation of this method, and later what we foresee to be its implications wherever upscaling is necessary in convolutional neural networks.

Subpixel CNN layer

Following Shi et al., the equation implementing the phase shift for CNNs is:

PS(T)[x, y, c] = T[floor(x/r), floor(y/r), C * r * mod(y, r) + C * mod(x, r) + c]

where T is the low-resolution feature map, r is the upscaling factor, and C is the number of output channels.

source

In numpy, we can write this as

import numpy as np

def PS(I, r):
  assert len(I.shape) == 3
  assert r > 0
  r = int(r)
  C = I.shape[2] // (r * r)  # number of output channels
  O = np.zeros((I.shape[0] * r, I.shape[1] * r, C))
  for x in range(O.shape[0]):
    for y in range(O.shape[1]):
      for c in range(O.shape[2]):
        a = x // r
        b = y // r
        d = C * r * (y % r) + C * (x % r) + c
        O[x, y, c] = I[a, b, d]
  return O
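
As a quick sanity check (our own example, not from the paper), a 2 x 2 map with 9 channels and r = 3 should rearrange into a 6 x 6 single-channel image:

I = np.random.randn(2, 2, 9)
assert PS(I, 3).shape == (6, 6, 1)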

To implement this in TensorFlow we would have to create a custom operator and its equivalent gradient. But after staring at the image depiction of the resulting operation for a few minutes, we noticed how to write it using just regular reshape, split and concatenate operations. To understand this, note that phase shift simply goes through the different channels of the output convolutional map and builds up neighborhoods of r x r pixels. We can do the same with a few lines of TensorFlow code:

import tensorflow as tf

def _phase_shift(I, r):
    # Helper function with the main phase shift operation.
    # NOTE: tf.squeeze drops the batch dimension when bsize == 1;
    # pass squeeze_dims=1 in that case (see the comments below).
    bsize, a, b, c = I.get_shape().as_list()
    X = tf.reshape(I, (bsize, a, b, r, r))
    X = tf.transpose(X, (0, 1, 2, 4, 3))  # bsize, a, b, r, r (swap the two r axes)
    X = tf.split(1, a, X)  # a, [bsize, b, r, r]
    X = tf.concat(2, [tf.squeeze(x) for x in X])  # bsize, b, a*r, r
    X = tf.split(1, b, X)  # b, [bsize, a*r, r]
    X = tf.concat(2, [tf.squeeze(x) for x in X])  # bsize, a*r, b*r
    return tf.reshape(X, (bsize, a*r, b*r, 1))

def PS(X, r, color=False):
  # Main OP that you can arbitrarily use in your TensorFlow code
  if color:
    Xc = tf.split(3, 3, X)
    X = tf.concat(3, [_phase_shift(x, r) for x in Xc])
  else:
    X = _phase_shift(X, r)
  return X
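
As an aside, a commenter below points out that TensorFlow also ships a built-in op that performs this same channel-to-space rearrangement, so PS can be replaced by a one-liner; the exact channel ordering may differ from _phase_shift for color inputs, so verify against your checkpoints:

# Built-in alternative: rearranges (bsize, a, b, c*r*r) into (bsize, a*r, b*r, c)
X = tf.depth_to_space(X, r)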

The remainder of this repository is an implementation of a subpixel CNN using the proposed PS implementation for super resolution of celebA face images. The code was written on top of carpedm20/DCGAN-tensorflow, so follow the same instructions to use it:

$ python download.py --dataset celebA  # if this doesn't work, you will have to download the dataset by hand somewhere else
$ python main.py --dataset celebA --is_train True --is_crop True

Subpixel CNN future is bright

Here we want to forecast that subpixel CNNs are ultimately going to replace transposed convolutions (deconv, conv grad, or whatever you call it) in feedforward neural networks. Phase shift's gradient is much more meaningful, and the resizing operations are virtually free computationally. Our implementation is a high-level one, using default TensorFlow ops. Next we will rewrite everything in Keras so that an even larger community can use it; a CUDA backend-level implementation would be even more appreciated.

But for now we want to encourage the community to experiment with replacing deconv layers with subpixel operations everywhere. By everywhere we mean:

  • Conv-deconv autoencoders
    Similar to super-resolution: include subpixel in other autoencoder implementations, replacing deconv layers
  • Style transfer networks
    This didn't work as a lazy plug-and-play replacement in our experiments. We have to look at it more carefully.
  • Deep Convolutional Generative Adversarial Networks (DCGAN)
    We started doing this, but as predicted we have to change hyperparameters. The network capacity is totally different from deconv layers.
  • Segmentation Networks (SegNets)
    ULTRA LOW hanging fruit! This one will be the easiest. Free paper, you're welcome!
  • Wherever upscaling is done with zero padding

Join us in the revolution to get rid of meaningless zeros in feedforward convnets: give suggestions here, and try our code!

Sample results

The top row is the input, the middle row is the output, and the bottom row is the ground truth.

by @dribnet

References

[1] Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network. By Shi et al.
[2] Visualizing and Understanding Convolutional Networks. By Zeiler and Fergus.
[3] A guide to convolution arithmetic for deep learning. By Dumoulin and Visin.

Further reading

Alex J. Champandard made a really interesting analysis of this topic in this thread.
For discussions about differences between phase shift and straight up resize please see the companion notebook and this thread.

Comments
  • Tensorflow error on main.py

    Running on OSX, I get this error on any Python file I try to run:

    {'batch_size': 64,
     'beta1': 0.5,
     'checkpoint_dir': 'checkpoint',
     'dataset': 'celebA',
     'epoch': 25,
     'image_size': 128,
     'is_crop': True,
     'is_train': False,
     'learning_rate': 0.0002,
     'sample_dir': 'samples',
     'train_size': inf,
     'visualize': False}
    Traceback (most recent call last):
      File "main.py", line 58, in <module>
        tf.app.run()
      File "/Library/Python/2.7/site-packages/tensorflow/python/platform/default/_app.py", line 30, in run
        sys.exit(main(sys.argv))
      File "main.py", line 39, in main
        dataset_name=FLAGS.dataset, is_crop=FLAGS.is_crop, checkpoint_dir=FLAGS.checkpoint_dir)
      File "/Users/fraserhemp/Documents/subpixel/model.py", line 58, in __init__
        self.build_model()
      File "/Users/fraserhemp/Documents/subpixel/model.py", line 70, in build_model
        self.G = self.generator(self.inputs)
      File "/Users/fraserhemp/Documents/subpixel/model.py", line 155, in generator
        h2 = PS(h2, 4, color=True)
      File "/Users/fraserhemp/Documents/subpixel/subpixel.py", line 21, in PS
        X = tf.concat(3, [_phase_shift(x, r) for x in Xc])
      File "/Users/fraserhemp/Documents/subpixel/subpixel.py", line 9, in _phase_shift
        X = tf.reshape(I, (bsize, a, b, r, r))
      File "/Library/Python/2.7/site-packages/tensorflow/python/ops/gen_array_ops.py", line 682, in reshape
        name=name)
      File "/Library/Python/2.7/site-packages/tensorflow/python/ops/op_def_library.py", line 411, in apply_op
        as_ref=input_arg.is_ref)
      File "/Library/Python/2.7/site-packages/tensorflow/python/framework/ops.py", line 529, in convert_to_tensor
        ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
      File "/Library/Python/2.7/site-packages/tensorflow/python/ops/constant_op.py", line 178, in _constant_tensor_conversion_function
        return constant(v, dtype=dtype, name=name)
      File "/Library/Python/2.7/site-packages/tensorflow/python/ops/constant_op.py", line 161, in constant
        tensor_util.make_tensor_proto(value, dtype=dtype, shape=shape))
      File "/Library/Python/2.7/site-packages/tensorflow/python/framework/tensor_util.py", line 319, in make_tensor_proto
        _AssertCompatible(values, dtype)
      File "/Library/Python/2.7/site-packages/tensorflow/python/framework/tensor_util.py", line 259, in _AssertCompatible
        (dtype.name, repr(mismatch), type(mismatch).__name__))
    TypeError: Expected int32, got list containing Tensors of type '_Message' instead.
    
    opened by fraser-hemp 5
  • Why do we need to transpose in phase_shift

    I'm trying to understand the phase_shift code, but the following step doesn't make much sense to me:

    X = tf.transpose(X, (0, 1, 2, 4, 3))

    https://github.com/Tetrachrome/subpixel/blob/master/subpixel.py#L10

    Why do we need this transpose here?

    opened by yifita 4
  • TypeError: Input 'split_dim' of 'Split' Op has type float32 that does not match expected type of int32.

    Traceback (most recent call last):
      File "main.py", line 58, in <module>
        tf.app.run()
      File "/home/yashiro32/virtualenvironment/neural_style_transfer/local/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 48, in run
        _sys.exit(main(_sys.argv[:1] + flags_passthrough))
      File "main.py", line 39, in main
        dataset_name=FLAGS.dataset, is_crop=FLAGS.is_crop, checkpoint_dir=FLAGS.checkpoint_dir)
      File "/home/yashiro32/virtualenvironment/neural_style_transfer/projects/subpixel/model.py", line 58, in __init__
        self.build_model()
      File "/home/yashiro32/virtualenvironment/neural_style_transfer/projects/subpixel/model.py", line 75, in build_model
        self.G = self.generator(self.inputs)
      File "/home/yashiro32/virtualenvironment/neural_style_transfer/projects/subpixel/model.py", line 167, in generator
        h2 = PS(h2, 4, color=True)
      File "/home/yashiro32/virtualenvironment/neural_style_transfer/projects/subpixel/subpixel.py", line 20, in PS
        Xc = tf.split(3, 3, X)
      File "/home/yashiro32/virtualenvironment/neural_style_transfer/local/lib/python2.7/site-packages/tensorflow/python/ops/array_ops.py", line 1214, in split
        split_dim=axis, num_split=num_or_size_splits, value=value, name=name)
      File "/home/yashiro32/virtualenvironment/neural_style_transfer/local/lib/python2.7/site-packages/tensorflow/python/ops/gen_array_ops.py", line 3261, in _split
        num_split=num_split, name=name)
      File "/home/yashiro32/virtualenvironment/neural_style_transfer/local/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 513, in apply_op
        (prefix, dtypes.as_dtype(input_arg.type).name))
    TypeError: Input 'split_dim' of 'Split' Op has type float32 that does not match expected type of int32.

    opened by yashiro32 3
  • implement keras layer for subpixel convolution

    The Subpixel class defined here is a child class of Conv2D, and upscales the original output of Conv2D by a factor of r, taken in as an argument. Written for Keras 2.0.2.

    opened by kyleyee23 3
  • image size problem

    @dribnet The last commit where things work for me is 0bee08befd160d370e50bf2f9827aeca07c432a2. After that I'm getting the following error:

    I tensorflow/core/common_runtime/gpu/gpu_device.cc:839] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 680, pci bus id: 0000:04:00.0)
    Traceback (most recent call last):
      File "main.py", line 58, in <module>
        tf.app.run()
      File "/home/eder/anaconda2/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 30, in run
        sys.exit(main(sys.argv))
      File "main.py", line 42, in main
        dcgan.train(FLAGS)
      File "/home/eder/python/ponynet/model.py", line 108, in train
        save_images(sample_images, [8, 8], './samples/reference.png')
      File "/home/eder/python/ponynet/utils.py", line 21, in save_images
        return imsave(inverse_transform(images), size, image_path)
      File "/home/eder/python/ponynet/utils.py", line 40, in imsave
        return scipy.misc.imsave(path, merge(images, size))
      File "/home/eder/python/ponynet/utils.py", line 30, in merge
        h, w = images.shape[1], images.shape[2]
    IndexError: tuple index out of range
    

    I checked that images is an empty tensor. Do you know where that might have been introduced?

    opened by EderSantana 3
  • updates to support batch_size != 64

    The code did not support sample_size != batch_size, so sample_size was dropped as a parameter to the model constructor.

    To support this, save_images was updated to clip the number of images saved at rows * cols.

    In addition, the validation inputs are now also saved at their native size. This file is called inputs_small.png.

    opened by dribnet 3
  • Training example does not run

    The README.md says to run python main.py --dataset celebA --is_train True --is_crop True. But when I do this it crashes with the error:

    Traceback (most recent call last):
      File "main.py", line 58, in <module>
        tf.app.run()
      File "/usr/local/anaconda2/envs/subpixel/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 30, in run
        sys.exit(main(sys.argv[:1] + flags_passthrough))
      File "main.py", line 39, in main
        dataset_name=FLAGS.dataset, is_crop=FLAGS.is_crop, checkpoint_dir=FLAGS.checkpoint_dir)
      File "/develop/nets/subpixel/model.py", line 58, in __init__
        self.build_model()
      File "/develop/nets/subpixel/model.py", line 64, in build_model
        self.up_inputs = tf.image.resize_images(self.inputs, self.image_shape[0], self.image_shape[1], tf.image.ResizeMethod.NEAREST_NEIGHBOR)
      File "/usr/local/anaconda2/envs/subpixel/lib/python2.7/site-packages/tensorflow/python/ops/image_ops.py", line 787, in resize_images
        raise ValueError('\'size\' must be a 1-D Tensor of 2 elements: '
    ValueError: 'size' must be a 1-D Tensor of 2 elements: new_height, new_width
    

    Running the same example from the carpedm20/DCGAN-tensorflow repo works fine for me.

    opened by dribnet 3
  • using deconv2d instead of PS

    Hi, I am trying to compare the performance of deconvolution and subpixel convolution. I changed the generator as follows:

        def generator(self, z):
            self.h0, self.h0_w, self.h0_b = deconv2d(z, [self.batch_size, 32, 32, self.gf_dim], k_h=1, k_w=1, d_h=1, d_w=1, name='g_h0', with_w=True)
            h0 = lrelu(self.h0)
            self.h1, self.h1_w, self.h1_b = deconv2d(h0, [self.batch_size, 32, 32, self.gf_dim], name='g_h1', d_h=1, d_w=1, with_w=True)
            h1 = lrelu(self.h1)
            h2, self.h2_w, self.h2_b = deconv2d(h1, [self.batch_size, 128, 128, 3], d_h=1, d_w=1, name='g_h2', with_w=True)
            return tf.nn.tanh(h2)
    

    But it doesn't work:

    Traceback (most recent call last):
      File "main.py", line 58, in <module>
        tf.app.run()
      File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 30, in run
        sys.exit(main(sys.argv[:1] + flags_passthrough))
      File "main.py", line 42, in main
        dcgan.train(FLAGS)
      File "/home/zehaohuang/subpixel_sr/model.py", line 95, in train
        .minimize(self.g_loss, var_list=self.g_vars)
      File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/optimizer.py", line 196, in minimize
        grad_loss=grad_loss)
      File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/optimizer.py", line 253, in compute_gradients
        colocate_gradients_with_ops=colocate_gradients_with_ops)
      File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gradients.py", line 491, in gradients
        in_grad.set_shape(t_in.get_shape())
      File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 408, in set_shape
        self._shape = self._shape.merge_with(shape)
      File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/tensor_shape.py", line 583, in merge_with
        (self, other))
    ValueError: Shapes (64, 128, 128, 64) and (64, 32, 32, 64) are not compatible

    Is there something wrong in my change?

    BTW, h2 = tf.depth_to_space(h2, 4) works well; the PS function can be replaced by tf.depth_to_space!

    opened by huangzehao 2
  • g_loss is nan for gpu

    The logs seem to show that the g_loss value is always nan on GPU, but strangely it works fine on CPU. Running with CUDA 7.5, cuDNN v4, a GTX 1070, and TensorFlow 0.10.0rc0.

    opened by lingz 1
  • _phase_shift(I, r) error while using batch size 1

    I think there might be a problem with the function _phase_shift(I, r). My code was crashing when an image was resized from 1x1 to 2x2, probably because of some kind of dimension mismatch. I changed it to:

    def _phase_shift(I, r):
        # Helper function with main phase shift operation
        bsize, a, b, c = I.get_shape().as_list()
        X = tf.reshape(I, (bsize, a, b, r, r))
        X = tf.transpose(X, (0, 1, 2, 4, 3))
        X = tf.split(1, a, X)  # a, [bsize, b, r, r]
        X = tf.concat(2, [tf.squeeze(x, squeeze_dims=1) for x in X])  # bsize, b, a*r, r
        X = tf.split(1, b, X)  # b, [bsize, a*r, r]
        X = tf.concat(2, [tf.squeeze(x, squeeze_dims=1) for x in X])  # bsize, a*r, b*r
        return tf.reshape(X, (bsize, a*r, b*r, 1))

    It's just adding squeeze_dims=1. Now it seems to work with batch size = 1.

    opened by PatrykChrabaszcz 1
  • _phase_shift does not generalize to batchsize 1

    The following lines in subpixel.py:

        line 12: X = tf.concat(2, [tf.squeeze(x) for x in X])  # bsize, b, a*r, r
        line 14: X = tf.concat(2, [tf.squeeze(x) for x in X])  # bsize, a*r, b*r

    cause the first dimension to be dropped when the batch size is one. In fact, line 12 causes the dimension drop and line 14 throws an error.

    I propose the following change:

        X = tf.concat(2, [tf.squeeze(x, axis=1) for x in X])  # bsize, b, a*r, r
        X = tf.concat(2, [tf.squeeze(x, axis=1) for x in X])  # bsize, a*r, b*r

    opened by imayukh 1
  • the first paper proposed subpixel convolution?

    Hello, thanks for sharing the work. I want to know which paper first proposed subpixel convolution, as opposed to the efficient subpixel convolution discussed in this work. Please answer me.

    opened by xjhcassy 0
  • Keras SubPixel 3D

    Hi, thank you for the Subpixel 2D implementation in Keras; I am able to run it. I am currently working on 3D convolution and trying to implement Subpixel for a 3D convolution. I was wondering if there is an implementation for 3D convolution, and/or whether there are any specific parameters I need to take into account when implementing Subpixel 3D.

    opened by Sahilnalawade 0