Overview

DCSR: Dual Camera Super-Resolution

Implementation for our ICCV 2021 oral paper: Dual-Camera Super-Resolution with Aligned Attention Modules

paper | project website | dataset | demo video | results on CUFED5

Introduction

We present a novel approach to reference-based super-resolution (RefSR), with a focus on real-world dual-camera super-resolution (DCSR).

Results

4X SR results on the CUFED5 test set can be found at this link.

More 2X SR results on the CameraFusion dataset can be found on our project website.

Setup

Installation

git clone https://github.com/Tengfei-Wang/DualCameraSR.git
cd DualCameraSR

Environment

The environment can be set up with Anaconda:

conda create -n DCSR python=3.7
conda activate DCSR
pip install -r requirements.txt

Dataset

Download our CameraFusion dataset from this link. The dataset currently consists of 143 pairs of telephoto and wide-angle images in 4K resolution, captured by smartphone dual cameras.

mkdir data
cd ./data
unzip CameraFusion.zip

Quick Start

The pretrained models are provided in ./experiments/pretrain. For a quick test, run the scripts:

# For 4K test (with ground-truth High-Resolution images):
sh test.sh

# For 8K test (without SRA):
sh test_8k.sh

# For 8K test (with SRA):
sh test_8k_SRA.sh

Training

To train the DCSR model on CameraFusion, run:

sh train.sh

The trained model should perform well on the 4K test, but may suffer performance degradation on the 8K test.

After the regular training, we can use Self-supervised Real-image Adaptation (SRA) to finetune the trained model for real-world 8K image applications:

sh train_SRA.sh

Citation

If you find this work useful for your research, please cite:

@InProceedings{wang2021DCSR,
author = {Wang, Tengfei and Xie, Jiaxin and Sun, Wenxiu and Yan, Qiong and Chen, Qifeng},
title = {Dual-Camera Super-Resolution with Aligned Attention Modules},
booktitle = {International Conference on Computer Vision (ICCV)},
year = {2021}
}

Acknowledgement

We thank the authors of EDSR, CSNLN, TTSR and style-swap for sharing their codes.

Comments
  • RuntimeError: CUDA error: device-side assert triggered

    CUDA error happens occasionally during training, how can I fix it?

    here is the training log and stacktrace.

    (torch) ➜  DCSR git:(master) ✗ sh train.sh                                       
    Making model...
    Total params: 3.19M
    Preparing loss function:
    use_vgg: True
    use_vgg: True
    1.000 * L1
    0.050 * contextual_ref
    0.010 * contextual_hr
    /home/laizeqiang/miniconda3/envs/torch/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:417: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`.
      "please use `get_last_lr()`.", UserWarning)
    [Epoch 1]       Learning rate: 1.00e-4
    /home/laizeqiang/miniconda3/envs/torch/lib/python3.6/site-packages/torch/nn/functional.py:3063: UserWarning: Default upsampling behavior when mode=bicubic is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
      "See the documentation of nn.Upsample for details.".format(mode))
    /home/laizeqiang/miniconda3/envs/torch/lib/python3.6/site-packages/torch/nn/functional.py:3103: UserWarning: The default behavior for interpolate/upsample with float scale_factor changed in 1.6.0 to align with other frameworks/libraries, and now uses scale_factor directly, instead of relying on the computed output size. If you wish to restore the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details. 
      warnings.warn("The default behavior for interpolate/upsample with float scale_factor changed "
    [400/39300]     [L1: 0.0565][contextual_ref: 0.1948][contextual_hr: 0.0653][Total: 0.3167]      49.5+1.9s
    [800/39300]     [L1: 0.0414][contextual_ref: 0.1722][contextual_hr: 0.0554][Total: 0.2690]      49.1+0.1s
    [1200/39300]    [L1: 0.0338][contextual_ref: 0.1618][contextual_hr: 0.0507][Total: 0.2462]      49.5+0.1s
    [1600/39300]    [L1: 0.0308][contextual_ref: 0.1564][contextual_hr: 0.0480][Total: 0.2352]      49.7+0.1s
    [2000/39300]    [L1: 0.0274][contextual_ref: 0.1526][contextual_hr: 0.0461][Total: 0.2260]      49.9+0.1s
    [2400/39300]    [L1: 0.0253][contextual_ref: 0.1497][contextual_hr: 0.0449][Total: 0.2199]      49.9+0.1s
    [2800/39300]    [L1: 0.0236][contextual_ref: 0.1473][contextual_hr: 0.0437][Total: 0.2146]      49.8+0.1s
    [3200/39300]    [L1: 0.0224][contextual_ref: 0.1453][contextual_hr: 0.0428][Total: 0.2105]      49.8+0.1s
    /opt/conda/conda-bld/pytorch_1607370120218/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:115: operator(): block: [93,0,0], thread: [32,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
    (the same assertion line repeats for threads [33,0,0] through [63,0,0])
    Traceback (most recent call last):
      File "main.py", line 31, in <module>
        main()
      File "main.py", line 25, in main
        trainer.train()
      File "/media/exthdd/laizeqiang/lzq/projects/ref-sr/related_work/DCSR/trainer.py", line 49, in train
        sr = self.model(lr, ref)
      File "/home/laizeqiang/miniconda3/envs/torch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/media/exthdd/laizeqiang/lzq/projects/ref-sr/related_work/DCSR/model/__init__.py", line 100, in forward
        return self.model(x, ref,False)
      File "/home/laizeqiang/miniconda3/envs/torch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/media/exthdd/laizeqiang/lzq/projects/ref-sr/related_work/DCSR/model/dcsr.py", line 117, in forward
        ref_features_aligned = self.aa3(input, ref_p, index_map, ref_features1)
      File "/home/laizeqiang/miniconda3/envs/torch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/media/exthdd/laizeqiang/lzq/projects/ref-sr/related_work/DCSR/model/attention.py", line 115, in forward
        warpped_features = self.align(warpped_features,lr,warpped_ref)        
      File "/home/laizeqiang/miniconda3/envs/torch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/media/exthdd/laizeqiang/lzq/projects/ref-sr/related_work/DCSR/model/alignment.py", line 55, in forward
        p = self._get_p(affine, dtype)
      File "/media/exthdd/laizeqiang/lzq/projects/ref-sr/related_work/DCSR/model/alignment.py", line 124, in _get_p
        p_n = self._get_p_n(N, dtype)
      File "/media/exthdd/laizeqiang/lzq/projects/ref-sr/related_work/DCSR/model/alignment.py", line 102, in _get_p_n
        p_n = p_n.view(1, 2*N, 1, 1).type(dtype)
    RuntimeError: CUDA error: device-side assert triggered
    
    opened by Zeqiang-Lai 10
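    A note on the assertion above: `idx_dim >= 0 && idx_dim < index_size` fires when an index tensor handed to a CUDA scatter/gather kernel contains out-of-range values; running with CUDA_LAUNCH_BLOCKING=1 (a general PyTorch debugging trick) makes the stack trace point at the offending call. Below is a minimal, hypothetical sketch of the failure mode and a defensive clamp; it is not the repo's code.

    # Hypothetical reproduction of an out-of-bounds gather, plus a defensive fix.
    import torch

    device = "cuda" if torch.cuda.is_available() else "cpu"
    src = torch.randn(1, 4, 8, device=device)
    idx = torch.randint(0, 12, (1, 4, 8), device=device)  # values may exceed size 8
    idx = idx.clamp(0, src.shape[-1] - 1)                  # clamp before gathering
    out = torch.gather(src, dim=-1, index=idx)             # safe after the clamp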
  • Question about 4X SR on CUFED5

    Hi, thanks for your tips on 4X SR! I am working hard to modify the released code for 4X SR on CUFED5, because your results on CUFED5 are quite appealing. I have a few questions :)

    Datasets:

    1. For 2X SR, the scale of the LR images is the same as that of the ref images (2016X1512). For 4X SR, you said the scale of the ref image is 4X that of the LR images. Does that mean only downsampling the input images by 4X? Did you mean that the I_Ref patch is 4X the size of the I_LR patch? In your paper, you said the resolutions of HR and Ref are about 300×500. Also, which downsampling method do you use to process the input, as in Figure 5 of the main paper?

    2. Does the CUFED5 dataset refer to the training set with 11871 photos and the test set with 126 photos?

    Modify:

    1. For 4X SR, I assume I need to add extra reference features extracted from I_Ref, plus an extra attention module and fusion module, on top of the released code.
    2. When modifying the VGG19 layer from 7 to 11, it has 2 MaxPool2d layers. Maybe it is the input that confuses me. I feel I still need a few tips to modify it successfully.

    Thanks.

    opened by Brightlcz 7
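    A note on question 1 above, assuming the common RefSR protocol (an assumption, not the authors' confirmed pipeline): the LR input is a bicubic 4X downsample of the HR image, while the reference keeps its original resolution, so the ref patch ends up at 4X the LR scale. A minimal sketch with hypothetical tensors:

    # Hedged sketch of 4X input preparation (assumed protocol, hypothetical sizes).
    import torch
    import torch.nn.functional as F

    hr = torch.rand(1, 3, 512, 512)  # hypothetical HR image
    lr = F.interpolate(hr, scale_factor=0.25, mode='bicubic', align_corners=False)
    print(lr.shape)  # torch.Size([1, 3, 128, 128]); ref stays at full resolution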
  • About 4x start

    I tried to modify the network for 4X, but I could not get it to work. The error message says: index 【】 is out of bounds for dimension 2 with size 【】. I don't know how to produce the correct result.

    opened by Lkinyuu 5
  • question on dcsr.py

    with torch.no_grad():
        if coarse:
            B = lr.shape[0]
            # coarse stage: downsample ref by 1/16 and lr by 1/8 before patch selection
            ref_ = F.interpolate(ref, scale_factor=1/16, mode='bicubic')
            lr_ = F.interpolate(lr, scale_factor=1/8, mode='bicubic')

            i, P, r = self.select(lr_, ref_)

            # crop the selected reference patch, clamping the window to the image bounds
            for j in range(B):
                ref_p = ref[:, :, np.maximum((2*8*(i[j]//P)).cpu(), 0):np.minimum((2*8*(i[j]//P)+2*lr.shape[2]).cpu(), ref.shape[2]), np.maximum((2*8*(i[j]%P)).cpu(), 0):np.minimum((2*8*(i[j]%P)+2*lr.shape[3]).cpu(), ref.shape[3])]
        else:
            ref_p = ref
    
    opened by Lkinyuu 5
  • cannot reproduce 8k result as shown in the paper

    I ran the scripts provided in the repo, but cannot reproduce the 8K results shown in the paper. Besides, the colors of the patches in the 8K SRA result are inconsistent.

    sh test_8k.sh
    sh test_8k_SRA.sh
    

    Result of test_8k: [image]

    Result of test_8k_SRA: [image]

    Results from the paper: [image]

    opened by Zeqiang-Lai 5
  • customized dataset

    Hi, thanks for sharing your code and the excellent ideas. I am working on a large-factor RefSR project (8X or 16X SR), and I found your results on CUFED5 and your proposed dataset quite promising. My question is: how can I modify the released code for my own dataset? Looking forward to your reply. Cheers, Junhe Zhang

    opened by Junhe10 5
  • Question on alignment.py

    Hi,

    I have a question on alignment.py, in which the affine transformation is applied patch-wise. I am not an expert on deformable convolution, which I believe the code is based on, so please be patient with my question ;)

    1. In def _get_p_n(...), why do you subtract 0.5 from and add 0.6 to the range? Is it for the even-sized kernel?

    2. In def _get_p(...), why do you add (self.kernel_size-1)//2+0.5 to the affine-transformed coordinates? Shouldn't it be (self.kernel_size-1)//2-0.5 to keep the coordinates in the range 0 to h-1?

    opened by codeslake 4
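    A note on question 1 above, as a guess at the intent (an illustration, not alignment.py's exact code): for an even kernel size such as 2, shifting an integer range by 0.5 centers the sampling offsets on the patch, and using +0.6 instead of +0.5 as the exclusive upper bound keeps the last offset from being dropped by floating-point rounding:

    # Hedged illustration: centered offsets for an even kernel (k=2).
    import torch

    offsets = torch.arange(-0.5, 0.6, 1.0)  # tensor([-0.5000, 0.5000])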
  • Question on network architecture

    Hi,

    I've gone through the code and have some questions about the network architecture described in the paper and in the code.

    1. In Figure 2 of the main paper, the network has 4 Aligned Attention (aa) modules, but the code has only 3. Is there a performance decrease when using 4 aa modules?

    2. For the aa modules defined in dcsr.py, the scale and align arguments are different based on self.flag_8k.

      • Is it appropriate to assume that the scale values are higher when self.flag_8k==True for better feature matching between features of higher resolution?
      • For the aa2 module, why does it get align==False? I can see that alignment is not necessary when scale==1, as patches become 1*1 tensors. However, for aa2 when self.flag_8k==True, why is align set to False?
    3. Would you please elaborate on the intuition behind coarse==True? I have not checked the patch coordinates used for the evaluation in detail, but I assume coarse is set to True when the LR patch is outside the FoV of the ref patch. When coarse==True, the DCSR model downsamples the ref and LR images by factors of 1/16 and 1/8, respectively. Is this for roughly matching the structure, as those patches might not share a common context?

    Thanks in advance,

    opened by codeslake 4
  • Sorry I'm asking stupid questions again

    Do I need to modify the dataset code? I guess this part of the code only applies to CameraFusion. I would like to know how dataset.py differs between DCSR (on CUFED5) and other RefSR methods, because I originally wanted to directly apply the dataset code of other methods, but failed. @jiaxinxie97 @Tengfei-Wang

    opened by Lkinyuu 3
  • About patch_select

    Hi, @Tengfei-Wang

    First of all, I want to say thank you and ask you again.

    The released code only trains on the overlapped FoV of CameraFusion.

    So when training on CUFED5, I changed L_x to L_x = random.randrange(0, L_w - L_p + 1 - 15) to train on the whole image.

    When testing, I directly use the patch at the same corresponding position in the reference image as patch_ref, because I think the input image and the reference image are very similar.

    So in the second for loop, I removed those lines, selected the pixel values of patch_ref[] at the position corresponding to 4X patch_LR, and set coarse to False.

    But the result is not good. I feel my choice of patch_ref is too simple. What can I do to achieve the results in the paper when training on CUFED5?

    Looking forward to your reply!

    opened by Brightlcz 2
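    A note on the "same corresponding position" crop described above, as a minimal sketch with hypothetical shapes (not the repo's code): since the reference is at 4X the LR scale, the ref patch is taken at 4X the LR patch's coordinates.

    # Hedged sketch (hypothetical sizes): crop ref at 4X the LR patch position.
    import torch

    ref = torch.rand(1, 3, 640, 640)  # hypothetical reference image, 4X the LR scale
    y, x, p = 10, 20, 40              # hypothetical LR patch top-left corner and size
    patch_ref = ref[:, :, 4 * y : 4 * (y + p), 4 * x : 4 * (x + p)]  # 160x160 crop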
  • Question on SRA Loss implementation

    Hi,

    In Eq. 7 of the main paper, the images in the L1 term are not Gaussian-filtered. However, in the code it seems that the Gaussian is applied to the images for the L1 loss regardless. Is this intended?

    Thanks in advance!

    opened by codeslake 2
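    A note on the question above: under our reading of the comment (an assumption about the code's behavior, not a confirmed excerpt), the L1 term is computed on Gaussian-blurred images. A minimal sketch of such a loss:

    # Hedged sketch (not the repo's implementation): L1 on Gaussian-blurred images.
    import torch
    import torch.nn.functional as F

    def gaussian_kernel(ks=11, sigma=3.0):
        ax = torch.arange(ks, dtype=torch.float32) - (ks - 1) / 2
        g = torch.exp(-ax ** 2 / (2 * sigma ** 2))
        k2d = torch.outer(g, g)
        return (k2d / k2d.sum()).view(1, 1, ks, ks).repeat(3, 1, 1, 1)  # depthwise

    def blurred_l1(sr, target, ks=11, sigma=3.0):
        k = gaussian_kernel(ks, sigma).to(sr)
        blur = lambda x: F.conv2d(x, k, padding=ks // 2, groups=3)
        return F.l1_loss(blur(sr), blur(target))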
  • About CameraFusion dataset

    Hi, thank you for your code and dataset. I wonder how you collected the corresponding telephotos for the wide-angle photos. Did you manually zoom in, or do you have other techniques? If it is from zooming in, what is the zoom-in ratio?

    opened by notorious-eric 4
Owner
Tengfei Wang
Ph.D. candidate @ HKUST / Computer Vision