C2-Matching (CVPR2021)

This repository contains the implementation of the following paper:

Robust Reference-based Super-Resolution via C2-Matching
Yuming Jiang, Kelvin C.K. Chan, Xintao Wang, Chen Change Loy, Ziwei Liu
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021

[Paper] [Project Page] [WR-SR Dataset]

Overview

(Figure: overall structure of the C2-Matching framework.)

Dependencies and Installation

  • Python >= 3.7
  • PyTorch >= 1.4
  • CUDA 10.0 or CUDA 10.1
  • GCC 5.4.0
  1. Clone Repo

    git clone git@github.com:yumingj/C2-Matching.git
  2. Create Conda Environment

    conda create --name c2_matching python=3.7
    conda activate c2_matching
  3. Install Dependencies

    cd C2-Matching
    conda install pytorch=1.4.0 torchvision cudatoolkit=10.0 -c pytorch
    pip install mmcv==0.4.4
    pip install -r requirements.txt
  4. Install MMSR and DCNv2

    python setup.py develop
    cd mmsr/models/archs/DCNv2
    python setup.py build develop
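
To verify the installation, you can run a minimal sanity check (a sketch; it assumes a CUDA device is available and that the DCNv2 build exposes a dcn_v2 module, as in the upstream DCNv2 repository):

    # minimal environment sanity check (illustrative)
    import torch

    print(f"PyTorch {torch.__version__}, CUDA available: {torch.cuda.is_available()}")

    try:
        # the DCNv2 build is assumed to expose a `dcn_v2` module
        from dcn_v2 import DCN  # noqa: F401
        print("DCNv2 extension imported successfully.")
    except ImportError as err:
        print(f"DCNv2 import failed; re-run `python setup.py build develop`: {err}")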

Dataset Preparation

Please refer to Datasets.md for pre-processing and more details.

Get Started

Pretrained Models

Download the pretrained models from this link and put them under the experiments/pretrained_models folder.

Test

We provide quick test code with the pretrained model.

  1. Modify the dataset and pretrained model paths in the following YAML files for configuration.

    ./options/test/test_C2_matching.yml
    ./options/test/test_C2_matching_mse.yml
  2. Run test code for models trained using GAN loss.

    python mmsr/test.py -opt "options/test/test_C2_matching.yml"

    Check out the results in ./results.

  3. Run test code for models trained using only reconstruction loss.

    python mmsr/test.py -opt "options/test/test_C2_matching_mse.yml"

    Check out the results in ./results. A quick PSNR check on the outputs is sketched below.
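
For a quick numerical check of the outputs, you can compute PSNR over paired result and ground-truth images (a sketch; the directory paths are placeholders, and official numbers should follow the evaluation protocol in the paper):

    # quick PSNR check between restored outputs and ground-truth images
    # (illustrative; adapt the placeholder paths to your ./results layout)
    import cv2
    import numpy as np
    from pathlib import Path

    def psnr(img1, img2, max_val=255.0):
        mse = np.mean((img1.astype(np.float64) - img2.astype(np.float64)) ** 2)
        return float("inf") if mse == 0 else 10 * np.log10(max_val ** 2 / mse)

    result_dir = Path("results/restored")  # placeholder path
    gt_dir = Path("datasets/CUFED5/gt")    # placeholder path
    scores = []
    for res_path in sorted(result_dir.glob("*.png")):
        out = cv2.imread(str(res_path))
        gt = cv2.imread(str(gt_dir / res_path.name))
        if out is not None and gt is not None and out.shape == gt.shape:
            scores.append(psnr(out, gt))
    print(f"Average PSNR over {len(scores)} images: {np.mean(scores):.2f} dB")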

Train

All files generated during training, e.g., log messages, checkpoints, and snapshots, will be saved to the ./experiments and ./tb_logger directories.

  1. Modify the dataset paths in the following YAML files for configuration.

    ./options/train/stage1_teacher_contras_network.yml
    ./options/train/stage2_student_contras_network.yml
    ./options/train/stage3_restoration_gan.yml
  2. Stage 1: Train teacher contrastive network.

    python mmsr/train.py -opt "options/train/stage1_teacher_contras_network.yml"
  3. Stage 2: Train student contrastive network.

    # add the path to *pretrain_model_teacher* in the following yaml
    # the path to *pretrain_model_teacher* is the model obtained in stage1
    ./options/train/stage2_student_contras_network.yml
    python mmsr/train.py -opt "options/train/stage2_student_contras_network.yml"
  4. Stage 3: Train restoration network.

    # add the path to *pretrain_model_feature_extractor* in the following yaml
    # the path to *pretrain_model_feature_extractor* is the model obtained in stage2
    ./options/train/stage3_restoration_gan.yml
    python mmsr/train.py -opt "options/train/stage3_restoration_gan.yml"
    
    # if you wish to train the restoration network with only mse loss
    # prepare the dataset path and pretrained model path in the following yaml
    ./options/train/stage3_restoration_mse.yml
    python mmsr/train.py -opt "options/train/stage3_restoration_mse.yml"

Visual Results

For more results on the benchmarks, you can directly download our C2-Matching results from here.

Webly-Reference SR Dataset

Check out our Webly-Reference SR (WR-SR) dataset through this link! We also provide baseline results for a quick comparison in this link.

The Webly-Reference SR dataset is a test set for evaluating Ref-SR methods. It has the following advantages:

  • Collected in a more realistic way: reference images are retrieved via Google Image search.
  • More diverse than previous datasets.

Citation

If you find our repo useful for your research, please consider citing our paper:

@InProceedings{jiang2021c2matching,
   author = {Yuming Jiang and Kelvin C.K. Chan and Xintao Wang and Chen Change Loy and Ziwei Liu},
   title = {Robust Reference-based Super-Resolution via C2-Matching},
   booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
   year = {2021}
}

License and Acknowledgement

This project is open-sourced under the MIT license. The code framework is mainly modified from BasicSR and MMSR (now reorganized as MMEditing). Please refer to the original repos for more usage and documentation.

Contact

If you have any questions, please feel free to contact us via [email protected].

Comments
  • RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0.

    RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 332 and 375 in dimension 3 at /tmp/pip-req-build-rz55_vgo/aten/src/TH/generic/THTensor.cpp:612

    Hi, I am working on Stage 3 (train restoration network) and ran into this error. How can I solve it?
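
    For context, this error is raised when tensors of different spatial sizes are concatenated into a single batch. A minimal reproduction and a crop-based workaround (illustrative only, not the repository's fix):

        # reproduce the size-mismatch error, then work around it by cropping
        import torch

        a = torch.zeros(1, 3, 160, 332)
        b = torch.zeros(1, 3, 160, 375)
        try:
            torch.cat([a, b], dim=0)  # fails: widths 332 vs 375 differ in dimension 3
        except RuntimeError as err:
            print(err)

        w = min(a.shape[3], b.shape[3])  # crop both tensors to a common width
        batch = torch.cat([a[..., :w], b[..., :w]], dim=0)
        print(batch.shape)  # torch.Size([2, 3, 160, 332])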

    opened by chenmz430 8
  • Failed to replicate the result reported in the paper.

    Thanks for sharing your great work. It is an innovative step to construct GT correspondences through perspective transformations and enable explicit supervision. I added some code to test the performance on Sun80, Urban100, and Manga109; there are no other changes to the released code. The first row is the result reported in the paper; the second row is tested from the released checkpoint; in the third row, we retrain the model at Stage 3. CUFED shows a significant gain, but all the other datasets drop. [results table image]

    opened by wdmwhh 7
  • Restoration Module

    Where is the code for the super-resolution restoration module? Which file contains the restoration module? Can I use another network structure for super-resolution? Looking forward to your reply.

    opened by cheun726 4
  • Why resample both original and reference images?

    I was looking through ref_cufed_dataset.py and saw that the dataloader downsamples and then upsamples both the original and the reference images:

            # downsample image using PIL bicubic kernel
            lq_h, lq_w = gt_h // scale, gt_w // scale
            
            img_in_lq = img_in_pil.resize((lq_w, lq_h), Image.BICUBIC)
            img_ref_lq = img_ref_pil.resize((lq_w, lq_h), Image.BICUBIC)
            
            # bicubic upsample LR
            img_in_up = img_in_lq.resize((gt_w, gt_h), Image.BICUBIC)
            img_ref_up = img_ref_lq.resize((gt_w, gt_h), Image.BICUBIC)
    

    I assume we downsample and upsample the images to make them low-resolution, but why do it for the reference image as well? Shouldn't the reference image be high-resolution?
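
    The resize-down/resize-up pair acts as a bicubic degradation: it strips high-frequency detail while keeping the original resolution. A small self-contained illustration (generic PIL/NumPy, mirroring the snippet above):

        # show that resize-down followed by resize-up removes high frequencies
        import numpy as np
        from PIL import Image

        rng = np.random.default_rng(0)
        img = Image.fromarray(rng.integers(0, 256, (160, 160, 3), dtype=np.uint8))

        scale = 4
        lq = img.resize((160 // scale, 160 // scale), Image.BICUBIC)
        up = lq.resize((160, 160), Image.BICUBIC)

        residual = np.asarray(img, np.float32) - np.asarray(up, np.float32)
        print(f"mean absolute detail removed: {np.abs(residual).mean():.2f} (8-bit scale)")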

    opened by crameth 4
  • scale

    1. Can the scale only be 4? Can it be adjusted to other scales? If so, which code needs to be modified, and what should be paid attention to?

    2. In the test phase, can I skip downsampling the input? I adjusted the relevant code, but it reports an error...

    I look forward to your reply!

    opened by hbw945 3
  • Question about Correspondence Network training

    In ContrasDataset, you resize the input (opt['gt_size'] is 160 in the training config):

            gt_h, gt_w = self.opt['gt_size'], self.opt['gt_size']
            # in case that some images may not have the same shape as gt_size
            img_in = mmcv.imresize(img_in, (gt_w, gt_h), interpolation='bicubic')
    

    However, in ContrasValDataset, there is no such resize:

            gt_h, gt_w, _ = img_in.shape
    
            H_inverse = self.transform_matrices[index]
            img_in_transformed = cv2.warpPerspective(
                src=img_in, M=H_inverse, dsize=(gt_w, gt_h))
    

    Therefore, my question is: why don't you train the correspondence network on the original HR (GT) images?

    opened by wdmwhh 2
  • No scheduler.step() call

    Hi, thanks for your interesting work. I cannot find any scheduler.step() call in your code, even though there are several scheduler setups (in the training logs, the learning rate is constant throughout training). Is this a bug, or is it intentional?
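
    For reference, a scheduler step usually sits at the end of each training iteration in a plain PyTorch loop (a generic sketch, not this repository's training code):

        # generic PyTorch pattern: step the LR scheduler once per iteration
        import torch

        model = torch.nn.Linear(8, 1)
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
        scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1000, gamma=0.5)

        for step in range(3000):
            loss = model(torch.randn(4, 8)).pow(2).mean()
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            scheduler.step()  # without this call, the learning rate never decays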

    opened by P0lyFish 2
  • Where the experiment folder has been set?

    Hi there,

    Thanks for your great work! Where did you specify that the model checkpoints are saved in the "experiments" folder? Could you please point me to that statement?

    Thanks

    opened by bia006 0
  • Training ContrasValDataset

    In training stage 1 and stage 2, why does ContrasValDataset generate the ref images from the input images by resizing and transformation, instead of reading the ref images directly?

    img_path = self.paths[index]['in_path']
    img_bytes = self.file_client.get(img_path, 'in')
    img_in = mmcv.imfrombytes(img_bytes).astype(np.float32) / 255.

    gt_h, gt_w = self.opt['gt_size'], self.opt['gt_size']
    # in case that some images may not have the same shape as gt_size
    img_in = mmcv.imresize(img_in, (gt_w, gt_h), interpolation='bicubic')

    # augmentation: flip, rotation
    img_in = augment([img_in], self.opt['use_flip'], self.opt['use_rot'])

    # image pair generation
    img_in_transformed, H, H_inverse = image_pair_generation(
        img_in, (0, 10), 160)

    return {
        'img_in': img_in,
        'img_in_up': img_in_up,
        'img_ref': img_in_transformed,
        'img_ref_up': img_in_transformed_up,
        'transformed_coordinate': transformed_coordinate
    }
    
    opened by Winnie202 0
  • about training setting

    Excuse me, does the gt_size need to change according to the size of the ref image during training? For example, if my ref image size is 720 × 1280, do I need to change gt_size to 720? In addition, how many epochs are appropriate for stage 1, stage 2, and stage 3 of training?

    opened by Winnie202 0
  • about input size

    Excuse me, I want to train and test on my own dataset. If I super-resolve 180 × 320 images to 720 × 1080, what size should the input and ref images be? Thanks.

    opened by Winnie202 0
  • Does it support distributed training?

    I would like to ask the author whether the c2_matching code supports distributed training, and if so, could you share the command, such as python -m torch.distributed.launch --nproc_per_node=2 --launcher pytorch? Thank you!

    opened by ModestTony 1