MASA-SR

Official PyTorch implementation of our CVPR 2021 paper MASA-SR: Matching Acceleration and Spatial Adaptation for Reference-Based Image Super-Resolution.

Dependencies

  • python >= 3.5
  • pytorch >= 1.1.0
  • torchvision >= 0.4.0
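
A quick way to install the Python dependencies is via pip; the exact builds (CUDA vs. CPU) depend on your environment, so treat this as a sketch rather than a prescribed setup:

    pip install "torch>=1.1.0" "torchvision>=0.4.0"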

Prepare Dataset

  1. Download the CUFED train set and the CUFED test set (CUFED5)
  2. Place the datasets in this structure:
    CUFED
    ├── train
    │   ├── input
    │   └── ref 
    └── test
        └── CUFED5  
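
As an optional sanity check, listing the folders should succeed once the data is in place (./CUFED below stands in for wherever you put the dataset):

    ls ./CUFED/train/input ./CUFED/train/ref ./CUFED/test/CUFED5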
    

Get Started

  1. Clone this repo
    git clone https://github.com/Jia-Research-Lab/MASA-SR.git
    cd MASA-SR
    
  2. Download the dataset. Modify the argument --data_root in test.py and train.py to match your data path (a command-line sketch follows this list).
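
If --data_root is exposed via argparse (an assumption based on the flag syntax), it can also be overridden on the command line instead of editing the scripts; the path here is only an example:

    python test.py --data_root ./CUFED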

Evaluation

  1. Download the pre-trained models and place them into the pretrained_weights/ folder

    • Pre-trained models can be downloaded from Google Drive
      • masa_rec.pth: trained with only reconstruction loss
      • masa.pth: trained with all losses
  2. Run test.sh. See test.sh for more details (if you are running on a CPU, add --gpu_ids -1 to the command, as sketched after this list)

    sh test.sh
    
  3. The testing results are written to the test_results/ folder
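
For a CPU-only run, the GPU flag from step 2 would be appended to the python command inside test.sh; this is a sketch, since the exact arguments used by test.sh may differ:

    python test.py --gpu_ids -1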

Training

  1. First, train masa-rec with only the reconstruction loss.
    python train.py --use_tb_logger --data_augmentation --max_iter 250 --loss_l1 --name train_masa_rec
    
  2. After obtaining masa-rec, train masa with all losses, initializing from the pretrained masa-rec.
    python train.py --use_tb_logger --max_iter 50 --loss_l1 --loss_adv --loss_perceptual --name train_masa_gan --resume ./weights/train_masa_rec/snapshot/net_best.pth --resume_optim ./weights/train_masa_rec/snapshot/optimizer_G_best.pth --resume_scheduler ./weights/train_masa_rec/snapshot/scheduler_best.pth
    
  3. The training results are written to the weights/ folder (see below for monitoring progress with TensorBoard)
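
Since training is launched with --use_tb_logger, progress can be followed with TensorBoard. The log location below is an assumption (event files may land under weights/ or a separate logger folder depending on the code), so point --logdir at wherever they actually appear:

    # assumed log path; adjust to the actual event-file directory
    tensorboard --logdir ./weights/train_masa_rec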

Update

[2021/06/08] Fixed a bug in evaluation, retrained the models, and updated the released checkpoints; their PSNR differs slightly (±0.03 dB) from the values reported in the paper.

Acknowledgement

We borrow some code from TTSR and BasicSR. We thank the authors for their great work.

Citation

Please consider citing our paper if it is useful for your research.

@inproceedings{lu2021masasr,
    title={MASA-SR: Matching Acceleration and Spatial Adaptation for Reference-Based Image Super-Resolution},
    author={Liying Lu and Wenbo Li and Xin Tao and Jiangbo Lu and Jiaya Jia},
    booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    year={2021},
}
