Activating More Pixels in Image Super-Resolution Transformer

Related tags

Deep Learning HAT
Overview

HAT [Paper Link]

Activating More Pixels in Image Super-Resolution Transformer

Xiangyu Chen, Xintao Wang, Jiantao Zhou and Chao Dong

BibTeX

@article{chen2022activating,
  title={Activating More Pixels in Image Super-Resolution Transformer},
  author={Chen, Xiangyu and Wang, Xintao and Zhou, Jiantao and Dong, Chao},
  journal={arXiv preprint arXiv:2205.04437},
  year={2022}
}

Environment

Installation

pip install -r requirements.txt
python setup.py develop

How To Test

  • Refer to ./options/test for the configuration file of the model to be tested, and prepare the testing data and pretrained model.
  • The pretrained models are available at Google Drive or Baidu Netdisk (access code: qyrl).
  • Then run the following command (taking HAT_SRx4_ImageNet-pretrain.pth as an example):
python hat/test.py -opt options/test/HAT_SRx4_ImageNet-pretrain.yml

The testing results will be saved in the ./results folder.

Results

The inference results on benchmark datasets are available at Google Drive or Baidu Netdisk (access code: 63p5).

This repo is still being updated. The training code will be released soon.

Comments
  • Add Replicate demo and API

    Hey @Xiangtaokong ! 👋

    This pull request makes it possible to run your model inside a Docker environment, which makes it easier for other people to run it. We're using an open source tool called Cog to make this process easier.

    This also means we can make a web page where other people can run your model! We have added HAT_SRx4_ImageNet for SingleImageDataset so people can easily test their own input images; view it here: https://replicate.com/cjwbw/hat

    Replicate also has an API, so people can easily run your model from their code:

    import replicate
    model = replicate.models.get("cjwbw/hat")
    output = model.predict(image="...")
    

    You are more than welcome to modify the Replicate page (e.g. the Example Gallery); let me know and I can transfer ownership to your account.

    In case you're wondering who I am, I'm from Replicate, where we're trying to make machine learning reproducible. We got frustrated that we couldn't run all the really interesting ML work being done. So, we're going round implementing models we like. 😊

    opened by chenxwh 7
  • I don't understand

    Every image I've tried to upscale with HAT just seems to be stretched to a larger, even blurrier size. I've tried all sorts of sizes, ranging from 64x64 to 1024x1024. It seems to just click-and-drag the image larger without actually enhancing anything.

    Am I doing something wrong? I'd love to be able to use this project, but right now it's very confusing to me. :/

    opened by dillfrescott 5
  • How to get the same result on multiple runs?

    Hi, thanks for your sharing and contribution!

    I tried to reproduce the same training loss on my custom dataset across runs, but it didn't work.

    So, I wonder whether HAT can return exactly the same training loss on repeated runs.

    Any help would be much appreciated, thanks.

    My environments

    • windows 10
    • python : 3.7.13
    • pytorch : 1.12.1+cu113
    • torchvision : 0.13.1+cu113
    • cuda : 11.3
    • cudnn : 8.4.1
    • basicsr : both 1.3.4.9 and 1.4.2 (latest version)

    Methods I tried

    • use_hflip = False
    • use_rot = False
    • use_shuffle = False
    • num_worker_per_gpu = 0
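
    Beyond the options above, exact repeatability in PyTorch usually also requires fixing every random seed and forcing deterministic kernels; a general sketch is shown below (not HAT-specific, and even then bit-exact results across different hardware or library versions are not guaranteed; BasicSR-style train configs also expose a manual_seed option):

    import os
    import random

    import numpy as np
    import torch

    def set_deterministic(seed: int = 0) -> None:
        # Seed every RNG the data pipeline and model touch.
        random.seed(seed)
        np.random.seed(seed)
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
        # Trade speed for reproducibility in cuDNN.
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False
        # Some CUDA ops additionally need this for deterministic behavior.
        os.environ['CUBLAS_WORKSPACE_CONFIG'] = ':4096:8'
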
    opened by Dongwoo-Im 5
  • out of memory on testing

    I set the batch size to 2 during training and it works fine; I am using just one V100 16G GPU. However, when I tried to test, I got a CUDA out-of-memory error:

    2022-09-21 08:54:33,454 INFO: Model [HATModel] is created.
    2022-09-21 08:54:33,455 INFO: Testing open...
    Traceback (most recent call last):
      File "/mnt/disk2/HAT/hat/test.py", line 11, in <module>
        test_pipeline(root_path)
      File "/home/ubuntu/venv/lib/python3.10/site-packages/basicsr/test.py", line 40, in test_pipeline
        model.validation(test_loader, current_iter=opt['name'], tb_logger=None, save_img=opt['val']['save_img'])
      File "/home/ubuntu/venv/lib/python3.10/site-packages/basicsr/models/base_model.py", line 48, in validation
        self.nondist_validation(dataloader, current_iter, tb_logger, save_img)
      File "/home/ubuntu/venv/lib/python3.10/site-packages/basicsr/models/sr_model.py", line 157, in nondist_validation
        self.test()
      File "/mnt/disk2/HAT/hat/models/hat_model.py", line 29, in test
        self.output = self.net_g(img)
      File "/home/ubuntu/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
        return forward_call(*input, **kwargs)
      File "/mnt/disk2/HAT/hat/archs/hat_arch.py", line 978, in forward
        x = self.conv_after_body(self.forward_features(x)) + x
      File "/mnt/disk2/HAT/hat/archs/hat_arch.py", line 964, in forward_features
        x = layer(x, x_size, params)
      File "/home/ubuntu/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
        return forward_call(*input, **kwargs)
      File "/mnt/disk2/HAT/hat/archs/hat_arch.py", line 619, in forward
        return self.patch_embed(self.conv(self.patch_unembed(self.residual_group(x, x_size, params), x_size))) + x
      File "/home/ubuntu/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
        return forward_call(*input, **kwargs)
      File "/mnt/disk2/HAT/hat/archs/hat_arch.py", line 530, in forward
        x = self.overlap_attn(x, x_size, params['rpi_oca'])
      File "/home/ubuntu/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
        return forward_call(*input, **kwargs)
      File "/mnt/disk2/HAT/hat/archs/hat_arch.py", line 425, in forward
        attn = attn + relative_position_bias.unsqueeze(0)
    RuntimeError: CUDA out of memory. Tried to allocate 3.38 GiB (GPU 0; 15.78 GiB total capacity; 6.44 GiB already allocated; 1.18 GiB free; 13.45 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

    How can I solve it?
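
    For what it's worth, the usual workaround (and what the "Tile n/n" logs mentioned in another issue suggest the test configs already support) is tiled inference: split the LR input into overlapping tiles, super-resolve each tile, and stitch the outputs, so peak memory is bounded by the tile size. A generic sketch of the idea; net_g, the scale factor, and the tile sizes are assumptions, not HAT's exact implementation:

    import torch

    def tiled_sr(net_g, lr, scale=4, tile=256, overlap=32):
        # lr: 1xCxHxW low-resolution tensor; peak memory scales with `tile`.
        _, c, h, w = lr.shape
        out = torch.zeros(1, c, h * scale, w * scale, device=lr.device)
        stride = tile - overlap
        for top in range(0, h, stride):
            for left in range(0, w, stride):
                # Clamp each tile so it stays inside the image.
                t = min(top, max(h - tile, 0))
                l = min(left, max(w - tile, 0))
                b, r = min(t + tile, h), min(l + tile, w)
                with torch.no_grad():
                    sr_tile = net_g(lr[:, :, t:b, l:r])
                # Naive stitching: later tiles overwrite the overlap region.
                out[:, :, t * scale:b * scale, l * scale:r * scale] = sr_tile
        return out

    Blending the overlap instead of overwriting it avoids visible seams at tile borders.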

    opened by ziippy 4
  • Why does the loss not converge?

    I'm training with DF2K and using 4 GPUs.

    Also, I'm referring to the "train_HAT_SRx4_finetune_from_ImageNet_pretrain.yml" file.

    I only changed dataroot_gt and dataroot_lq for train and val, and also changed num_worker_per_gpu and batch_size_per_gpu like this:

    ### num_worker_per_gpu: 6
    ### batch_size_per_gpu: 4
    num_worker_per_gpu: 3
    batch_size_per_gpu: 8
    

    But after 80,000 iterations, l_pix still has not converged:

    2022-09-19 20:15:50,914 INFO: [train..][epoch:738, iter: 79,000, lr:(1.000e-05,)] [eta: 6 days, 12:51:28, time (data): 3.029 (0.004)] l_pix: 2.1359e-02
    2022-09-19 20:15:50,915 INFO: Saving models and training states.
    2022-09-19 20:21:18,215 INFO: [train..][epoch:739, iter: 79,100, lr:(1.000e-05,)] [eta: 6 days, 12:45:51, time (data): 3.218 (0.341)] l_pix: 3.3533e-02
    2022-09-19 20:26:43,030 INFO: [train..][epoch:740, iter: 79,200, lr:(1.000e-05,)] [eta: 6 days, 12:40:10, time (data): 2.068 (0.031)] l_pix: 1.9019e-02
    2022-09-19 20:32:16,706 INFO: [train..][epoch:741, iter: 79,300, lr:(1.000e-05,)] [eta: 6 days, 12:34:47, time (data): 3.264 (0.337)] l_pix: 1.9070e-02
    2022-09-19 20:37:43,115 INFO: [train..][epoch:742, iter: 79,400, lr:(1.000e-05,)] [eta: 6 days, 12:29:08, time (data): 3.401 (0.004)] l_pix: 1.7958e-02
    2022-09-19 20:42:36,323 INFO: [train..][epoch:742, iter: 79,500, lr:(1.000e-05,)] [eta: 6 days, 12:22:19, time (data): 2.954 (0.020)] l_pix: 1.5392e-02
    2022-09-19 20:48:30,628 INFO: [train..][epoch:743, iter: 79,600, lr:(1.000e-05,)] [eta: 6 days, 12:17:40, time (data): 3.378 (0.003)] l_pix: 2.8961e-02
    2022-09-19 20:53:45,430 INFO: [train..][epoch:744, iter: 79,700, lr:(1.000e-05,)] [eta: 6 days, 12:11:37, time (data): 3.156 (0.225)] l_pix: 3.7259e-02
    2022-09-19 20:59:13,519 INFO: [train..][epoch:745, iter: 79,800, lr:(1.000e-05,)] [eta: 6 days, 12:06:02, time (data): 3.902 (0.031)] l_pix: 2.7916e-02
    2022-09-19 21:04:49,328 INFO: [train..][epoch:746, iter: 79,900, lr:(1.000e-05,)] [eta: 6 days, 12:00:44, time (data): 3.374 (0.410)] l_pix: 2.1746e-02
    2022-09-19 21:10:27,211 INFO: [train..][epoch:747, iter: 80,000, lr:(1.000e-05,)] [eta: 6 days, 11:55:30, time (data): 3.748 (0.094)] l_pix: 2.1582e-02
    2022-09-19 21:10:27,213 INFO: Saving models and training states.
    2022-09-19 21:22:35,811 INFO: Validation open
        # psnr: 20.3545    Best: 20.3660 @ 65000 iter
        # ssim: 0.4768    Best: 0.4769 @ 65000 iter

    2022-09-19 21:27:52,322 INFO: [train..][epoch:748, iter: 80,100, lr:(1.000e-05,)] [eta: 6 days, 12:15:17, time (data): 3.176 (0.366)] l_pix: 2.4691e-02
    2022-09-19 21:33:13,818 INFO: [train..][epoch:749, iter: 80,200, lr:(1.000e-05,)] [eta: 6 days, 12:09:25, time (data): 3.303 (0.093)] l_pix: 2.2727e-02
    2022-09-19 21:38:51,310 INFO: [train..][epoch:750, iter: 80,300, lr:(1.000e-05,)] [eta: 6 days, 12:04:08, time (data): 3.374 (0.419)] l_pix: 1.5810e-02
    2022-09-19 21:44:40,636 INFO: [train..][epoch:751, iter: 80,400, lr:(1.000e-05,)] [eta: 6 days, 11:59:15, time (data): 3.433 (0.393)] l_pix: 1.9958e-02
    2022-09-19 21:50:00,407 INFO: [train..][epoch:752, iter: 80,500, lr:(1.000e-05,)] [eta: 6 days, 11:53:20, time (data): 3.198 (0.192)] l_pix: 2.1157e-02
    2022-09-19 21:55:30,407 INFO: [train..][epoch:753, iter: 80,600, lr:(1.000e-05,)] [eta: 6 days, 11:47:47, time (data): 3.248 (0.231)] l_pix: 2.8304e-02
    2022-09-19 22:00:58,110 INFO: [train..][epoch:754, iter: 80,700, lr:(1.000e-05,)] [eta: 6 days, 11:42:09, time (data): 3.279 (0.391)] l_pix: 2.4832e-02
    2022-09-19 22:06:35,306 INFO: [train..][epoch:755, iter: 80,800, lr:(1.000e-05,)] [eta: 6 days, 11:36:50, time (data): 3.326 (0.384)] l_pix: 2.9092e-02
    2022-09-19 22:12:15,613 INFO: [train..][epoch:756, iter: 80,900, lr:(1.000e-05,)] [eta: 6 days, 11:31:38, time (data): 3.429 (0.409)] l_pix: 2.6695e-02
    2022-09-19 22:17:41,607 INFO: [train..][epoch:757, iter: 81,000, lr:(1.000e-05,)] [eta: 6 days, 11:25:57, time (data): 3.343 (0.408)] l_pix: 3.1762e-02
    2022-09-19 22:17:41,609 INFO: Saving models and training states.

    Do you know why the loss has not converged?

    The attached file is my .yml file.

    Please advise. train_HAT_SRx4_my_others_to_open.yml--.log

    opened by ziippy 3
  • Can you tell me how to preprocess images in HAT?

    Thank you for sharing your code.

    It appears that BGR images are used during preprocessing.

    Could you describe the rest of the preprocessing pipeline (e.g., dividing by 255)?
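
    For reference, a minimal sketch of the BasicSR-style convention this repo appears to build on (the tracebacks in other issues show it loads data through basicsr's paired_image_dataset): images are read with OpenCV as BGR uint8, scaled to [0, 1], converted to RGB, and packed into a CHW tensor. The filename is illustrative; please confirm the exact steps in the repo's dataset code:

    import cv2
    import numpy as np
    import torch

    # OpenCV reads an HxWxC uint8 array in BGR channel order.
    img = cv2.imread('input.png', cv2.IMREAD_COLOR)

    img = img.astype(np.float32) / 255.0  # scale to [0, 1]
    img = img[:, :, ::-1]                 # BGR -> RGB (BasicSR's img2tensor default)
    tensor = torch.from_numpy(np.ascontiguousarray(img.transpose(2, 0, 1)))
    tensor = tensor.unsqueeze(0)          # 1x3xHxW batch for the network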

    opened by saeu5407 2
  • Questions related to pre-trained dataset ImageNet

    Thank you for your outstanding work! I have some questions about the ImageNet dataset that I would like to ask you.

    1. I find it a bit confusing that the pre-training yml file only lists GT files for ImageNet and no LR files.
    2. There are some small images (e.g., smaller than 256x256) in the ImageNet dataset; how do you handle these images?
    3. By the way, can you give me an overview of the ImageNet dataset preparation? Looking forward to your reply!
    opened by GoPikachue 2
  • commented out "Tile n/n"

    When I run test.py, some additional information is printed, like this:

    2022-09-25 20:49:05,012 INFO: Testing open... Tile 1/4 Tile 2/4 Tile 3/4 Tile 4/4 Tile 1/4 Tile 2/4 Tile 3/4 Tile 4/4 Tile 1/4 Tile 2/4 Tile 3/4 Tile 4/4 Tile 1/4 Tile 2/

    I think it would be better to comment out this "Tile n/n" log.

    opened by ziippy 2
  • Training Error. Could you provide some guidance on how to fix this error?

    FileNotFoundError: [Errno 2] No such file or directory: '/qfs/projects/mage/watk681/DIV2K_train_HR/DIV2K_train_HR/002116_s044.png'

    However, DIV2K/DIV2K_train_HR/ uses 0001.png, 0002.png, ..., 0800.png. Any guidance on how to generate a meta_info_DF2Ksub_GT.txt that is compatible with those filenames?
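
    In case it helps, a BasicSR-style meta_info file is just a plain-text list with one image filename per line, which the dataset joins with dataroot_gt; a hedged sketch for regenerating one from an existing folder (paths are illustrative):

    import os

    gt_root = 'datasets/DIV2K/DIV2K_train_HR'  # illustrative path

    # One filename per line; BasicSR's meta-info reader takes the first
    # whitespace-separated token of each line as the image name.
    names = sorted(n for n in os.listdir(gt_root) if n.endswith('.png'))
    with open('meta_info_DIV2K800_GT.txt', 'w') as f:
        f.write('\n'.join(names) + '\n')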

    opened by yazidoudou18 1
  • About position encoding and attention mask

    Hello,

    Thanks for your great work!

    What is the difference in implementing the position encoding and attention mask in overlapped cross-window attention? Overlapped cross-window attention differs from the vanilla one, since the window sizes of Q and K are different, so I think using the original RPE and attention mask does not make sense.

    Could you please give me some hints? Thanks in advance.
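
    For context, my reading of the paper's OCA shapes (a sketch, not the authors' exact implementation): with window size $M$ and overlap ratio $\gamma$, queries come from standard $M \times M$ windows while keys/values come from enlarged windows of size $M_o = (1+\gamma)M$. The attention itself is the usual

    $$\mathrm{Attention}(Q, K, V) = \mathrm{SoftMax}\left(QK^{T}/\sqrt{d} + B\right)V,$$

    but with $Q \in \mathbb{R}^{M^2 \times d}$ and $K, V \in \mathbb{R}^{M_o^2 \times d}$, so the bias $B$ has shape $M^2 \times M_o^2$ and would be indexed from a relative-position table with $(M + M_o - 1)^2$ entries rather than the $(2M-1)^2$ of vanilla window attention; and since OCA windows are not shifted, no shifted-window attention mask should be needed.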

    opened by mrluin 1
  • Grayscale dataset

    I'd like to ask about your great work.

    Is it possible to run it on a grayscale dataset? If so, what should I change? I changed the number of input channels, but it is not working for me.

    network_g:
      type: HAT
      upscale: 3
      in_chans: 1

    I am looking forward to hearing back from you. Thank you.
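
    One caveat worth flagging: the released checkpoints were trained with in_chans: 3, so their first-layer weights will not load into a 1-channel network, and with in_chans: 1 you would be training from scratch. A common workaround that keeps the pretrained weights is to replicate the gray channel at the input; a sketch, not part of this repo:

    import torch

    def sr_grayscale(net_g, gray):
        # gray: 1x1xHxW tensor in [0, 1]; the pretrained network expects 3 channels.
        rgb = gray.repeat(1, 3, 1, 1)        # replicate the single channel
        with torch.no_grad():
            sr = net_g(rgb)
        return sr.mean(dim=1, keepdim=True)  # collapse back to 1 channel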

    opened by Elwarfalli 0
  • IsADirectoryError: [Errno 21] Is a directory: '../datasets/DF2K/DF2K_HR_sub/'

    2022-12-23 14:30:11,207 INFO: Use Exponential Moving Average with decay: 0.999
    2022-12-23 14:30:20,212 INFO: Network [HAT] is created.
    2022-12-23 14:30:20,331 INFO: Loss [L1Loss] is created.
    2022-12-23 14:30:20,342 INFO: Model [HATModel] is created.
    2022-12-23 14:30:28,265 INFO: Start training from epoch: 0, iter: 0
    2022-12-23 14:31:45,639 INFO: [train..][epoch: 0, iter: 100, lr:(2.000e-04,)] [eta: 3 days, 21:18:13, time (data): 0.774 (0.080)] l_pix: 7.0438e-02
    2022-12-23 14:32:54,092 INFO: [train..][epoch: 0, iter: 200, lr:(2.000e-04,)] [eta: 3 days, 22:09:22, time (data): 0.729 (0.041)] l_pix: 6.1880e-02
    2022-12-23 14:34:01,717 INFO: [train..][epoch: 0, iter: 300, lr:(2.000e-04,)] [eta: 3 days, 22:02:51, time (data): 0.676 (0.001)] l_pix: 4.3285e-02
    2022-12-23 14:35:10,241 INFO: [train..][epoch: 0, iter: 400, lr:(2.000e-04,)] [eta: 3 days, 22:17:41, time (data): 0.681 (0.001)] l_pix: 4.3658e-02
    2022-12-23 14:36:19,291 INFO: [train..][epoch: 0, iter: 500, lr:(2.000e-04,)] [eta: 3 days, 22:34:53, time (data): 0.691 (0.001)] l_pix: 4.6613e-02
    2022-12-23 14:37:27,054 INFO: [train..][epoch: 0, iter: 600, lr:(2.000e-04,)] [eta: 3 days, 22:28:09, time (data): 0.684 (0.001)] l_pix: 2.6919e-02
    2022-12-23 14:38:35,155 INFO: [train..][epoch: 0, iter: 700, lr:(2.000e-04,)] [eta: 3 days, 22:27:02, time (data): 0.681 (0.001)] l_pix: 3.7104e-02
    2022-12-23 14:39:42,829 INFO: [train..][epoch: 0, iter: 800, lr:(2.000e-04,)] [eta: 3 days, 22:21:28, time (data): 0.679 (0.001)] l_pix: 2.9578e-02
    2022-12-23 14:40:52,560 INFO: [train..][epoch: 0, iter: 900, lr:(2.000e-04,)] [eta: 3 days, 22:35:53, time (data): 0.696 (0.001)] l_pix: 2.9916e-02
    2022-12-23 14:42:00,809 INFO: [train..][epoch: 0, iter: 1,000, lr:(2.000e-04,)] [eta: 3 days, 22:34:53, time (data): 0.689 (0.001)] l_pix: 1.9958e-02
    2022-12-23 14:43:11,217 INFO: [train..][epoch: 0, iter: 1,100, lr:(2.000e-04,)] [eta: 3 days, 22:50:09, time (data): 0.704 (0.001)] l_pix: 2.3746e-02
    2022-12-23 14:44:22,437 INFO: [train..][epoch: 0, iter: 1,200, lr:(2.000e-04,)] [eta: 3 days, 23:08:18, time (data): 0.708 (0.001)] l_pix: 4.1201e-02
    2022-12-23 14:45:35,395 INFO: [train..][epoch: 0, iter: 1,300, lr:(2.000e-04,)] [eta: 3 days, 23:34:35, time (data): 0.732 (0.001)] l_pix: 2.7439e-02
    2022-12-23 14:46:48,474 INFO: [train..][epoch: 0, iter: 1,400, lr:(2.000e-04,)] [eta: 3 days, 23:57:40, time (data): 0.731 (0.001)] l_pix: 3.8303e-02
    Traceback (most recent call last):
      File "train.py", line 11, in <module>
        train_pipeline(root_path)
      File "/root/miniconda3/lib/python3.8/site-packages/basicsr/train.py", line 197, in train_pipeline
        train_data = prefetcher.next()
      File "/root/miniconda3/lib/python3.8/site-packages/basicsr/data/prefetch_dataloader.py", line 76, in next
        return next(self.loader)
      File "/root/miniconda3/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 435, in __next__
        data = self._next_data()
      File "/root/miniconda3/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1085, in _next_data
        return self._process_data(data)
      File "/root/miniconda3/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1111, in _process_data
        data.reraise()
      File "/root/miniconda3/lib/python3.8/site-packages/torch/_utils.py", line 428, in reraise
        raise self.exc_type(msg)
    IsADirectoryError: Caught IsADirectoryError in DataLoader worker process 0.
    Original Traceback (most recent call last):
      File "/root/miniconda3/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 198, in _worker_loop
        data = fetcher.fetch(index)
      File "/root/miniconda3/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "/root/miniconda3/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "/root/miniconda3/lib/python3.8/site-packages/basicsr/data/paired_image_dataset.py", line 75, in __getitem__
        img_bytes = self.file_client.get(gt_path, 'gt')
      File "/root/miniconda3/lib/python3.8/site-packages/basicsr/utils/file_client.py", line 164, in get
        return self.client.get(filepath)
      File "/root/miniconda3/lib/python3.8/site-packages/basicsr/utils/file_client.py", line 63, in get
        with open(filepath, 'rb') as f:
    IsADirectoryError: [Errno 21] Is a directory: '../datasets/DF2K/DF2K_HR_sub/'

    Traceback (most recent call last):
      File "/root/miniconda3/lib/python3.8/runpy.py", line 194, in _run_module_as_main
        return _run_code(code, main_globals, None,
      File "/root/miniconda3/lib/python3.8/runpy.py", line 87, in _run_code
        exec(code, run_globals)
      File "/root/miniconda3/lib/python3.8/site-packages/torch/distributed/launch.py", line 260, in <module>
        main()
      File "/root/miniconda3/lib/python3.8/site-packages/torch/distributed/launch.py", line 255, in main
        raise subprocess.CalledProcessError(returncode=process.returncode,
    subprocess.CalledProcessError: Command '['/root/miniconda3/bin/python', '-u', 'train.py', '--local_rank=0', '-opt', '../options/train/train_HAT_SRx2_from_scratch.yml', '--launcher', 'pytorch']' returned non-zero exit status 1.

    opened by EchoXu98 1
  • ImageNet pre-trained HAT-L models without any fine-tuning.

    Hi, thank you for your great work! I want to test the performance of ImageNet pre-trained HAT-L models without fine-tuning on DF2K. When pre-training HAT-L on the ImageNet dataset, the training time is long (about 13 days using 8 V100 GPUs). Could you please release those ImageNet pre-trained HAT-L models without any fine-tuning? Thanks for replying!

    opened by USTC-JialunPeng 2
  • Blocky output

    The model seems to output images containing many large pixel-like squares, which are clearly visible in the enlarged image. The original image is 700x500.

    [image: squares]

    opened by shreykshah 4
Owner
XyChen
Ph.D. Student, Computer Vision
VSR-Transformer - This paper proposes a new Transformer for video super-resolution (called VSR-Transformer).

VSR-Transformer By Jiezhang Cao, Yawei Li, Kai Zhang, Luc Van Gool This paper proposes a new Transformer for video super-resolution (called VSR-Transf

Jiezhang Cao 225 Nov 13, 2022
Official PyTorch code for Hierarchical Conditional Flow: A Unified Framework for Image Super-Resolution and Image Rescaling (HCFlow, ICCV2021)

Hierarchical Conditional Flow: A Unified Framework for Image Super-Resolution and Image Rescaling (HCFlow, ICCV2021) This repository is the official P

Jingyun Liang 159 Dec 30, 2022
A framework for joint super-resolution and image synthesis, without requiring real training data

SynthSR This repository contains code to train a Convolutional Neural Network (CNN) for Super-resolution (SR), or joint SR and data synthesis. The met

null 83 Jan 1, 2023
Repository for "Exploring Sparsity in Image Super-Resolution for Efficient Inference", CVPR 2021

SMSR Repository for "Exploring Sparsity in Image Super-Resolution for Efficient Inference" [arXiv] Highlights Locate and skip redundant computation in S

Longguang Wang 225 Dec 26, 2022
MASA-SR: Matching Acceleration and Spatial Adaptation for Reference-Based Image Super-Resolution (CVPR2021)

MASA-SR Official PyTorch implementation of our CVPR2021 paper MASA-SR: Matching Acceleration and Spatial Adaptation for Reference-Based Image Super-Re

DV Lab 126 Dec 20, 2022
PyTorch code for our paper "Attention in Attention Network for Image Super-Resolution"

Under construction... Attention in Attention Network for Image Super-Resolution (A2N) This repository is a PyTorch implementation of the paper "Atten

Haoyu Chen 71 Dec 30, 2022
PyTorch implementation of Graph Convolutional Networks in Feature Space for Image Deblurring and Super-resolution, IJCNN 2021.

GCResNet PyTorch implementation of Graph Convolutional Networks in Feature Space for Image Deblurring and Super-resolution, IJCNN 2021. The code will

null 11 May 19, 2022
PyTorch code for our paper "Image Super-Resolution with Non-Local Sparse Attention" (CVPR2021).

Image Super-Resolution with Non-Local Sparse Attention This repository is for NLSN introduced in the following paper "Image Super-Resolution with Non-

null 143 Dec 28, 2022
PyTorch code for our ECCV 2020 paper "Single Image Super-Resolution via a Holistic Attention Network"

HAN PyTorch code for our ECCV 2020 paper "Single Image Super-Resolution via a Holistic Attention Network" This repository is for HAN introduced in the

五维空间 140 Nov 23, 2022
Implementation of paper: "Image Super-Resolution Using Dense Skip Connections" in PyTorch

SRDenseNet-pytorch Implementation of paper: "Image Super-Resolution Using Dense Skip Connections" in PyTorch (http://openaccess.thecvf.com/content_ICC

wxy 114 Nov 26, 2022
[ACM MM 2021] Joint Implicit Image Function for Guided Depth Super-Resolution

Joint Implicit Image Function for Guided Depth Super-Resolution This repository contains the code for: Joint Implicit Image Function for Guided Depth

hawkey 78 Dec 27, 2022
Practical Single-Image Super-Resolution Using Look-Up Table

Practical Single-Image Super-Resolution Using Look-Up Table [Paper] Dependency Python 3.6 PyTorch glob numpy pillow tqdm tensorboardx 1. Training deep

Younghyun Jo 116 Dec 23, 2022
PyTorch code for our ECCV 2018 paper "Image Super-Resolution Using Very Deep Residual Channel Attention Networks"

PyTorch code for our ECCV 2018 paper "Image Super-Resolution Using Very Deep Residual Channel Attention Networks"

Yulun Zhang 1.2k Dec 26, 2022
PyTorch version of the paper 'Enhanced Deep Residual Networks for Single Image Super-Resolution' (CVPRW 2017)

About PyTorch 1.2.0 Now the master branch supports PyTorch 1.2.0 by default. Due to the serious version problem (especially torch.utils.data.dataloade

Sanghyun Son 2.1k Jan 1, 2023
pytorch implementation for Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network arXiv:1609.04802

PyTorch SRResNet Implementation of Paper: "Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network"(https://arxiv.org/abs

Jiu XU 436 Jan 9, 2023
Unofficial implementation of Image Super-Resolution via Iterative Refinement in PyTorch

Image Super-Resolution via Iterative Refinement Paper | Project Brief This is an unofficial implementation of Image Super-Resolution via Iterative Re

LiangWei Jiang 2.5k Jan 2, 2023
Official PyTorch code for Mutual Affine Network for Spatially Variant Kernel Estimation in Blind Image Super-Resolution (MANet, ICCV2021)

Mutual Affine Network for Spatially Variant Kernel Estimation in Blind Image Super-Resolution (MANet, ICCV2021) This repository is the official PyTorc

Jingyun Liang 139 Dec 29, 2022
PyTorch Implementation of "Light Field Image Super-Resolution with Transformers"

LFT PyTorch implementation of "Light Field Image Super-Resolution with Transformers", arXiv 2021. [pdf]. Contributions: We make the first attempt to a

Squidward 62 Nov 28, 2022