PyTorch code for our ECCV 2018 paper "Image Super-Resolution Using Very Deep Residual Channel Attention Networks"

Image Super-Resolution Using Very Deep Residual Channel Attention Networks

This repository is for RCAN introduced in the following paper

Yulun Zhang, Kunpeng Li, Kai Li, Lichen Wang, Bineng Zhong, and Yun Fu, "Image Super-Resolution Using Very Deep Residual Channel Attention Networks", ECCV 2018, [arXiv]

The code is built on EDSR (PyTorch) and tested in an Ubuntu 14.04/16.04 environment (Python 3.6, PyTorch 0.4.0, CUDA 8.0, cuDNN 5.1) with Titan X/1080Ti/Xp GPUs. The RCAN model has also been merged into EDSR (PyTorch).

Visual results reproducing the PSNR/SSIM values in the paper are available at GoogleDrive. For the BI degradation model, scales = 2, 3, 4, 8: Results_ECCV2018RCAN_BIX2X3X4X8

Contents

  1. Introduction
  2. Train
  3. Test
  4. Results
  5. Citation
  6. Acknowledgements

Introduction

Convolutional neural network (CNN) depth is of crucial importance for image super-resolution (SR). However, we observe that deeper networks for image SR are more difficult to train. The low-resolution inputs and features contain abundant low-frequency information, which is treated equally across channels, hence hindering the representational ability of CNNs. To solve these problems, we propose the very deep residual channel attention networks (RCAN). Specifically, we propose a residual in residual (RIR) structure to form a very deep network, which consists of several residual groups with long skip connections. Each residual group contains some residual blocks with short skip connections. Meanwhile, RIR allows abundant low-frequency information to be bypassed through multiple skip connections, making the main network focus on learning high-frequency information. Furthermore, we propose a channel attention mechanism to adaptively rescale channel-wise features by considering interdependencies among channels. Extensive experiments show that our RCAN achieves better accuracy and visual improvements against state-of-the-art methods.

Channel attention (CA) architecture.
Residual channel attention block (RCAB) architecture.
The architecture of our proposed residual channel attention network (RCAN).
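
Below is a minimal PyTorch sketch of the channel attention (CA) layer and the residual channel attention block (RCAB) described above. It is illustrative only (64 features and a reduction ratio of 16 are assumed); see 'model/rcan.py' in this repository for the actual implementation.

    import torch.nn as nn

    class CALayer(nn.Module):
        """Channel attention: squeeze spatial dims, then rescale channels."""
        def __init__(self, channels=64, reduction=16):
            super().__init__()
            self.avg_pool = nn.AdaptiveAvgPool2d(1)          # global average pooling
            self.conv_du = nn.Sequential(
                nn.Conv2d(channels, channels // reduction, 1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // reduction, channels, 1),
                nn.Sigmoid(),
            )

        def forward(self, x):
            w = self.conv_du(self.avg_pool(x))               # per-channel weights in (0, 1)
            return x * w                                     # rescale channel-wise features

    class RCAB(nn.Module):
        """Residual channel attention block: conv-ReLU-conv + CA, with a short skip."""
        def __init__(self, channels=64, reduction=16):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1),
                CALayer(channels, reduction),
            )

        def forward(self, x):
            return x + self.body(x)                          # short skip connection

In the full network, several RCABs plus a trailing convolution form a residual group with a long skip connection, and several residual groups form the residual in residual (RIR) structure.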

Train

Prepare training data

  1. Download DIV2K training data (800 training + 100 validation images) from the DIV2K dataset or SNU_CVLab.

  2. Specify '--dir_data' based on the path of the HR and LR images. In option.py, '--ext' is set to 'sep_reset', which first converts the .png images to .npy files. Once all the training images (.png) have been converted to .npy files, set '--ext sep' to skip the conversion step.

For more information, please refer to EDSR (PyTorch).
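
As a rough illustration of what the 'sep_reset' caching step does (this is not the repository's dataloader code; the helper name and paths are placeholders), each .png is decoded once and stored as a .npy file so that later epochs can skip PNG decoding:

    import glob
    import os

    import imageio
    import numpy as np

    def cache_as_npy(png_dir):
        """Decode every .png in png_dir once and save the raw array next to it as .npy."""
        for png_path in sorted(glob.glob(os.path.join(png_dir, '*.png'))):
            npy_path = os.path.splitext(png_path)[0] + '.npy'
            if not os.path.exists(npy_path):
                np.save(npy_path, imageio.imread(png_path))

    # e.g. cache_as_npy('DIV2K/DIV2K_train_HR')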

Begin to train

  1. (optional) Download models for our paper and place them in '/RCAN_TrainCode/experiment/model'.

    All the models (BIX2/3/4/8, BDX3) can be downloaded from Dropbox, BaiduYun, or GoogleDrive.

  2. Cd to 'RCAN_TrainCode/code' and run the following scripts to train the models.

    You can use the scripts in 'TrainRCAN_scripts' to train the models for our paper.

    # BI, scale 2, 3, 4, 8
    # RCAN_BIX2_G10R20P48, input=48x48, output=96x96
    python main.py --model RCAN --save RCAN_BIX2_G10R20P48 --scale 2 --n_resgroups 10 --n_resblocks 20 --n_feats 64  --reset --chop --save_results --print_model --patch_size 96
    
    # RCAN_BIX3_G10R20P48, input=48x48, output=144x144
    python main.py --model RCAN --save RCAN_BIX3_G10R20P48 --scale 3 --n_resgroups 10 --n_resblocks 20 --n_feats 64  --reset --chop --save_results --print_model --patch_size 144 --pre_train ../experiment/model/RCAN_BIX2.pt
    
    # RCAN_BIX4_G10R20P48, input=48x48, output=192x192
    python main.py --model RCAN --save RCAN_BIX4_G10R20P48 --scale 4 --n_resgroups 10 --n_resblocks 20 --n_feats 64  --reset --chop --save_results --print_model --patch_size 192 --pre_train ../experiment/model/RCAN_BIX2.pt
    
    # RCAN_BIX8_G10R20P48, input=48x48, output=384x384
    python main.py --model RCAN --save RCAN_BIX8_G10R20P48 --scale 8 --n_resgroups 10 --n_resblocks 20 --n_feats 64  --reset --chop --save_results --print_model --patch_size 384 --pre_train ../experiment/model/RCAN_BIX2.pt
    
    # RCAN_BDX3_G10R20P48, input=48x48, output=144x144
    # specify '--dir_data' as the path of the BD training data
    python main.py --model RCAN --save RCAN_BDX3_G10R20P48 --scale 3 --n_resgroups 10 --n_resblocks 20 --n_feats 64  --reset --chop --save_results --print_model --patch_size 144 --pre_train ../experiment/model/RCAN_BIX2.pt
    
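    If a run is interrupted, or you want to keep training past the default number of epochs, the EDSR-style training code that this repository builds on exposes '--load' and '--resume' options. Assuming your copy of 'option.py' keeps those flags, a command along these lines should continue the BIX2 experiment from its latest checkpoint (do not pass '--reset', which would clear the experiment directory):

    # continue a previous experiment from its latest checkpoint (sketch; check option.py for the exact flags)
    python main.py --model RCAN --load RCAN_BIX2_G10R20P48 --resume -1 --scale 2 --n_resgroups 10 --n_resblocks 20 --n_feats 64 --chop --save_results --patch_size 96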

Test

Quick start

  1. Download models for our paper and place them in '/RCAN_TestCode/model'.

    All the models (BIX2/3/4/8, BDX3) can be downloaded from Dropbox, BaiduYun, or GoogleDrive.

  2. Cd to '/RCAN_TestCode/code' and run the following scripts.

    You can use the scripts in 'TestRCAN_scripts' to reproduce the results in our paper.

    # No self-ensemble: RCAN
    # BI degradation model, X2, X3, X4, X8
    # RCAN_BIX2
    python main.py --data_test MyImage --scale 2 --model RCAN --n_resgroups 10 --n_resblocks 20 --n_feats 64 --pre_train ../model/RCAN_BIX2.pt --test_only --save_results --chop --save 'RCAN' --testpath ../LR/LRBI --testset Set5
    # RCAN_BIX3
    python main.py --data_test MyImage --scale 3 --model RCAN --n_resgroups 10 --n_resblocks 20 --n_feats 64 --pre_train ../model/RCAN_BIX3.pt --test_only --save_results --chop --save 'RCAN' --testpath ../LR/LRBI --testset Set5
    # RCAN_BIX4
    python main.py --data_test MyImage --scale 4 --model RCAN --n_resgroups 10 --n_resblocks 20 --n_feats 64 --pre_train ../model/RCAN_BIX4.pt --test_only --save_results --chop --save 'RCAN' --testpath ../LR/LRBI --testset Set5
    # RCAN_BIX8
    python main.py --data_test MyImage --scale 8 --model RCAN --n_resgroups 10 --n_resblocks 20 --n_feats 64 --pre_train ../model/RCAN_BIX8.pt --test_only --save_results --chop --save 'RCAN' --testpath ../LR/LRBI --testset Set5
    # BD degradation model, X3
    # RCAN_BDX3
    python main.py --data_test MyImage --scale 3 --model RCAN --n_resgroups 10 --n_resblocks 20 --n_feats 64 --pre_train ../model/RCAN_BDX3.pt --test_only --save_results --chop --save 'RCAN' --testpath ../LR/LRBD --degradation BD --testset Set5
    # With self-ensemble: RCAN+
    # RCANplus_BIX2
    python main.py --data_test MyImage --scale 2 --model RCAN --n_resgroups 10 --n_resblocks 20 --n_feats 64 --pre_train ../model/RCAN_BIX2.pt --test_only --save_results --chop --self_ensemble --save 'RCANplus' --testpath ../LR/LRBI --testset Set5
    # RCANplus_BIX3
    python main.py --data_test MyImage --scale 3 --model RCAN --n_resgroups 10 --n_resblocks 20 --n_feats 64 --pre_train ../model/RCAN_BIX3.pt --test_only --save_results --chop --self_ensemble --save 'RCANplus' --testpath ../LR/LRBI --testset Set5
    # RCANplus_BIX4
    python main.py --data_test MyImage --scale 4 --model RCAN --n_resgroups 10 --n_resblocks 20 --n_feats 64 --pre_train ../model/RCAN_BIX4.pt --test_only --save_results --chop --self_ensemble --save 'RCANplus' --testpath ../LR/LRBI --testset Set5
    # RCANplus_BIX8
    python main.py --data_test MyImage --scale 8 --model RCAN --n_resgroups 10 --n_resblocks 20 --n_feats 64 --pre_train ../model/RCAN_BIX8.pt --test_only --save_results --chop --self_ensemble --save 'RCANplus' --testpath ../LR/LRBI --testset Set5
    # BD degradation model, X3
    # RCANplus_BDX3
    python main.py --data_test MyImage --scale 3 --model RCAN --n_resgroups 10 --n_resblocks 20 --n_feats 64 --pre_train ../model/RCAN_BDX3.pt --test_only --save_results --chop --self_ensemble  --save 'RCANplus' --testpath ../LR/LRBD --degradation BD --testset Set5

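    For reference, self-ensemble (the '+' variants above) averages the network output over the eight flip/rotation variants of the input, undoing each transform on the output. A rough sketch follows (not the repository's exact implementation, which lives in the EDSR-style model wrapper):

    import torch

    def forward_x8(model, x):
        """Geometric self-ensemble: average over 8 flip/rotation variants of an NCHW input."""
        outputs = []
        for flip_h in (False, True):
            for rot in range(4):                       # 0, 90, 180, 270 degrees
                t = torch.flip(x, dims=[3]) if flip_h else x
                t = torch.rot90(t, rot, dims=[2, 3])
                y = model(t)
                y = torch.rot90(y, -rot, dims=[2, 3])  # undo rotation
                y = torch.flip(y, dims=[3]) if flip_h else y
                outputs.append(y)
        return torch.stack(outputs).mean(dim=0)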
The whole test pipeline

  1. Prepare test data.

    Place the original test sets (e.g., Set5; other test sets are available from GoogleDrive or Baidu) in 'OriginalTestData'.

    Run 'Prepare_TestData_HR_LR.m' in Matlab to generate HR/LR images with different degradation models.

  2. Conduct image SR.

    See Quick start

  3. Evaluate the results.

    Run 'Evaluate_PSNR_SSIM.m' to obtain the PSNR/SSIM values reported in the paper.
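
    If you just want a quick sanity check from Python instead of Matlab, a rough PSNR computation on the Y channel (shaving a scale-sized border, as is common for SR evaluation) looks like the sketch below. It may differ slightly from the Matlab numbers because of the RGB-to-Y conversion and image loading, so it is not a replacement for the official script.

    # Hedged sketch of Y-channel PSNR between an HR image and an SR result;
    # not the official 'Evaluate_PSNR_SSIM.m' script.
    import numpy as np

    def rgb_to_y(img):
        # img: HxWx3, uint8; ITU-R BT.601 luma in the [16, 235] range
        img = img.astype(np.float64)
        return 16.0 + (65.481 * img[..., 0] + 128.553 * img[..., 1] + 24.966 * img[..., 2]) / 255.0

    def psnr_y(hr, sr, scale):
        hr_y, sr_y = rgb_to_y(hr), rgb_to_y(sr)
        diff = (hr_y - sr_y)[scale:-scale, scale:-scale]   # shave a scale-sized border
        mse = np.mean(diff ** 2)
        return float('inf') if mse == 0 else 20.0 * np.log10(255.0 / np.sqrt(mse))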

Results

Quantitative Results

Quantitative results with the BI degradation model. Best and second best results are highlighted and underlined.

For more results, please refer to our main paper and supplementary file.

Visual Results

Visual results with Bicubic (BI) degradation (4×) on “img 074” from Urban100.

Visual comparison for 4× SR with the BI model.

Visual comparison for 8× SR with the BI model.

Visual comparison for 3× SR with the BD model.

Visual comparison for 4× SR with the BI model on the Set14 and B100 datasets. The best results are highlighted. SRResNet, SRResNet VGG22, SRGAN MSE, SRGAN VGG22, and SRGAN VGG54 are proposed in [CVPR2017SRGAN]; ENet E and ENet PAT are proposed in [ICCV2017EnhanceNet]. These comparisons mainly show the effectiveness of our proposed RCAN against GAN-based methods.

Citation

If you find the code helpful in your research or work, please cite the following papers.

@InProceedings{Lim_2017_CVPR_Workshops,
  author = {Lim, Bee and Son, Sanghyun and Kim, Heewon and Nah, Seungjun and Lee, Kyoung Mu},
  title = {Enhanced Deep Residual Networks for Single Image Super-Resolution},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month = {July},
  year = {2017}
}

@inproceedings{zhang2018rcan,
    title={Image Super-Resolution Using Very Deep Residual Channel Attention Networks},
    author={Zhang, Yulun and Li, Kunpeng and Li, Kai and Wang, Lichen and Zhong, Bineng and Fu, Yun},
    booktitle={ECCV},
    year={2018}
}

Acknowledgements

This code is built on EDSR (PyTorch). We thank the authors for sharing the code of EDSR (both the Torch and PyTorch versions).

Comments
  • Out of memory error

    I have tried many combinations of learning rate and decay, but I am still getting the same error.

    Preparing loss function:
    1.000 * L1
    [Epoch 1]  Learning rate: 1.00e-6
    THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1524586445097/work/aten/src/THC/generic/THCStorage.cu line=58 error=2 : out of memory
    Traceback (most recent call last):
      File "main.py", line 19, in <module>
        t.train()
      File "/home/administrator/Desktop/Projects/RCAN/RCAN_TrainCode/code/trainer.py", line 51, in train
        sr = self.model(lr, idx_scale)
      File "/home/administrator/anaconda2/envs/deeplearning/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/administrator/Desktop/Projects/RCAN/RCAN_TrainCode/code/model/__init__.py", line 54, in forward
        return self.model(x)
      File "/home/administrator/anaconda2/envs/deeplearning/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/administrator/Desktop/Projects/RCAN/RCAN_TrainCode/code/model/rcan.py", line 110, in forward
        res = self.body(x)
      File "/home/administrator/anaconda2/envs/deeplearning/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/administrator/anaconda2/envs/deeplearning/lib/python3.6/site-packages/torch/nn/modules/container.py", line 91, in forward
        input = module(input)
      File "/home/administrator/anaconda2/envs/deeplearning/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/administrator/Desktop/Projects/RCAN/RCAN_TrainCode/code/model/rcan.py", line 62, in forward
        res = self.body(x)
      File "/home/administrator/anaconda2/envs/deeplearning/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/administrator/anaconda2/envs/deeplearning/lib/python3.6/site-packages/torch/nn/modules/container.py", line 91, in forward
        input = module(input)
      File "/home/administrator/anaconda2/envs/deeplearning/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/administrator/Desktop/Projects/RCAN/RCAN_TrainCode/code/model/rcan.py", line 44, in forward
        res = self.body(x)
      File "/home/administrator/anaconda2/envs/deeplearning/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/administrator/anaconda2/envs/deeplearning/lib/python3.6/site-packages/torch/nn/modules/container.py", line 91, in forward
        input = module(input)
      File "/home/administrator/anaconda2/envs/deeplearning/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/administrator/anaconda2/envs/deeplearning/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 301, in forward
        self.padding, self.dilation, self.groups)
    RuntimeError: cuda runtime error (2) : out of memory at /opt/conda/conda-bld/pytorch_1524586445097/work/aten/src/THC/generic/THCStorage.cu:58

    opened by arun121cs 12
  • is there a way to continue training ?

    Hi! I recently read your paper and code, and want to ask you a question. Is there an argument to continue training after 300 epochs if I feel that is not enough? If not, how can I do that? Thanks! The --load option seems to do the job, but are the checkpoints retained, with the correct numbering?

    opened by DwenGu 10
  • 2 GPU error when training

    I run the following command to train with 2 GPUs:

    CUDA_VISIBLE_DEVICES=0,1 python3 main.py --n_GPUs 2 --dir_data /root/dataset/super-resolution --model RCAN --save RCAN_BIX2_G10R20P48 --scale 2 --n_resgroups 10 --n_resblocks 20 --n_feats 64  --reset --chop --save_results --print_model --patch_size 96 2>&1 | tee $LOG
    

    but get the following error:

    Unexpected end of /proc/mounts line `overlay / overlay rw,relatime,lowerdir=/data2/docker/overlay2/l/76VGRCNKB4276UVDYJIQ4K44VI:/data2/docker/overlay2/l/MM3UKJSDI6OMZYJEQHBG5K5EBU:/data2/docker/overlay2/l/3TUQTOAGEKBLNX7DPFOKXKXUD5:/data2/docker/overlay2/l/5ZHVFRGKBYJ5MGWORLVPCB67H4:/data2/docker/overlay2/l/MGTNS2XZPIFDXQLJDPBWMZHSFF:/data2/docker/overlay2/l/NBUTJL2W2ZFDXG2JAE3Y6V4M3Z:/data2/docker/overlay2/l/WZ4AKFUGVNF4YJNSHH5XQEZVAV:/data2/docker/overlay2/l/W5VI2B4IEWSZLIUN7VC2PP3LD4:/data2/docker/overlay2/l/JBVVURDZXDPD7SAEKMXLQGX2YS:/dat'
    Unexpected end of /proc/mounts line `a2/docker/overlay2/l/2ISST5GDKCNKQHI3D6LITRSPPC:/data2/docker/overlay2/l/QA7MQGMCVTSS4DQ4SS7QOEGADY:/data2/docker/overlay2/l/24BA5LASJSQBJYYNQONNE7DFOA:/data2/docker/overlay2/l/RHLGBBVVMXFSFDL666UIIDLCU6:/data2/docker/overlay2/l/ZJYKOHO5XHWZVLIG3OOX4SMJMW:/data2/docker/overlay2/l/X3VORDWXFDU2Q4IZGWZE24GOF7,upperdir=/data2/docker/overlay2/581f3545fee5eef1ebdd17aea4f9e4d4b922a18a608972a6115f2bbeec32b019/diff,workdir=/data2/docker/overlay2/581f3545fee5eef1ebdd17aea4f9e4d4b922a18a608972a6115f2bbeec32b019/work '
    Unexpected end of /proc/mounts line `0 0
    

    Could you please help me to find the reason?

    opened by KindleHe 8
  • saving model error

    Hi,

    While running your code, there is a problem with saving the model; it gives an error.

    The error arises in the test section of the trainer when saving the model.

    Please, can you look into the problem?

    Regards, Saeed

    opened by saeed-anwar 7
  • train myimage

    Thank you for your work. I have a problem: during training on my data, there is always an error: "RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 96 and 68 in dimension 2 at /pytorch/aten/src/TH/generic/THTensorMath.c:3586". patch_size=96. To simplify debugging, n_resblocks and n_resgroups are decreased to 5 and 2. The rest of the network structure is unchanged.

    The HR data is from the Middlebury dataset, and the LR data is bicubic-downsampled (60 images in total). This problem has puzzled me for a long time. I would appreciate it if you could give me a reply.

    Thanks.

    Making model...
    (full '--print_model' output omitted: RCAN with 2 residual groups of 5 RCABs each, 64 feature maps, reduction 16, ×2 upsampler)
    Preparing loss function:
    1.000 * L1
    [Epoch 1]  Learning rate: 1.00e-4
    [16/1980]  [L1: 56.1840]  0.1+1.1s
    [32/1980]  [L1: 58.0140]  0.1+0.1s
    [48/1980]  [L1: 58.6069]  0.1+0.1s
    [64/1980]  [L1: 57.1261]  0.1+0.5s
    [80/1980]  [L1: 55.2015]  0.1+0.1s
    Traceback (most recent call last):
      File "main.py", line 20, in <module>
        t.train()
      File "/media/ybl/0A9AD66165F33762/CODE/RCAN-master/RCAN_TrainCode/code/trainer.py", line 47, in train
        for batch, (lr, hr, _, idx_scale) in enumerate(self.loader_train):
      File "/usr/local/lib/python3.5/dist-packages/torch/utils/data/dataloader.py", line 286, in __next__
        return self._process_next_batch(batch)
      File "/usr/local/lib/python3.5/dist-packages/torch/utils/data/dataloader.py", line 307, in _process_next_batch
        raise batch.exc_type(batch.exc_msg)
    RuntimeError: Traceback (most recent call last):
      File "/media/ybl/0A9AD66165F33762/CODE/RCAN-master/RCAN_TrainCode/code/dataloader.py", line 47, in _ms_loop
        samples = collate_fn([dataset[i] for i in batch_indices])
      File "/usr/local/lib/python3.5/dist-packages/torch/utils/data/dataloader.py", line 138, in default_collate
        return [default_collate(samples) for samples in transposed]
      File "/usr/local/lib/python3.5/dist-packages/torch/utils/data/dataloader.py", line 138, in <listcomp>
        return [default_collate(samples) for samples in transposed]
      File "/usr/local/lib/python3.5/dist-packages/torch/utils/data/dataloader.py", line 115, in default_collate
        return torch.stack(batch, 0, out=out)
    RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 96 and 68 in dimension 2 at /pytorch/aten/src/TH/generic/THTensorMath.c:3586

    opened by YangBolan0201 5
  • about crop the dataset, mean is different

    Hello, I cut the images into 48×48 patches, but I found that the mean is on the order of 10^-2. The dataset I made is not the same as yours. I can't find the details of the input processing; can you point me in the right direction? Thanks very much.

    opened by luxuriance19 4
  • PSNR not improving by using pre-trained models

    The output I am getting after running the test scripts is this:

    (deeplearning) administrator@administrator-System-Product-Name:~/Desktop/Projects/RCAN/RCAN_TestCode/code$ python main.py --data_test MyImage --scale 3 --model RCAN --n_resgroups 10 --n_resblocks 20 --n_feats 64 --pre_train ../model/RCAN_BIX3.pt --test_only --save_results --chop --save 'RCAN' --testpath /home/administrator/Desktop/Projects/RCAN/RCAN_TestCode/LR/LRBI --degradation BD --testset Set5
    Making model...
    Use DIV2K mean (0.4488, 0.4371, 0.4040)
    Loading model from ../model/RCAN_BIX3.pt

    Evaluation: 100%|█████████████████████████████████████████████| 5/5 [00:02<00:00, 1.72it/s]
    [MyImage x3]  PSNR: 0.000 (Best: 0.000 @epoch 1)
    Total time: 2.90s, ave time: 0.58s

    I have downloaded the pre-trained models and used the Set5 data that is present in the LR folder.

    opened by arun121cs 4
  • Division-by-zero error during testing

    (pytorch) D:\RCAN-master\RCAN_TestCode\code>python main.py --data_test MyImage --scale 2 --model RCAN --n_resgroups 10 --n_resblocks 20 --n_feats 64 --pre_train ../model/RCAN_BIX2.pt --test_only --save_results --chop --save 'RCAN' --testpath ../LR/LRBI --testset Set5
    Making model...
    Use DIV2K mean (0.4488, 0.4371, 0.4040)
    Loading model from ../model/RCAN_BIX2.pt

    Evaluation: 0it [00:00, ?it/s]
    Traceback (most recent call last):
      File "main.py", line 18, in <module>
        while not t.terminate():
      File "D:\RCAN-master\RCAN_TestCode\code\trainer.py", line 139, in terminate
        self.test()
      File "D:\RCAN-master\RCAN_TestCode\code\trainer.py", line 111, in test
        self.ckp.log[-1, idx_scale] = eval_acc / len(self.loader_test)
    ZeroDivisionError: division by zero

    What could be the cause of this? ...

    opened by zsommer 3
  • Train cannot begin?

    Why?

    parser.add_argument('--n_train', type=int, default=15000,  # 800
                        help='number of training set')
    parser.add_argument('--n_val', type=int, default=10,
                        help='number of validation set')
    parser.add_argument('--offset_val', type=int, default=15000,  # 800

    [Epoch 15]	Learning rate: 1.00e-4
    
      0%|                                                    | 0/10 [00:00<?, ?it/s]
    Evaluation:
    100%|███████████████████████████████████████████| 10/10 [00:42<00:00,  4.16s/it][Set5 x4]	PSNR: 37.509 (Best: 37.509 @epoch 1)
    Total time: 42.01s
    
    [Epoch 16]	Learning rate: 1.00e-4
    
      0%|                                                    | 0/10 [00:00<?, ?it/s]
    Evaluation:
    100%|███████████████████████████████████████████| 10/10 [00:41<00:00,  4.16s/it][Set5 x4]	PSNR: 37.509 (Best: 37.509 @epoch 1)
    Total time: 41.92s
    
    opened by wsqat 3
  • Bump pillow from 7.2.0 to 8.3.2

    Bumps pillow from 7.2.0 to 8.3.2.

    Release notes

    Sourced from pillow's releases.

    8.3.2

    https://pillow.readthedocs.io/en/stable/releasenotes/8.3.2.html

    Security

    • CVE-2021-23437 Raise ValueError if color specifier is too long [hugovk, radarhere]

    • Fix 6-byte OOB read in FliDecode [wiredfool]

    Python 3.10 wheels

    • Add support for Python 3.10 #5569, #5570 [hugovk, radarhere]

    Fixed regressions

    • Ensure TIFF RowsPerStrip is multiple of 8 for JPEG compression #5588 [kmilos, radarhere]

    • Updates for ImagePalette channel order #5599 [radarhere]

    • Hide FriBiDi shim symbols to avoid conflict with real FriBiDi library #5651 [nulano]

    8.3.1

    https://pillow.readthedocs.io/en/stable/releasenotes/8.3.1.html

    Changes

    8.3.0

    https://pillow.readthedocs.io/en/stable/releasenotes/8.3.0.html

    Changes

    ... (truncated)

    Changelog

    Sourced from pillow's changelog.

    8.3.2 (2021-09-02)

    • CVE-2021-23437 Raise ValueError if color specifier is too long [hugovk, radarhere]

    • Fix 6-byte OOB read in FliDecode [wiredfool]

    • Add support for Python 3.10 #5569, #5570 [hugovk, radarhere]

    • Ensure TIFF RowsPerStrip is multiple of 8 for JPEG compression #5588 [kmilos, radarhere]

    • Updates for ImagePalette channel order #5599 [radarhere]

    • Hide FriBiDi shim symbols to avoid conflict with real FriBiDi library #5651 [nulano]

    8.3.1 (2021-07-06)

    • Catch OSError when checking if fp is sys.stdout #5585 [radarhere]

    • Handle removing orientation from alternate types of EXIF data #5584 [radarhere]

    • Make Image.array take optional dtype argument #5572 [t-vi, radarhere]

    8.3.0 (2021-07-01)

    • Use snprintf instead of sprintf. CVE-2021-34552 #5567 [radarhere]

    • Limit TIFF strip size when saving with LibTIFF #5514 [kmilos]

    • Allow ICNS save on all operating systems #4526 [baletu, radarhere, newpanjing, hugovk]

    • De-zigzag JPEG's DQT when loading; deprecate convert_dict_qtables #4989 [gofr, radarhere]

    • Replaced xml.etree.ElementTree #5565 [radarhere]

    ... (truncated)

    Commits
    • 8013f13 8.3.2 version bump
    • 23c7ca8 Update CHANGES.rst
    • 8450366 Update release notes
    • a0afe89 Update test case
    • 9e08eb8 Raise ValueError if color specifier is too long
    • bd5cf7d FLI tests for Oss-fuzz crash.
    • 94a0cf1 Fix 6-byte OOB read in FliDecode
    • cece64f Add 8.3.2 (2021-09-02) [CI skip]
    • e422386 Add release notes for Pillow 8.3.2
    • 08dcbb8 Pillow 8.3.2 supports Python 3.10 [ci skip]
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 2
  • download model

    Hi, I tried downloading all the models from the Dropbox link below, but it didn't work. It said, 'there is no file exist.' "All the models (BIX2/3/4/8, BDX3) can be downloaded from Dropbox and BaiduYun."

    Where can I get the whole model?

    opened by bberry25 2
  • Bump pillow from 7.2.0 to 9.3.0

    Bumps pillow from 7.2.0 to 9.3.0.

    Release notes

    Sourced from pillow's releases.

    9.3.0

    https://pillow.readthedocs.io/en/stable/releasenotes/9.3.0.html

    Changes

    ... (truncated)

    Changelog

    Sourced from pillow's changelog.

    9.3.0 (2022-10-29)

    • Limit SAMPLESPERPIXEL to avoid runtime DOS #6700 [wiredfool]

    • Initialize libtiff buffer when saving #6699 [radarhere]

    • Inline fname2char to fix memory leak #6329 [nulano]

    • Fix memory leaks related to text features #6330 [nulano]

    • Use double quotes for version check on old CPython on Windows #6695 [hugovk]

    • Remove backup implementation of Round for Windows platforms #6693 [cgohlke]

    • Fixed set_variation_by_name offset #6445 [radarhere]

    • Fix malloc in _imagingft.c:font_setvaraxes #6690 [cgohlke]

    • Release Python GIL when converting images using matrix operations #6418 [hmaarrfk]

    • Added ExifTags enums #6630 [radarhere]

    • Do not modify previous frame when calculating delta in PNG #6683 [radarhere]

    • Added support for reading BMP images with RLE4 compression #6674 [npjg, radarhere]

    • Decode JPEG compressed BLP1 data in original mode #6678 [radarhere]

    • Added GPS TIFF tag info #6661 [radarhere]

    • Added conversion between RGB/RGBA/RGBX and LAB #6647 [radarhere]

    • Do not attempt normalization if mode is already normal #6644 [radarhere]

    ... (truncated)

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 0
  • The output value explodes

    Hello, I reproduced your paper myself, but I found that the values after the RCAB layers become larger and larger until they explode. Have you encountered this problem, and how did you solve it?

    opened by zh-hike 0
  • Bump numpy from 1.18.5 to 1.22.0

    Bumps numpy from 1.18.5 to 1.22.0.

    Release notes

    Sourced from numpy's releases.

    v1.22.0

    NumPy 1.22.0 Release Notes

    NumPy 1.22.0 is a big release featuring the work of 153 contributors spread over 609 pull requests. There have been many improvements, highlights are:

    • Annotations of the main namespace are essentially complete. Upstream is a moving target, so there will likely be further improvements, but the major work is done. This is probably the most user visible enhancement in this release.
    • A preliminary version of the proposed Array-API is provided. This is a step in creating a standard collection of functions that can be used across application such as CuPy and JAX.
    • NumPy now has a DLPack backend. DLPack provides a common interchange format for array (tensor) data.
    • New methods for quantile, percentile, and related functions. The new methods provide a complete set of the methods commonly found in the literature.
    • A new configurable allocator for use by downstream projects.

    These are in addition to the ongoing work to provide SIMD support for commonly used functions, improvements to F2PY, and better documentation.

    The Python versions supported in this release are 3.8-3.10, Python 3.7 has been dropped. Note that 32 bit wheels are only provided for Python 3.8 and 3.9 on Windows, all other wheels are 64 bits on account of Ubuntu, Fedora, and other Linux distributions dropping 32 bit support. All 64 bit wheels are also linked with 64 bit integer OpenBLAS, which should fix the occasional problems encountered by folks using truly huge arrays.

    Expired deprecations

    Deprecated numeric style dtype strings have been removed

    Using the strings "Bytes0", "Datetime64", "Str0", "Uint32", and "Uint64" as a dtype will now raise a TypeError.

    (gh-19539)

    Expired deprecations for loads, ndfromtxt, and mafromtxt in npyio

    numpy.loads was deprecated in v1.15, with the recommendation that users use pickle.loads instead. ndfromtxt and mafromtxt were both deprecated in v1.17 - users should use numpy.genfromtxt instead with the appropriate value for the usemask parameter.

    (gh-19615)

    ... (truncated)

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 0
  • Understanding the whole process

    Hello, first of all, congratulations on your interesting work. I am just trying to figure out what the entire process for using RCAN is. I have a small training dataset with low-resolution images (128 x 128), and I want to use RCAN to increase the resolution of the images to 1024 x 1024. This is what I could understand:

    For training:

    1. Place the original training set in 'OriginalTestData'.
    2. Run 'Prepare_TestData_HR_LR.m' in Matlab to generate HR/LR images with different degradation models.
    3. Run (input=128x128, output=1024x1024) python main.py --model RCAN --save my_name --scale 8 --n_resgroups 10 --n_resblocks 20 --n_feats 64 --reset --chop --save_results --print_model --patch_size 1024 --pre_train ../experiment/model/RCAN_BIX8.pt

    For inference/testing (to generate the high resolution images)

    Steps 1 and 2 above, plus the step below:

    python main.py --data_test MyImage --scale 8 --model RCAN --n_resgroups 10 --n_resblocks 20 --n_feats 64 --pre_train ../model/RCAN_BIX8.pt --test_only --save_results --chop --save 'RCAN' --testpath ../LR/LRBI --testset my_test_set

    Moreover, I understood that I must split my original training dataset into two parts: one for training/validation and another for testing. Is that right?

    Thank you.

    opened by vsantjr 0