A PyTorch Reimplementation of TecoGAN: Temporally Coherent GAN for Video Super-Resolution

Overview

TecoGAN-PyTorch

Introduction

This is a PyTorch reimplementation of TecoGAN: Temporally Coherent GAN for Video Super-Resolution (VSR). Please refer to the official TensorFlow implementation TecoGAN-TensorFlow for more information.

Features

  • Better Performance: This repo provides models that are smaller in size yet achieve better performance than the official ones. See our Benchmark on the Vid4 and ToS3 datasets.
  • Multiple Degradations: This repo supports two types of degradation, i.e., BI & BD. Please refer to this wiki for more details about degradation types.
  • Unified Framework: This repo provides a unified framework for distortion-based and perception-based VSR methods.

Contents

  1. Dependencies
  2. Test
  3. Training
  4. Benchmark
  5. License & Citation
  6. Acknowledgements

Dependencies

  • Ubuntu >= 16.04
  • NVIDIA GPU + CUDA
  • Python 3
  • PyTorch >= 1.0.0
  • Python packages: numpy, matplotlib, opencv-python, pyyaml, lmdb
  • (Optional) Matlab >= R2016b

Test

Note: We apply different models according to the degradation type of the data. The following steps are for 4x upsampling with BD degradation. You can switch to BI degradation by replacing every occurrence of BD with BI below.
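
For reference, the two degradation types boil down to the following minimal Python sketch. It is an illustration only: the Gaussian sigma and the exact resampling used by the official data generation are assumptions here (and OpenCV's bicubic resize is close to, but not identical to, Matlab's imresize), so consult the degradation wiki for the precise settings.

import cv2

def degrade_bd(hr, scale=4, sigma=1.5):
    # BD: Gaussian blur followed by s-fold subsampling (sigma=1.5 is an assumed value)
    blurred = cv2.GaussianBlur(hr, (0, 0), sigmaX=sigma)
    return blurred[::scale, ::scale]

def degrade_bi(hr, scale=4):
    # BI: direct bicubic downsampling
    h, w = hr.shape[:2]
    return cv2.resize(hr, (w // scale, h // scale), interpolation=cv2.INTER_CUBIC)

hr = cv2.imread('./data/Vid4/GT/calendar/0001.png')
lr_bd = degrade_bd(hr)   # counterpart of Gaussian4xLR
lr_bi = degrade_bi(hr)   # counterpart of Bicubic4xLR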

  1. Download the official Vid4 and ToS3 datasets.
bash ./scripts/download/download_datasets.sh BD 

If the above command doesn't work, you can manually download these datasets from Google Drive, and then unzip them under ./data.

The dataset structure is shown below.

data
  ├─ Vid4
    ├─ GT                # Ground-Truth (GT) video sequences
      └─ calendar
        ├─ 0001.png
        └─ ...
    ├─ Gaussian4xLR      # Low Resolution (LR) video sequences in BD degradation
      └─ calendar
        ├─ 0001.png
        └─ ...
    └─ Bicubic4xLR       # Low Resolution (LR) video sequences in BI degradation
      └─ calendar
        ├─ 0001.png
        └─ ...
  └─ ToS3
    ├─ GT
    ├─ Gaussian4xLR
    └─ Bicubic4xLR
  2. Download our pre-trained TecoGAN model. Note that this model is trained with less training data than the official one, since we could only retrieve 212 of the 308 videos in the official training dataset.
bash ./scripts/download/download_models.sh BD TecoGAN

Again, you can download the model from [BD degradation] or [BI degradation], and put it under ./pretrained_models.

  3. Super-resolve the LR videos with TecoGAN. The results will be saved to ./results.
bash ./test.sh BD TecoGAN
  4. Evaluate the SR results using the official metrics. The evaluation code is borrowed from TecoGAN-TensorFlow, with minor modifications to support BI mode.
python ./codes/official_metrics/evaluate.py --model TecoGAN_BD_iter500000
  5. Check the model statistics (FLOPs, parameters and running speed). You can modify the last argument to specify the input video size.
bash ./profile.sh BD TecoGAN 3x134x320

Training

  1. Download the official training dataset following the instructions in TecoGAN-TensorFlow, rename it to VimeoTecoGAN, and place it under ./data.

  2. Generate LMDB for GT data to accelerate IO. The LR counterpart will then be generated on the fly during training.

python ./scripts/create_lmdb.py --dataset VimeoTecoGAN --data_type GT
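
Under the hood, building the LMDB amounts to storing each GT frame under a per-frame key. The sketch below illustrates the idea only; it is not the repo's create_lmdb.py, the key layout follows the meta_info format shown further down, and storing raw uint8 pixel buffers is an assumption.

import glob
import os
import cv2
import lmdb

src = './data/VimeoTecoGAN'                                     # raw PNG frames
env = lmdb.open('./data/VimeoTecoGAN.lmdb', map_size=1 << 40)   # generous (sparse) address space
with env.begin(write=True) as txn:
    for scene in sorted(os.listdir(src)):
        frames = sorted(glob.glob(os.path.join(src, scene, '*.png')))
        for i, path in enumerate(frames):
            img = cv2.imread(path, cv2.IMREAD_COLOR)
            h, w = img.shape[:2]
            key = f'{scene}_{len(frames)}x{h}x{w}_{i:04d}'      # assumed key layout
            txn.put(key.encode('ascii'), img.tobytes())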

The following shows the dataset structure after completing the above two steps.

data
  ├─ VimeoTecoGAN          # Original (raw) dataset
    ├─ scene_2000
      ├─ col_high_0000.png
      ├─ col_high_0001.png
      └─ ...
    ├─ scene_2001
      ├─ col_high_0000.png
      ├─ col_high_0001.png
      └─ ...
    └─ ...
  └─ VimeoTecoGAN.lmdb     # LMDB dataset
    ├─ data.mdb
    ├─ lock.mdb
    └─ meta_info.pkl       # each key has format: [vid]_[total_frame]x[h]x[w]_[i-th_frame]
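
The per-frame keys can be parsed back into their components when reading from the LMDB. A hedged sketch is shown below (not the repo's dataset class; whether values are raw uint8 HxWx3 buffers or encoded images is an assumption — the height and width embedded in the key suggest raw buffers, but adjust the decoding step if needed, e.g. with cv2.imdecode).

import lmdb
import numpy as np

def parse_key(key):
    # key format: [vid]_[total_frame]x[h]x[w]_[i-th_frame]
    parts = key.split('_')
    vid = '_'.join(parts[:-2])             # the video name may itself contain underscores
    total, h, w = map(int, parts[-2].split('x'))
    idx = int(parts[-1])
    return vid, total, h, w, idx

env = lmdb.open('./data/VimeoTecoGAN.lmdb', readonly=True, lock=False)
with env.begin() as txn:
    for raw_key, raw_val in txn.cursor():
        vid, total, h, w, idx = parse_key(raw_key.decode('ascii'))
        frame = np.frombuffer(raw_val, dtype=np.uint8).reshape(h, w, 3)
        print(vid, idx, frame.shape)
        break
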
  3. (Optional, needed only for BI degradation) Manually generate the LR sequences with Matlab's imresize function, and then create an LMDB for them.
# Generate the raw LR video sequences. Results will be saved at ./data/Bicubic4xLR
matlab -nodesktop -nosplash -r "cd ./scripts; generate_lr_BI"

# Create LMDB for the raw LR video sequences
python ./scripts/create_lmdb.py --dataset VimeoTecoGAN --data_type Bicubic4xLR
  4. Train an FRVSR model first. FRVSR has the same generator as TecoGAN, but without the GAN training. When training finishes, copy and rename the last checkpoint from ./experiments_BD/FRVSR/001/train/ckpt/G_iter400000.pth to ./pretrained_models/FRVSR_BD_iter400000.pth. This provides a better initialization for the TecoGAN training.
bash ./train.sh BD FRVSR

Alternatively, you can download and use our pre-trained FRVSR model ([BD degradation] [BI degradation]) instead of training from scratch.

bash ./scripts/download/download_models.sh BD FRVSR
  5. Train a TecoGAN model. By default, training runs in the background and the output is logged to ./experiments_BD/TecoGAN/001/train/train.log.
bash ./train.sh BD TecoGAN
  6. To monitor the training process and visualize the validation performance, run the following script.
python ./scripts/monitor_training.py --degradation BD --model TecoGAN --dataset Vid4

Note that the validation results are NOT the same as the test results reported above, because we use a different implementation of the metrics. The differences are caused by the cropping policy, the LPIPS version, and a few other details.
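
For example, a small change in the evaluation crop already shifts PSNR. A minimal sketch of a border-cropped PSNR on uint8 images is shown below; the 4-pixel border is an assumed value, not necessarily what either metric implementation uses.

import numpy as np

def psnr_cropped(sr, gt, border=4):
    # crop a thin border before comparing, as most VSR evaluations do
    sr = sr[border:-border, border:-border].astype(np.float64)
    gt = gt[border:-border, border:-border].astype(np.float64)
    mse = np.mean((sr - gt) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)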

Benchmark

[1] FLOPs & speed are computed on an RGB sequence with resolution 134×320, on an NVIDIA GeForce GTX 1080 Ti GPU.
[2] Both FRVSR & TecoGAN use 10 residual blocks, while TecoGAN+ uses 16 residual blocks.
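
For reference, the residual blocks counted in note [2] follow the FRVSR-style design: two 3x3 convolutions with a ReLU in between and an identity skip, without batch normalization. The sketch below illustrates the idea under these assumptions only; the channel width and exact layout of this repo's generator may differ.

import torch.nn as nn

class ResidualBlock(nn.Module):
    # conv-ReLU-conv with an identity skip, no batch normalization
    def __init__(self, channels=64):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return x + self.conv2(self.relu(self.conv1(x)))

body = nn.Sequential(*[ResidualBlock(64) for _ in range(10)])   # 16 blocks for TecoGAN+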

License & Citation

If you use this code for your research, please cite the following paper.

@article{tecogan2020,
  title={Learning temporal coherence via self-supervision for GAN-based video generation},
  author={Chu, Mengyu and Xie, You and Mayer, Jonas and Leal-Taix{\'e}, Laura and Thuerey, Nils},
  journal={ACM Transactions on Graphics (TOG)},
  volume={39},
  number={4},
  pages={75--1},
  year={2020},
  publisher={ACM New York, NY, USA}
}

Acknowledgements

This code is built on TecoGAN-TensorFlow, BasicSR and LPIPS. We thank the authors for sharing their code.

If you have any questions, feel free to email [email protected]

Comments
  • module 'metrics.LPIPS' has no attribute 'models'

    Hi,

    Nice project!

    I had to change line 15 in file ./codes/metrics/LPIPS/models/networks_basic.py from import metrics.LPIPS.models as util to import models as util to get it to run. But I am not a python guy so to speak, so perhaps I did it wrong. Hope this helps.

    opened by marchage 6
  • Did you implement Unpaired Video Translation task?

    Hi, thanks for your wonderful work to reimplement the TecoGAN in Pytorch. I wonder whether you have implemented the Unpaired Video Translation task just like what the paper said?

    opened by zhanghm1995 4
  • Training plots of losses

    Hi,

    Could you provide training plots of all losses? I've implemented my own version of TecoGAN that was influenced by this repo, but I can't figure out if I'm getting the best results as my losses don't seem to be decreasing.

    opened by bfreskura 3
  •  Is it normal that only 6gb of 12gb gpu memory is used when upscaled?

    Super cool project and thanks for sharing :-)

    Is it normal that only 6gb of 12gb gpu memory is used when upscaled? And that the clock frequency fluctuates and is not going through to full load?

    In the code I tried to increase the number of workers, but there is no difference, is there another place where I can set something? And is the Multi GPU support only for training, or also for upscaling?

    I hope you find the time to answer my question, I would be very happy to click, Even if I know that you have provided everything free of charge :-)

    opened by CybotDNA 2
  • What about degradation settings of BD?

    Hi,

    Thanks for your great reimplementation!

    I saw that degradation type indeed had big impact on the super-resolution result, especially in vid4-calendar and vid4-foliage, so what about the degradation setting of BD, the link given in readme was broken down~

    Looking forward to your reply :)

    opened by mrluin 2
  • skimage.measure.compare_ssim not present in current version of skimage

    I was having issues with the following traceback:

        Traceback (most recent call last):
          File "/home/alex/TecoGAN-PyTorch/./codes/main.py", line 9, in <module>
            from models import define_model
          File "/home/alex/TecoGAN-PyTorch/codes/models/__init__.py", line 1, in <module>
            from .vsr_model import VSRModel
          File "/home/alex/TecoGAN-PyTorch/codes/models/vsr_model.py", line 7, in <module>
            from .networks import define_generator
          File "/home/alex/TecoGAN-PyTorch/codes/models/networks/__init__.py", line 1, in <module>
            from .tecogan_nets import FRNet, SpatioTemporalDiscriminator, SpatialDiscriminator
          File "/home/alex/TecoGAN-PyTorch/codes/models/networks/tecogan_nets.py", line 12, in <module>
            from metrics.model_summary import register, parse_model_info
          File "/home/alex/TecoGAN-PyTorch/codes/metrics/__init__.py", line 1, in <module>
            from .metric_calculator import MetricCalculator
          File "/home/alex/TecoGAN-PyTorch/codes/metrics/metric_calculator.py", line 13, in <module>
            from .LPIPS.models.dist_model import DistModel
          File "/home/alex/TecoGAN-PyTorch/codes/metrics/LPIPS/models/__init__.py", line 7, in <module>
            from skimage.measure import compare_ssim
        ImportError: cannot import name 'compare_ssim' from 'skimage.measure' (/home/alex/anaconda3/envs/fastai/lib/python3.9/site-packages/skimage/measure/__init__.py)

    According to the following post, replacing the import with structural_similarity works: https://stackoverflow.com/a/67966335
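
    For reference, a drop-in replacement on scikit-image >= 0.18 looks like this; the alias keeps the rest of the LPIPS code unchanged:

        # the old import was removed in scikit-image 0.18:
        #   from skimage.measure import compare_ssim
        from skimage.metrics import structural_similarity as compare_ssim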

    opened by findalexli 1
  • X2 pre-trained

    Hello, thanks for uploading the X2 reimplementation!

    Can you provide an X2 pre-trained model? I tried to train X2 following your training steps, but the result is terrible. Or can you share how you trained X2?

    Besides, the REDS and Vid4 testing datasets only have X4 LR; can you provide the dataset you prepared?

    THX!!!

    opened by Burgerting 1
  • How to upscale our own videos?

    Hi,

    What's the procedure to upscale our own videos? The current code seems to need ground truth results and computes metrics. What if I just want to try it on my own videos?

    Thank you

    opened by Vermeille 1
  • TrainDataset

    Due to network limitations in mainland China, we cannot connect to vimeo.com to download the training set, so please provide a link to the VimeoTecoGAN dataset. Thank you!!

    (open-mmlab) smartcity@smartcity-X780-G30:/media/smartcity/E6AA1145AA1113A1/CaiFeifan/TecoGAN$ python3 dataPrepare.py --start_id 2000 --duration 120 --REMOVE --disk_path /media/smartcity/E6AA1145AA1113A1/CaiFeifan/datasets/VimeoTecoGAN/
    [Configurations]:
    start_id: 2000
    duration: 120
    disk_path: /media/smartcity/E6AA1145AA1113A1/CaiFeifan/datasets/VimeoTecoGAN/
    summary_dir: /media/smartcity/E6AA1145AA1113A1/CaiFeifan/datasets/VimeoTecoGAN/log/
    REMOVE: True
    TEST: False
    End of configuration
    Try loading 308x120.
    https://vimeo.com/121649159
    [vimeo] 121649159: Downloading webpage
    ERROR: Unable to download webpage: <urlopen error [Errno 110] Connection timed out> (caused by URLError(TimeoutError(110, 'Connection timed out')))
    youtube_dl error:https://vimeo.com/121649159
    Skipped invalid link or other error:https://vimeo.com/121649159
    https://vimeo.com/40439273
    [vimeo] 40439273: Downloading webpage
    ERROR: Unable to download webpage: <urlopen error [Errno 110] Connection timed out> (caused by URLError(TimeoutError(110, 'Connection timed out')))
    youtube_dl error:https://vimeo.com/40439273
    Skipped invalid link or other error:https://vimeo.com/40439273
    https://vimeo.com/87389090
    [vimeo] 87389090: Downloading webpage
    ERROR: Unable to download webpage: <urlopen error [Errno 110] Connection timed out> (caused by URLError(TimeoutError(110, 'Connection timed out')))
    youtube_dl error:https://vimeo.com/87389090
    Skipped invalid link or other error:https://vimeo.com/87389090
    https://vimeo.com/335874600
    [vimeo] 335874600: Downloading webpage
    ERROR: Unable to download webpage: <urlopen error [Errno 110] Connection timed out> (caused by URLError(TimeoutError(110, 'Connection timed out')))
    youtube_dl error:https://vimeo.com/335874600
    Skipped invalid link or other error:https://vimeo.com/335874600
    https://vimeo.com/114053015
    [vimeo] 114053015: Downloading webpage
    ERROR: Unable to download webpage: <urlopen error [Errno 110] Connection timed out> (caused by URLError(TimeoutError(110, 'Connection timed out')))
    youtube_dl error:https://vimeo.com/114053015
    Skipped invalid link or other error:https://vimeo.com/114053015
    https://vimeo.com/160578133
    [vimeo] 160578133: Downloading webpage
    ERROR: Unable to download webpage: <urlopen error [Errno 110] Connection timed out> (caused by URLError(TimeoutError(110, 'Connection timed out')))
    youtube_dl error:https://vimeo.com/160578133
    Skipped invalid link or other error:https://vimeo.com/160578133
    https://vimeo.com/148058982
    [vimeo] 148058982: Downloading webpage
    ERROR: Unable to download webpage: <urlopen error [Errno 110] Connection timed out> (caused by URLError(TimeoutError(110, 'Connection timed out')))
    youtube_dl error:https://vimeo.com/148058982
    Skipped invalid link or other error:https://vimeo.com/148058982
    https://vimeo.com/150225201
    [vimeo] 150225201: Downloading webpage
    ERROR: Unable to download webpage: <urlopen error [Errno 110] Connection timed out> (caused by URLError(TimeoutError(110, 'Connection timed out')))
    youtube_dl error:https://vimeo.com/150225201
    Skipped invalid link or other error:https://vimeo.com/150225201
    https://vimeo.com/145096806

    opened by wscffaa 1
  • Resume training

    Thanks for the nice code! I wonder if there are any settings for resuming from the checkpoint? Sometimes it could be a problem if the training process breaks down unexpectedly.

    opened by IceClear 1
  • Reason for padding in forward function of FRNet

    Hi, thanks for sharing this code. What is the reason for padding the flow if its size is not a multiple of 8 in the forward function of FRNet?

     # estimate lr flow (lr_curr -> lr_prev)
            lr_flow = self.fnet(lr_curr, lr_prev)
    
            # pad if size is not a multiple of 8
            pad_h = lr_curr.size(2) - lr_curr.size(2) // 8 * 8
            pad_w = lr_curr.size(3) - lr_curr.size(3) // 8 * 8
            lr_flow_pad = F.pad(lr_flow, (0, pad_w, 0, pad_h), 'reflect')
    
    opened by mehranjeelani 1
  • Wrong model file name when using download_models.sh

    Hi, thank you for the great work! The links to the pretrained models in README.md are perfectly fine, but the filenames of the models downloaded by download_models.sh are wrong. Also, some models are not included in the script. It might need some minor updates. Thanks again.

    opened by EasonLin536 0
  • Standardize the module structure. Make the list of requirements more complete.

    • Make the repo structure use Python standards to make all scripts work (tested with Python 3.9).
    • Make it work with new scikit-image: ed1c5759dfc688caefba020a28bcf076812853ac
    • Document issues regarding specific versions of:
      • CUDA compute capability
      • CUDA Toolkit
      • NVIDIA driver
      • pytorch
      • torchvision

    I had to do all of this just to get the tests.sh command in the readme to run, so this PR is probably necessary for maintenance. Let me know if you have any comments or questions.

    The specific older versions of pytorch and torchvision in my new requirements.txt are probably fine since:

    • The project itself is rather old in terms of ML.
    • The versions I set are stated to work with cards with CUDA compute capability 3.7, and with the state of GPU prices and electronic waste, people can't be expected to buy expensive newer cards when older high-end GPUs work fine. If there are problems with these versions let me know.

    Other changes:

    • Add the GitHub-recommended Python.gitignore and add additional lines specific to the project to make commits, branching, forking, testing, etc much cleaner (ignore Python-generated and project-generated data).

    Alternative PR (not initiated): If you like pgd (I'm not familiar with it) as shown in this other fork: https://github.com/hyunobae/TecoGAN-PyTorch/commit/b79afe91d04d5a1061d42cf2e5f3f1e1d9390866 then I have a separate branch for that you can pull instead, so let me know or go ahead and start a new PR with my pgd-and-new-sckitit-image branch instead: https://github.com/poikilos/TecoGAN-PyTorch/tree/pgd-and-new-scikit-image

    opened by Poikilos 0
  • X2 Result

    Hello, because of some personal reasons I'm sorry for being late to say thanks for providing the X2 pre-trained model (#25).

    I have tried to test X2 FRVSR using the provided pre-trained FRVSR model ([BD-2x-REDS]), but the results are appalling (see the attached frames).

    I also followed "experiments_BD/FRVSR/FRVSR_REDS_2xSR_2GPU/train.yml" to train, and the results are the same as above.

    Do the artifacts appear on your results?

    opened by Burgerting 2
  • Develop

    New feature:

    1. add eta time.
    2. add flake8 code check.

    Refactor:

    1. init lmdb in init.
    2. Avoid downloading data set files repeatedly.
    3. save jpeg image to lmdb.
    4. update scikit-image to 1.16.X

    Fix:

    1. warning of lr.step() before opt.stop().
    opened by waitxxxx 0
  • RuntimeError: DataLoader worker (pid(s) 2126) exited unexpectedly

    I followed the commands mentioned in readme.md and I get the following error:

        bash ./test.sh BD TecoGAN
        2021-06-24 12:44:09,507 [INFO]: ========================================
        2021-06-24 12:44:09,507 [INFO]: Testing model: TecoGAN_BD_iter500000
        2021-06-24 12:44:09,507 [INFO]: ========================================
        2021-06-24 12:44:10,750 [INFO]: Testing on test1: Vid4
        ERROR: Unexpected bus error encountered in worker. This might be caused by insufficient shared memory (shm).
        ERROR: Unexpected bus error encountered in worker. This might be caused by insufficient shared memory (shm).
        Traceback (most recent call last):
          File "/opt/conda/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 986, in _try_get_data
            data = self._data_queue.get(timeout=timeout)
          File "/opt/conda/lib/python3.8/queue.py", line 179, in get
            self.not_empty.wait(remaining)
          File "/opt/conda/lib/python3.8/threading.py", line 306, in wait
            gotit = waiter.acquire(True, timeout)
          File "/opt/conda/lib/python3.8/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
            _error_if_any_worker_fails()
        RuntimeError: DataLoader worker (pid 2126) is killed by signal: Bus error. It is possible that dataloader's workers are out of shared memory. Please try to raise your shared memory limit.

    The above exception was the direct cause of the following exception:

        Traceback (most recent call last):
          File "./codes/main.py", line 315, in <module>
            test(opt)
          File "./codes/main.py", line 180, in test
            for data in test_loader:
          File "/opt/conda/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 517, in __next__
            data = self._next_data()
          File "/opt/conda/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1182, in _next_data
            idx, data = self._get_data()
          File "/opt/conda/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1138, in _get_data
            success, data = self._try_get_data()
          File "/opt/conda/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 999, in _try_get_data
            raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str)) from e
        RuntimeError: DataLoader worker (pid(s) 2126) exited unexpectedly

    I can assure that there was no memory insufficiency and the problem seems to be something else.

    Also there is a change in the function name in the skimage package, which was solved here: https://stackoverflow.com/a/59065449

    opened by santosh-shriyan 0