Official repository of "Investigating Tradeoffs in Real-World Video Super-Resolution"

Overview

RealBasicVSR

[Paper]

This is the official repository of "Investigating Tradeoffs in Real-World Video Super-Resolution" (arXiv). It contains the Colab demo, video demos, and updates of our work.

Authors: Kelvin C.K. Chan, Shangchen Zhou, Xiangyu Xu, Chen Change Loy, Nanyang Technological University

News

  • Nov 2021: Initialized with video demos

Video Demos

The videos below have been compressed, so their quality is inferior to that of the actual outputs.

[Embedded demo videos: output.mp4 × 4]

Code

Our code is built upon MMEditing and will appear in MMEditing soon. Please follow and star this repository and MMEditing for the latest news!

VideoLQ Dataset

You can download the dataset using our Dropbox link.

Citations

@InProceedings{chan2021investigating,
  author = {Chan, Kelvin C.K. and Zhou, Shangchen and Xu, Xiangyu and Loy, Chen Change},
  title = {Investigating Tradeoffs in Real-World Video Super-Resolution},
  booktitle = {arXiv preprint arXiv:2111.12704},
  year = {2021}
}
Comments
  • Missing file (crop_sub_images.py).

    Thanks for your great work, but this project seems to be missing a file (crop_sub_images.py) needed for training. Could you upload it? I would appreciate it.

    opened by sunlustar 10
  • Fail to download dataset in Dropbox

    Hi, thanks for the excellent work! Could you please release the dataset on Google Drive as well? I can't access the Dropbox link from China... Thanks very much!

    opened by Guanner 9
  • ERROR: Failed building wheel for mmcv-full

    When using:

    mim install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cpu/torch1.10.0/index.html

    I am looking to set up a Space on Hugging Face Spaces (https://huggingface.co/spaces) for this model at https://huggingface.co/spaces/akhaliq/RealBasicVSR.

    I was able to get the model working in Colab, but the Space does not support CUDA. Is there a way around this? Thanks.

    Space code: https://huggingface.co/spaces/akhaliq/RealBasicVSR/blob/main/app.py#L4

    opened by AK391 7
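
    For the CUDA-free environment described in this issue, below is a minimal sketch of CPU-only inference. It assumes mmcv-full was installed from the CPU wheel index shown above and reuses the build_model / test_mode=True calls that appear in the repository's inference script; treat it as an illustration, not a verified Spaces recipe.

    # Hedged sketch: CPU-only inference, assuming the mmedit 0.x API and the
    # config/checkpoint names used elsewhere in this README.
    import mmcv
    import torch
    from mmcv.runner import load_checkpoint
    from mmedit.models import build_model

    config = mmcv.Config.fromfile('configs/realbasicvsr_x4.py')
    config.model.pretrained = None
    model = build_model(config.model, test_cfg=config.test_cfg)
    load_checkpoint(model, 'checkpoints/RealBasicVSR_x4.pth', map_location='cpu')
    model.cpu().eval()

    # inputs: (n, t, c, h, w) in [0, 1]; keep t small to bound memory on CPU
    inputs = torch.rand(1, 2, 3, 64, 64)
    with torch.no_grad():
        output = model(inputs, test_mode=True)['output']
    print(output.shape)  # roughly (1, 2, 3, 256, 256) for x4 upscaling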
  • About the evaluation

    I used the official NIQE code to evaluate demo_000 and its result, and got an unexpected outcome: the NIQE value of the raw video is 3.9829, while that of the SR video is 4.3407. I simply fed in every frame and computed the average value. I don't know what went wrong, as this result is the opposite of the one reported in the paper.

    opened by CrissyHoo 7
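
    For reference, a hedged sketch of the per-frame averaging described above, using the NIQE implementation from BasicSR (basicsr.metrics.calculate_niqe) as an assumed stand-in for "the official NIQE code"; the crop border and Y-channel settings here are guesses, and differences in those settings alone can flip such comparisons.

    # Sketch: average NIQE over the frames in a folder. The BasicSR metric and
    # its settings (crop_border, Y-channel conversion) are assumptions here.
    import glob

    import cv2
    from basicsr.metrics import calculate_niqe

    def average_niqe(frame_dir, crop_border=0):
        scores = []
        for path in sorted(glob.glob(f'{frame_dir}/*.png')):
            img = cv2.imread(path)  # BGR, HWC, uint8
            scores.append(calculate_niqe(img, crop_border, input_order='HWC', convert_to='y'))
        return sum(scores) / len(scores)

    print(average_niqe('results/demo_000'))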
  • Process after saving the checkpoint

    Hi. First of all, thank you very much for your project. The quality is impressive. I'm trying to train the network, but after every checkpoint save it starts a long process of 300 iterations. It looks like evaluation, but I couldn't find a value of 300 in the config file. I train on the REDS dataset (24k images), and that process takes longer than the 10k training iterations themselves. What is it? Is there any way to reduce this value (300)? Can it be disabled, and what would be the risk?

    Example: [screenshot]

    opened by fellow-tom 7
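
    A hedged note on the question above: in MMEditing-style configs, the long pass after each checkpoint is normally the evaluation hook running over the whole validation split, so the "300" is the validation set size rather than a config constant. A sketch of the usual knobs, with names assumed from the standard MMEditing config layout rather than taken from this repository:

    # Sketch of the usual MMEditing-style config entries (names assumed from the
    # standard layout; check them against the actual config file).
    checkpoint_config = dict(interval=5000, save_optimizer=True, by_epoch=False)

    # The evaluation hook runs over the *entire* validation split at each
    # interval; its length (e.g. 300) is the number of validation samples.
    evaluation = dict(interval=5000, save_image=False)

    # To evaluate less often, raise the interval; removing the `evaluation` dict
    # (and the val split in `data`) disables it, at the cost of not tracking
    # validation quality during training.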
  • Getting some errors with the inference

    Hi there, thanks for the work. But I'm getting some errors...

    Packages in environment at conda\envs\realbasicvsr (Name / Version / Build / Channel):

    absl-py 1.0.0 pypi_0 pypi addict 2.4.0 pypi_0 pypi blas 1.0 mkl ca-certificates 2021.10.26 haa95532_2 cachetools 4.2.4 pypi_0 pypi certifi 2021.10.8 py37haa95532_0 charset-normalizer 2.0.10 pypi_0 pypi click 7.1.2 pypi_0 pypi colorama 0.4.4 pypi_0 pypi cudatoolkit 10.1.243 h74a9793_0 freetype 2.10.4 hd328e21_0 google-auth 2.3.3 pypi_0 pypi google-auth-oauthlib 0.4.6 pypi_0 pypi grpcio 1.43.0 pypi_0 pypi idna 3.3 pypi_0 pypi imageio 2.13.5 pypi_0 pypi importlib-metadata 4.10.0 pypi_0 pypi intel-openmp 2021.4.0 haa95532_3556 jpeg 9b hb83a4c4_2 libpng 1.6.37 h2a8f88b_0 libtiff 4.2.0 hd0e1b90_0 libuv 1.40.0 he774522_0 libwebp 1.2.0 h2bbff1b_0 lmdb 1.3.0 pypi_0 pypi lz4-c 1.9.3 h2bbff1b_1 markdown 3.3.6 pypi_0 pypi mkl 2021.4.0 haa95532_640 mkl-service 2.4.0 py37h2bbff1b_0 mkl_fft 1.3.1 py37h277e83a_0 mkl_random 1.2.2 py37hf11a4ad_0 mmcv-full 1.4.2 pypi_0 pypi mmedit 0.12.0 pypi_0 pypi model-index 0.1.11 pypi_0 pypi networkx 2.6.3 pypi_0 pypi ninja 1.10.2 py37h559b2a2_3 numpy 1.21.2 py37hfca59bb_0 numpy-base 1.21.2 py37h0829f74_0 oauthlib 3.1.1 pypi_0 pypi olefile 0.46 py37_0 opencv-python-headless 4.5.4.60 pypi_0 pypi openmim 0.1.5 pypi_0 pypi openssl 1.1.1l h2bbff1b_0 ordered-set 4.0.2 pypi_0 pypi packaging 21.3 pypi_0 pypi pandas 1.3.5 pypi_0 pypi pillow 8.4.0 py37hd45dc43_0 pip 21.2.4 py37haa95532_0 protobuf 3.19.1 pypi_0 pypi pyasn1 0.4.8 pypi_0 pypi pyasn1-modules 0.2.8 pypi_0 pypi pyparsing 3.0.6 pypi_0 pypi python 3.7.11 h6244533_0 python-dateutil 2.8.2 pypi_0 pypi pytorch 1.7.1 py3.7_cuda101_cudnn7_0 pytorch pytz 2021.3 pypi_0 pypi pywavelets 1.2.0 pypi_0 pypi pyyaml 6.0 pypi_0 pypi regex 2021.11.10 pypi_0 pypi requests 2.27.1 pypi_0 pypi requests-oauthlib 1.3.0 pypi_0 pypi rsa 4.8 pypi_0 pypi scikit-image 0.19.1 pypi_0 pypi scipy 1.7.3 pypi_0 pypi setuptools 58.0.4 py37haa95532_0 six 1.16.0 pyhd3eb1b0_0 sqlite 3.37.0 h2bbff1b_0 tabulate 0.8.9 pypi_0 pypi tensorboard 2.7.0 pypi_0 pypi tensorboard-data-server 0.6.1 pypi_0 pypi tensorboard-plugin-wit 1.8.1 pypi_0 pypi tifffile 2021.11.2 pypi_0 pypi tk 8.6.11 h2bbff1b_0 torchaudio 0.7.2 py37 pytorch torchvision 0.8.2 py37_cu101 pytorch typing_extensions 3.10.0.2 pyh06a4308_0 urllib3 1.26.7 pypi_0 pypi vc 14.2 h21ff451_1 vs2015_runtime 14.27.29016 h5e58377_2 werkzeug 2.0.2 pypi_0 pypi wheel 0.37.0 pyhd3eb1b0_1 wincertstore 0.2 py37haa95532_2 xz 5.2.5 h62dcd97_0 yapf 0.32.0 pypi_0 pypi zipp 3.7.0 pypi_0 pypi zlib 1.2.11 h8cc25b3_4 zstd 1.4.9 h19a0ad4_0


    For pictures, I ran the test code:

    (realbasicvsr) C:\RealBasicVSR>python inference_realbasicvsr.py configs/realbasicvsr_x4.py checkpoints/RealBasicVSR_x4.pth data/demo_000 results/demo_000
    2022-01-07 06:36:34,070 - mmedit - INFO - load checkpoint from torchvision path: torchvision://vgg19
    load checkpoint from local path: checkpoints/RealBasicVSR_x4.pth

    It did nothing.

    For video, I ran the test code with --max_seq_len=2:

    (realbasicvsr) C:\RealBasicVSR>python inference_realbasicvsr.py configs/realbasicvsr_x4.py checkpoints/RealBasicVSR_x4.pth data/demo_001.mp4 results/demo_001.mp4 --max_seq_len=2 --fps=12.5
    2022-01-07 06:38:02,236 - mmedit - INFO - load checkpoint from torchvision path: torchvision://vgg19
    load checkpoint from local path: checkpoints/RealBasicVSR_x4.pth
    Traceback (most recent call last):
      File "inference_realbasicvsr.py", line 144, in <module>
        main()
      File "inference_realbasicvsr.py", line 130, in main
        cv2.destroyAllWindows()
    cv2.error: OpenCV(4.5.4) D:\a\opencv-python\opencv-python\opencv\modules\highgui\src\window.cpp:1268: error: (-2:Unspecified error) The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Cocoa support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script in function 'cvDestroyAllWindows'

    It gave this error.

    For video, I ran the test code with the default settings:

    (realbasicvsr) C:\RealBasicVSR>python inference_realbasicvsr.py configs/realbasicvsr_x4.py checkpoints/RealBasicVSR_x4.pth data/demo_001.mp4 results/demo_001.mp4 --fps=12.5 2022-01-07 06:40:14,850 - mmedit - INFO - load checkpoint from torchvision path: torchvision://vgg19 load checkpoint from local path: checkpoints/RealBasicVSR_x4.pth Traceback (most recent call last): File "inference_realbasicvsr.py", line 144, in main() File "inference_realbasicvsr.py", line 117, in main outputs = model(inputs, test_mode=True)['output'].cpu() File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\mmcv\runner\fp16_utils.py", line 98, in new_func return old_func(*args, **kwargs) File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\mmedit\models\restorers\srgan.py", line 95, in forward return self.forward_test(lq, gt, **kwargs) File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\mmedit\models\restorers\real_esrgan.py", line 211, in forward_test output = _model(lq) File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\mmedit\models\backbones\sr_backbones\real_basicvsr_net.py", line 87, in forward outputs = self.basicvsr(lqs) File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\mmedit\models\backbones\sr_backbones\basicvsr_net.py", line 126, in forward flows_forward, flows_backward = self.compute_flow(lrs) File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\mmedit\models\backbones\sr_backbones\basicvsr_net.py", line 98, in compute_flow flows_backward = self.spynet(lrs_1, lrs_2).view(n, t - 1, 2, h, w) File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\mmedit\models\backbones\sr_backbones\basicvsr_net.py", line 346, in forward input=self.compute_flow(ref, supp), File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\mmedit\models\backbones\sr_backbones\basicvsr_net.py", line 315, in compute_flow ], 1)) File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\mmedit\models\backbones\sr_backbones\basicvsr_net.py", line 420, in forward return self.basic_module(tensor_input) File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\torch\nn\modules\container.py", line 117, in forward input = module(input) File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\mmcv\cnn\bricks\conv_module.py", line 201, in forward x = 
self.conv(x) File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\torch\nn\modules\conv.py", line 423, in forward return self._conv_forward(input, self.weight) File "C:\Users\breakmycurse.conda\envs\realbasicvsr\lib\site-packages\torch\nn\modules\conv.py", line 420, in _conv_forward self.padding, self.dilation, self.groups) RuntimeError: CUDA out of memory. Tried to allocate 4.35 GiB (GPU 0; 11.00 GiB total capacity; 4.37 GiB already allocated; 2.46 GiB free; 7.07 GiB reserved in total by PyTorch)

    It gave an out-of-memory (OOM) error.

    System: Windows 10 64-bit, GTX 1080 Ti (11 GB). The model is in the right folder, and the environment was created with conda using the given commands in order.

    opened by FlowDownTheRiver 5
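
    A couple of hedged notes on the errors above. The cvDestroyAllWindows failure is expected with opencv-python-headless (listed in the environment above), which ships without GUI support, so that call can be guarded or removed. The OOM comes from pushing too many frames through the network at once; the sketch below shows the kind of chunking that --max_seq_len already performs, with model and inputs assumed to be prepared as in inference_realbasicvsr.py.

    # Rough sketch: run a long (n, t, c, h, w) sequence through the model in
    # short chunks to bound GPU memory. `model`/`inputs` are assumed to be set
    # up as in inference_realbasicvsr.py; the chunk length is arbitrary.
    import torch

    def chunked_inference(model, inputs, max_seq_len=2):
        outputs = []
        with torch.no_grad():
            for i in range(0, inputs.size(1), max_seq_len):
                chunk = inputs[:, i:i + max_seq_len].cuda()
                outputs.append(model(chunk, test_mode=True)['output'].cpu())
                torch.cuda.empty_cache()  # release cached blocks between chunks
        return torch.cat(outputs, dim=1)

    Note that chunking limits temporal propagation across chunk boundaries, so very small values of max_seq_len can reduce output quality.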
  • RuntimeError: storage has wrong size: expected 0 got 1728

    Exception has occurred: RuntimeError RealBasicVSR: PerceptualLoss: storage has wrong size: expected 0 got 1728

    During handling of the above exception, another exception occurred:

    File "D:\Users......\RealBasicVSR-master\realbasicvsr\models\restorers\real_basicvsr.py", line 65, in init super().init(generator, discriminator, gan_loss, pixel_loss,

    During handling of the above exception, another exception occurred:

    File "D:\Users......\RealBasicVSR-master\realbasicvsr\models\builder.py", line 20, in build return build_from_cfg(cfg, registry, default_args) File "D:\Users......\RealBasicVSR-master\realbasicvsr\models\builder.py", line 58, in build_model return build(cfg, MODELS, dict(train_cfg=train_cfg, test_cfg=test_cfg)) File "D:\Users......\RealBasicVSR-master\inference_realbasicvsr.py", line 67, in init_model model = build_model(config.model, test_cfg=config.test_cfg) File "D:\Users......\RealBasicVSR-master\inference_realbasicvsr.py", line 81, in main model = init_model(args.config, args.checkpoint) File "D:\Users......\RealBasicVSR-master\inference_realbasicvsr.py", line 149, in main()

    opened by simdjeff 4
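
    A hedged note on the error above: "storage has wrong size" during PerceptualLoss construction usually points at a corrupted or partially downloaded weight file, either the RealBasicVSR checkpoint or the torchvision://vgg19 weights cached for the perceptual loss. A quick sanity check, with paths assumed:

    # Sketch: check that the weight file deserializes cleanly; a truncated
    # download typically raises this same "storage has wrong size" error.
    import torch

    state = torch.load('checkpoints/RealBasicVSR_x4.pth', map_location='cpu')
    print('checkpoint entries:', len(state))
    # The VGG19 weights live in the default torch hub cache (exact filename may
    # differ); deleting the cached file forces a clean re-download:
    #   ~/.cache/torch/hub/checkpoints/vgg19-dcbb9e9d.pth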
  • loss for training

    Hi, thanks for your wonderful work. In the paper you use the Charbonnier (cb) loss, but in the code's config file you use the L1 loss. Which one is correct? Have you ever tried to modify the model for x1? Looking forward to your reply. Thank you.

    opened by iSmida 3
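
    An illustration of the two options compared above, written in MMEditing config style with the registered loss names (L1Loss, CharbonnierLoss); which variant matches the released checkpoint is exactly the question asked here, so treat this as a sketch only.

    # Sketch only: the two pixel-loss variants discussed above. Loss names are
    # those registered in mmedit; the weights are common defaults, not values
    # taken from this repository.
    pixel_loss = dict(type='L1Loss', loss_weight=1.0, reduction='mean')
    # versus the Charbonnier (differentiable L1) variant referred to as "cb loss":
    # pixel_loss = dict(type='CharbonnierLoss', loss_weight=1.0, reduction='mean')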
  • 'ConfigDict' object has no attribute 'model'

    Hi. I installed the following:

    conda create -n vsr3 python=3.7 -y
    conda activate vsr3
    conda install pytorch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 cudatoolkit=10.1 -c pytorch -y
    conda install -c omnia openmm -y
    #conda install -c esri mmcv-full -y
    pip install mmcv-full==1.3.17 -f https://download.openmmlab.com/mmcv/dist/11.1/torch1.10.0/index.html
    python3 -m pip install mmedit
    

    Then I run:

    python inference_realbasicvsr.py configs/realbasicvsr_x4.py checkpoints/RealBasicVSR_x4.pth data/demo_001.mp4 results/demo_001.mp4 --fps=12.5
    

    Then I get an error:

    Traceback (most recent call last):
      File "inference_realbasicvsr.py", line 148, in <module>
        main()
      File "inference_realbasicvsr.py", line 80, in main
        model = init_model(args.config, args.checkpoint)
      File "inference_realbasicvsr.py", line 64, in init_model
        config.model.pretrained = None
      File "/home/alex/anaconda3/envs/vsr3/lib/python3.7/site-packages/mmcv/utils/config.py", line 507, in __getattr__
        return getattr(self._cfg_dict, name)
      File "/home/alex/anaconda3/envs/vsr3/lib/python3.7/site-packages/mmcv/utils/config.py", line 48, in __getattr__
        raise ex
    AttributeError: 'ConfigDict' object has no attribute 'model'
    

    I also checked the object in a Jupyter notebook:

    config = mmcv.Config.fromfile(config)
    

    And the config contains:

    Config (path: configs/realbasicvsr_x4.py): {'argparse': <module 'argparse' from '/home/alex/anaconda3/envs/vsr3/lib/python3.7/argparse.py'>, 'os': <module 'os' from '/home/alex/anaconda3/envs/vsr3/lib/python3.7/os.py'>, 'osp': <module 'posixpath' from '/home/alex/anaconda3/envs/vsr3/lib/python3.7/posixpath.py'>, 'sys': <module 'sys' (built-in)>, 'Pool': <bound method BaseContext.Pool of <multiprocessing.context.DefaultContext object at 0x7f8f1c640d10>>, 'cv2': <module 'cv2' from '/home/alex/anaconda3/envs/vsr3/lib/python3.7/site-packages/cv2/__init__.py'>, 'mmcv': <module 'mmcv' from '/home/alex/anaconda3/envs/vsr3/lib/python3.7/site-packages/mmcv/__init__.py'>, 'np': <module 'numpy' from '/home/alex/anaconda3/envs/vsr3/lib/python3.7/site-packages/numpy/__init__.py'>, 'worker': <function worker at 0x7f8e83b0a170>, 'extract_subimages': <function extract_subimages at 0x7f8e83b0a200>, 'main_extract_subimages': <function main_extract_subimages at 0x7f8e83b0a440>, 'parse_args': <function parse_args at 0x7f8e83b0a050>}
    
    opened by format37 3
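
    A hedged observation on the dump above: the loaded file defines worker / extract_subimages rather than a model dict, which suggests the path passed as the config points at a preprocessing script instead of the model config. A quick check one might run, with the expected keys assumed from the inference script:

    # Sketch: verify that the file passed as the config is really a model config.
    # Expected keys ('model', 'test_cfg') are assumed from inference_realbasicvsr.py.
    import mmcv

    config = mmcv.Config.fromfile('configs/realbasicvsr_x4.py')
    missing = [k for k in ('model', 'test_cfg') if k not in config]
    if missing:
        raise ValueError(f'Not a model config, missing keys: {missing}; '
                         'check whether the file was overwritten by a script.')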
  • resources are always insufficient

    When training on video material, RAM and GPU memory are always insufficient. Is there any parameter to solve this problem? With an almost 1-minute .mp4 (25 fps, 3 MB) running on 12 GB RAM and 12 GB VRAM, I still run out of resources. How should I preprocess the input video to make it easier to run the code?

    opened by hello-eternity 2
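
    A hedged sketch of the kind of preprocessing asked about above: reading the video in short segments instead of loading the whole clip, so peak RAM stays bounded. Segment length and the use of OpenCV are arbitrary choices for illustration, not the repository's method.

    # Rough sketch: iterate over a long video in short segments with OpenCV so
    # the whole clip never sits in memory at once. Segment length is arbitrary.
    import cv2

    def iter_segments(video_path, seg_len=10):
        cap = cv2.VideoCapture(video_path)
        frames = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            frames.append(frame)
            if len(frames) == seg_len:
                yield frames
                frames = []
        if frames:
            yield frames
        cap.release()

    Each yielded segment can then be super-resolved independently (for example with a small --max_seq_len) and written out incrementally.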
  • I want information about the Colab Demo environment specification

    I would like information about the Colab Demo environment specification. I only know the RAM and disk capacity. Could you share the specification of the Colab Google Compute Engine GPU (name and any other information)?

    opened by ehanbin98 2
  • about training problem

    Hello @ckkelvinchan, I want to retrain this model, but I have a problem with the UDM10 folder structure expected by the config file. I don't know how to construct this folder. Is it correct to merge every folder's blurx4 files into a single folder like this?

    opened by Dylan-Jinx 0
  • Pre-trained checkpoint

    I wonder what kind of dataset you used to train the checkpoint (RealBasicVSR_x4.pth) that you share through Google Drive. Is it REDS_bicubic or something else?

    opened by NamLam-L 0
  • Torch.jit.trace gets the wrong model

    Has anybody tried to convert this model to a ScriptModule with torch.jit.trace? We finally got it traced, but given the same input, the generated ScriptModule produces the wrong result. During conversion there are messages like the ones below:

    /home/hermanhe/.local/lib/python3.10/site-packages/torch/functional.py:478: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at  ../aten/src/ATen/native/TensorShape.cpp:2894.)
      return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
    /home/hermanhe/.local/lib/python3.10/site-packages/mmedit/models/restorers/srgan.py:94: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
      if test_mode:
    /home/hermanhe/.local/lib/python3.10/site-packages/mmedit/models/backbones/sr_backbones/real_basicvsr_net.py:83: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
      if torch.mean(torch.abs(residues)) < self.dynamic_refine_thres:
    /home/hermanhe/.local/lib/python3.10/site-packages/mmedit/models/backbones/sr_backbones/basicvsr_net.py:118: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
      assert h >= 64 and w >= 64, (
    /home/hermanhe/.local/lib/python3.10/site-packages/mmedit/models/backbones/sr_backbones/basicvsr_net.py:73: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
      if lrs.size(1) % 2 == 0:
    /home/hermanhe/.local/lib/python3.10/site-packages/mmedit/models/backbones/sr_backbones/basicvsr_net.py:334: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
      w_up = w if (w % 32) == 0 else 32 * (w // 32 + 1)
    /home/hermanhe/.local/lib/python3.10/site-packages/mmedit/models/backbones/sr_backbones/basicvsr_net.py:335: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
      h_up = h if (h % 32) == 0 else 32 * (h // 32 + 1)
    /home/hermanhe/.local/lib/python3.10/site-packages/mmedit/models/backbones/sr_backbones/basicvsr_net.py:335: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
      h_up = h if (h % 32) == 0 else 32 * (h // 32 + 1)
    /home/hermanhe/.local/lib/python3.10/site-packages/mmedit/models/backbones/sr_backbones/basicvsr_net.py:296: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
      flow = ref[0].new_zeros(n, 2, h // 32, w // 32)
    /home/hermanhe/.local/lib/python3.10/site-packages/mmedit/models/common/flow_warp.py:27: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
      if x.size()[-2:] != flow.size()[1:3]:
    /home/hermanhe/.local/lib/python3.10/site-packages/mmedit/models/common/flow_warp.py:41: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
      grid_flow_x = 2.0 * grid_flow[:, :, :, 0] / max(w - 1, 1) - 1.0
    /home/hermanhe/.local/lib/python3.10/site-packages/mmedit/models/common/flow_warp.py:42: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
      grid_flow_y = 2.0 * grid_flow[:, :, :, 1] / max(h - 1, 1) - 1.0
    /home/hermanhe/.local/lib/python3.10/site-packages/mmedit/models/backbones/sr_backbones/basicvsr_net.py:352: TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
      flow[:, 0, :, :] *= float(w) / float(w_up)
    /home/hermanhe/.local/lib/python3.10/site-packages/mmedit/models/backbones/sr_backbones/basicvsr_net.py:353: TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
      flow[:, 1, :, :] *= float(h) / float(h_up)
    /home/hermanhe/.local/lib/python3.10/site-packages/mmedit/models/backbones/sr_backbones/basicvsr_net.py:132: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
      if i < t - 1:  # no warping required for the last timestep
    

    Any advice on getting a correct ScriptModule? Thanks!

    opened by Dream-math 0
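
    A hedged note on the warnings above: RealBasicVSRNet.forward contains data-dependent Python control flow (e.g. the dynamic_refine_thres check), which tracing freezes to whichever branch the example input happened to take, so a traced module can silently diverge on other inputs; torch.jit.script, rather than trace, would be needed to preserve those branches. A sketch of how one might detect the divergence, with generator assumed to be the RealBasicVSRNet backbone (model.generator):

    # Sketch: detect trace/eager divergence caused by data-dependent branches.
    # `generator` is assumed to be the RealBasicVSRNet backbone (model.generator).
    import torch

    example = torch.rand(1, 5, 3, 64, 64)     # (n, t, c, h, w); h and w must be >= 64
    traced = torch.jit.trace(generator, example)

    other = torch.rand(1, 5, 3, 64, 64) * 0.1  # different statistics -> other branch
    with torch.no_grad():
        diff = (traced(other) - generator(other)).abs().max()
    print('max abs difference vs eager:', diff.item())
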
Owner
Kelvin C.K. Chan