PyTorch implementation of Super SloMo by Jiang et al.

Overview

License: MIT

PyTorch implementation of "Super SloMo: High Quality Estimation of Multiple Intermediate Frames for Video Interpolation" by Jiang H., Sun D., Jampani V., Yang M., Learned-Miller E. and Kautz J. [Project] [Paper]

Check out our paper "Deep Slow Motion Video Reconstruction with Hybrid Imaging System" published in TPAMI.

Results

Results on the UCF101 dataset, computed with the evaluation script provided by the paper's author (get_results_bug_fixed.sh). The script uses motion masks when calculating PSNR, SSIM, and IE.

Method                   PSNR    SSIM    IE
DVF                      29.37   0.861   16.37
SepConv - L_1            30.18   0.875   15.54
SepConv - L_F            30.03   0.869   15.78
SuperSloMo_Adobe240fps   29.80   0.870   15.68
Pretrained (this repo)   29.77   0.874   15.58
SuperSloMo               30.22   0.880   15.18
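
For reference, PSNR and IE for a ground-truth/interpolated frame pair are conventionally computed as below (IE is the root-mean-squared pixel difference); this is a minimal sketch only, and the evaluation script additionally restricts the metrics to motion-mask regions:

import numpy as np

def psnr(gt, pred):
    # Peak signal-to-noise ratio for 8-bit frames, in dB.
    mse = np.mean((gt.astype(np.float64) - pred.astype(np.float64)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

def interpolation_error(gt, pred):
    # IE is the root-mean-squared pixel difference between the frames.
    return np.sqrt(np.mean((gt.astype(np.float64) - pred.astype(np.float64)) ** 2))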

Prerequisites

This codebase was developed and tested with PyTorch 0.4.1, CUDA 9.2, and Python 3.6.

For GPU, run

conda install pytorch=0.4.1 cuda92 torchvision==0.2.0 -c pytorch

For CPU, run

conda install pytorch-cpu=0.4.1 torchvision-cpu==0.2.0 cpuonly -c pytorch

Training

Preparing training data

To train the model with the provided code, the data first needs to be arranged in a specific layout. The create_dataset.py script uses ffmpeg to extract frames from videos.
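
Under the hood the script shells out to ffmpeg. A minimal sketch of an equivalent extraction call is below; the -vsync 0 and -qscale:v 2 flags mirror the ffmpeg invocations logged in the Comments section, while the helper itself is illustrative:

import os
import subprocess

def extract_frames(ffmpeg_dir, video_path, out_dir):
    # Dump every frame of the video as a numbered, high-quality JPEG.
    os.makedirs(out_dir, exist_ok=True)
    subprocess.run([
        os.path.join(ffmpeg_dir, "ffmpeg"),
        "-i", video_path,
        "-vsync", "0",        # keep the original frame timing
        "-qscale:v", "2",     # near-lossless JPEG quality
        os.path.join(out_dir, "%06d.jpg"),
    ], check=True)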

Adobe240fps

For adobe240fps, download the dataset, unzip it, and then run the following command:

python data\create_dataset.py --ffmpeg_dir path\to\folder\containing\ffmpeg --videos_folder path\to\adobe240fps\videoFolder --dataset_folder path\to\dataset --dataset adobe240fps

Custom

For a custom dataset, run the following command:

python data\create_dataset.py --ffmpeg_dir path\to\folder\containing\ffmpeg --videos_folder path\to\custom\videoFolder --dataset_folder path\to\dataset

The default train-test split is 90-10. You can change it with the command-line argument --train_test_split.
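
For example, to use an 80-20 split instead (assuming the flag takes the training-set percentage, consistent with the 90-10 default):

python data\create_dataset.py --ffmpeg_dir path\to\folder\containing\ffmpeg --videos_folder path\to\videoFolder --dataset_folder path\to\dataset --train_test_split 80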

Run the following command for help / more info:

python data\create_dataset.py --h

Training

In train.ipynb, set the parameters (dataset path, checkpoint directory, etc.) and run all the cells.

Alternatively, to train from the terminal, run:

python train.py --dataset_root path\to\dataset --checkpoint_dir path\to\save\checkpoints

Run the following command for help and more options, such as resuming from a checkpoint, progress-report frequency, etc.:

python train.py --h
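
For orientation, resuming from a checkpoint in PyTorch generally means saving and reloading state dicts; below is a minimal self-contained sketch (the dictionary keys and file names are illustrative, not necessarily what train.py uses):

import torch
import torch.nn as nn

model = nn.Linear(4, 4)  # stand-in for the Super SloMo networks
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Save enough state to resume training later.
torch.save({
    "epoch": 10,
    "model_state": model.state_dict(),
    "optimizer_state": optimizer.state_dict(),
}, "checkpoint.ckpt")

# Resume: restore both states and continue from the next epoch.
ckpt = torch.load("checkpoint.ckpt")
model.load_state_dict(ckpt["model_state"])
optimizer.load_state_dict(ckpt["optimizer_state"])
start_epoch = ckpt["epoch"] + 1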

Tensorboard

To visualize training progress, run TensorBoard from the project directory:

tensorboard --logdir log --port 6007

and then open http://localhost:6007 in your browser.

Evaluation

Pretrained model

You can download the pretrained model, trained on the adobe240fps dataset, here.

Video Converter

You can convert any video to a slow-motion or high-fps video (or both) using video_to_slomo.py. Use the command:

# Windows
python video_to_slomo.py --ffmpeg path\to\folder\containing\ffmpeg --video path\to\video.mp4 --sf N --checkpoint path\to\checkpoint.ckpt --fps M --output path\to\output.mkv

# Linux
python video_to_slomo.py --video path/to/video.mp4 --sf N --checkpoint path/to/checkpoint.ckpt --fps M --output path/to/output.mkv

If you want to convert a video from 30 fps to 90 fps, set fps to 90 and sf to 3 (so you get three times as many frames as in the original video).
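
As a quick sanity check on the two flags, assuming --sf inserts sf - 1 intermediate frames between every pair of consecutive input frames (which matches the stated 3x behavior up to the final frame):

def output_frame_count(n_input_frames, sf):
    # Each of the (n - 1) consecutive frame pairs gains (sf - 1) new frames.
    return n_input_frames + (n_input_frames - 1) * (sf - 1)

# A 10 s clip at 30 fps has 300 frames; sf=3 yields 898 frames, which plays
# for ~10 s at fps=90 (smooth high fps) or ~30 s at fps=30 (3x slow motion).
assert output_frame_count(300, 3) == 898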

Run the following command for help / more info:

python video_to_slomo.py --h

You can also use eval.py if you do not want to use ffmpeg. You will instead need to install opencv-python using pip for video IO. A sample usage would be:

python eval.py data/input.mp4 --checkpoint=data/SuperSloMo.ckpt --output=data/output.mp4 --scale=4

Use python eval.py --help for more details
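
For orientation, the OpenCV-based video IO that eval.py depends on looks roughly like the sketch below; the model is what would synthesize the intermediate frames, so duplicating frames here is only a placeholder:

import cv2

cap = cv2.VideoCapture("data/input.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
        int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))

scale = 4  # multiply the frame rate by this factor
out = cv2.VideoWriter("data/output.mp4",
                      cv2.VideoWriter_fourcc(*"mp4v"), fps * scale, size)

ok, prev = cap.read()
while ok:
    ok, frame = cap.read()
    out.write(prev)
    if not ok:
        break
    # The model would synthesize (scale - 1) intermediate frames between
    # prev and frame here; duplicating prev is just a stand-in.
    for _ in range(scale - 1):
        out.write(prev)
    prev = frame

cap.release()
out.release()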

More info TBA

References:

Parts of the code are based on TheFairBear/Super-SlowMo.

Comments
  • Issue with the output when evaluating with the pretrained model

    Hi, forgive me if this sounds a bit dense; I don't really know anything about machine learning, but I have a passing interest in making slo-mo videos. I'm currently evaluating on a CPU (AMD Threadripper 1920X) as I don't have a CUDA device, but I'm having trouble with the output from the video_to_slomo.py script. As a reference, I used ffmpeg to convert your original gif to mp4 and tried to create a video, but the output looks a bit off. This is what the converted video looks like: Link here. Another video I tried also looks like this. Any ideas why this is?

    Thanks

    bug 
    opened by ybabs 33
  • about the output result

    Thanks for your work. I tried my own video:

    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/home/chenchen/PycharmProjects/Super-SloMo/0103_1.mov':
      Metadata:
        major_brand     : qt
        minor_version   : 0
        compatible_brands: qt
        creation_time   : 2019-01-03 03:05:52
      Duration: 00:00:40.33, start: 0.000000, bitrate: 5425 kb/s
        Stream #0:0(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 90 kb/s (default)
        Metadata:
          creation_time   : 2019-01-03 03:05:52
          handler_name    : Core Media Data Handler
        Stream #0:1(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p(tv, bt709), 960x540, 5328 kb/s, 29.98 fps, 29.97 tbr, 600 tbn, 1200 tbc (default)
        Metadata:
          rotate          : 90
          creation_time   : 2019-01-03 03:05:52
          handler_name    : Core Media Data Handler
          encoder         : H.264
        Side data:
          displaymatrix: rotation of -90.00 degrees
        Stream #0:2(und): Data: none (mebx / 0x7862656D), 0 kb/s (default)
        Metadata:
          creation_time   : 2019-01-03 03:05:52
          handler_name    : Core Media Data Handler
        Stream #0:3(und): Data: none (mebx / 0x7862656D), 0 kb/s (default)
        Metadata:
          creation_time   : 2019-01-03 03:05:52
          handler_name    : Core Media Data Handler

    I set fps = 240 and sf = 8, but the output video has only one frame (it looks like the first frame of the original video). I wonder if there is anything wrong with my configuration?

    opened by Brizel 25
  • CUDA out of memory

    After frequent use I get a "CUDA out of memory" error. I tried to change the batch size, but it didn't help. The script could use a GPU cache cleaner.
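
    One common mitigation, if cached allocations are the culprit, is to clear PyTorch's GPU memory cache between runs; a minimal sketch (this only releases cached blocks, not tensors that are still referenced):

    import torch

    # Releases the allocator's cached, unused GPU memory back to the driver;
    # tensors that are still referenced are not freed, so drop references first.
    torch.cuda.empty_cache()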

    bug 
    opened by mrtajniak 10
  • valPSNR increase very slowly

    Nice work thanks!

    I have a question about convergence speed

    I've run your training code for almost 50 epochs, and the valPSNR is still under 15.5.

    Is that normal?

    Also, the created clips do not seem to be sequential. Is that OK?

    duplicate 
    opened by hogeman2 9
  • cuDNN error: CUDNN_STATUS_EXECUTION_FAILED

    Describe the bug: after running the script, the error "cuDNN error: CUDNN_STATUS_EXECUTION_FAILED" appears.

    To Reproduce: run the script.

    Expected behavior: a video generated at 120 fps.

    Interpolated results/error output

    (base) C:\Users\Amos\SloMo\SuperSloMo>python video_to_slomo.py --ffmpeg C:\Users\Amos\SloMo\ffmpeg\bin\ --video C:\Users\Amos\SloMo\Input\beachvideo.mp4 --sf 5 --checkpoint C:\Users\Amos\SloMo\SuperSloMo\SuperSloMo.ckpt --fps 120 --output C:\Users\Amos\SloMo\Output\beachvideo120.mp4 --batch_size 1
    C:\Users\Amos\SloMo\ffmpeg\bin\ffmpeg -i C:\Users\Amos\SloMo\Input\beachvideo.mp4 -vsync 0 -qscale:v 2 tmpSuperSloMo\input/%06d.jpg
    ffmpeg version 4.1.4 Copyright (c) 2000-2019 the FFmpeg developers
      built with gcc 9.1.1 (GCC) 20190716
      configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-amf --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth
      libavutil      56. 22.100 / 56. 22.100
      libavcodec     58. 35.100 / 58. 35.100
      libavformat    58. 20.100 / 58. 20.100
      libavdevice    58.  5.100 / 58.  5.100
      libavfilter     7. 40.101 /  7. 40.101
      libswscale      5.  3.100 /  5.  3.100
      libswresample   3.  3.100 /  3.  3.100
      libpostproc    55.  3.100 / 55.  3.100
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'C:\Users\Amos\SloMo\Input\beachvideo.mp4':
      Metadata:
        major_brand     : 3gp5
        minor_version   : 0
        compatible_brands: 3gp5isom
        creation_time   : 2018-07-27T13:58:06.000000Z
        location        : +30.2159-085.8796/
        location-eng    : +30.2159-085.8796/
      Duration: 00:00:15.70, start: 0.000000, bitrate: 35206 kb/s
        Stream #0:0(und): Video: h264 (Baseline) (avc1 / 0x31637661), yuv420p(tv, smpte170m/bt470bg/smpte170m), 3840x2160, 35074 kb/s, 30 fps, 30 tbr, 90k tbn, 180k tbc (default)
        Metadata:
          creation_time   : 2018-07-27T13:58:06.000000Z
        Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 124 kb/s (default)
        Metadata:
          creation_time   : 2018-07-27T13:58:06.000000Z
    Stream mapping:
      Stream #0:0 -> #0:0 (h264 (native) -> mjpeg (native))
    Press [q] to stop, [?] for help
    [swscaler @ 0000016d53584240] deprecated pixel format used, make sure you did set range correctly
    Output #0, image2, to 'tmpSuperSloMo\input/%06d.jpg':
      Metadata:
        major_brand     : 3gp5
        minor_version   : 0
        compatible_brands: 3gp5isom
        location-eng    : +30.2159-085.8796/
        location        : +30.2159-085.8796/
        encoder         : Lavf58.20.100
        Stream #0:0(und): Video: mjpeg, yuvj420p(pc), 3840x2160, q=2-31, 200 kb/s, 30 fps, 30 tbn, 30 tbc (default)
        Metadata:
          creation_time   : 2018-07-27T13:58:06.000000Z
          encoder         : Lavc58.35.100 mjpeg
        Side data:
          cpb: bitrate max/min/avg: 0/0/200000 buffer size: 0 vbv_delay: -1
    frame=  471 fps= 35 q=2.0 Lsize=N/A time=00:00:15.70 bitrate=N/A speed=1.18x
    video:268721kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
      0%|          | 0/470 [00:00<?, ?it/s]
    Traceback (most recent call last):
      File "video_to_slomo.py", line 216, in <module>
        main()
      File "video_to_slomo.py", line 165, in main
        flowOut = flowComp(torch.cat((I0, I1), dim=1))
      File "C:\Users\Amos\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 493, in __call__
        result = self.forward(*input, **kwargs)
      File "C:\Users\Amos\SloMo\SuperSloMo\model.py", line 197, in forward
        x = F.leaky_relu(self.conv1(x), negative_slope = 0.1)
      File "C:\Users\Amos\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 493, in __call__
        result = self.forward(*input, **kwargs)
      File "C:\Users\Amos\Anaconda3\lib\site-packages\torch\nn\modules\conv.py", line 338, in forward
        self.padding, self.dilation, self.groups)
    RuntimeError: cuDNN error: CUDNN_STATUS_EXECUTION_FAILED

    Desktop (please complete the following information):

    • OS: Windows
    • Device Used GPU
    • Setup Info [e.g. PyTorch 1.1, CUDA 9.0, Python 3.7]

    Additional context

    bug 
    opened by Fried-Penguin-Wings 8
  • ffmpeg problem

    Hi, Thanks for your great project.

    I ran into a problem when handling ffmpeg.
    I installed ffmpeg with conda:

    conda install -c https://conda.anaconda.org/menpo ffmpeg

    Then I ran the script with --ffmpeg `which ffmpeg`

    error: sh: 1: /home/xyliu/miniconda3/envs/DL/bin/ffmpeg/ffmpeg: not found

    Could you please tell me how to fix it? Thank you!

    opened by lxy5513 8
  • how to run train.ipynb?

    Hi Avinash Paliwal, I tried to open train.ipynb in Jupyter Notebook, but Jupyter reports: "Unreadable Notebook: F:\Super-SloMo-master\train.ipynb NotJSONError('Notebook does not appear to be JSON: '{\n "nbformat": 4,\n "nbformat_minor"...',)". Is this a bug, or is there another way to run train.ipynb?

    bug 
    opened by zthassassin 8
  • Is there anything wrong about the speed?

    Describe the bug

    To Reproduce: just run video_to_slomo.py. My ffmpeg is built from source, and my PyTorch is 1.0 with GPU.

    Observed behavior: the progress bar shows something like "[04:25<3:00:25, 37.33s/it]", i.e. it runs really slowly.

    Additional context: I don't think the PyTorch version caused the speed problem. How fast does this model run for you?

    opened by tingxueronghua 7
  • Convert .pytorch model to .ckpt?

    I was wondering if there is a .ckpt version available of the "SepConv - L_F" model, or perhaps if there is a way to convert the .pytorch model to a .ckpt one?

    opened by deama 7
  • Expected object of type torch.FloatTensor but found type torch.cuda.FloatTensor

    I am getting this error when running on GPU; it runs fine on CPU, though. Maybe this error was introduced by the temporary fix for #7, and I am not able to figure it out.

          handler_name    : VideoHandler
          encoder         : Lavc56.60.100 mjpeg
    Stream mapping:
      Stream #0:0 -> #0:0 (h264 (native) -> mjpeg (native))
    Press [q] to stop, [?] for help
    frame=   72 fps=0.0 q=2.0 Lsize=N/A time=00:00:03.00 bitrate=N/A    
    video:2098kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
      0%|                                                                          | 0/71 [00:00<?, ?it/s]/users/TeamVideoSummarization/.local/lib/python2.7/site-packages/torch/nn/functional.py:1820: UserWarning: nn.functional.upsample_bilinear is deprecated. Use nn.functional.upsample instead.
      warnings.warn("nn.functional.upsample_bilinear is deprecated. Use nn.functional.upsample instead.")
    cuda:0
    
    Traceback (most recent call last):
      File "video_to_slomo.py", line 218, in <module>
        main()
      File "video_to_slomo.py", line 185, in main
        g_I0_F_t_0 = flowBackWarp(I0, F_t_0)
      File "/users/TeamVideoSummarization/.local/lib/python2.7/site-packages/torch/nn/modules/module.py", line 491, in __call__
        result = self.forward(*input, **kwargs)
      File "/users/TeamVideoSummarization/shivansh/Super-SloMo/model.py", line 276, in forward
        x = self.gridX.unsqueeze(0).expand_as(u).float() + u
    RuntimeError: Expected object of type torch.FloatTensor but found type torch.cuda.FloatTensor for argument #3 'other'
    

    Pardon me if this is a silly question.
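
    A typical fix for this class of error, assuming the sampling grid is created on the CPU while the input tensor lives on the GPU, is to move the grid onto the input's device before the addition; a minimal standalone sketch (shapes are illustrative):

    import torch

    device = "cuda" if torch.cuda.is_available() else "cpu"
    u = torch.zeros(1, 8, 8, device=device)  # e.g. a flow component on the GPU
    gridX = torch.arange(8).repeat(8, 1)     # built on the CPU, like the grid in model.py

    # Mismatched devices raise the error above; aligning them fixes it.
    x = gridX.unsqueeze(0).expand_as(u).float().to(u.device) + u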

    opened by Shivanshmundra 6
  • RuntimeError: CUDA error: out of memory

    python video_to_slomo.py --ffmpeg D:\program_data\python\Super-SloMo\path\to\ffmpeg\bin --video D:\program_data\python\Super-SloMo\path\to\123.mp4 --sf 3 --checkpoint D:\program_data\python\Super-SloMo\path\to\checkpoint.ckpt --fps 72 --batch_size 1 --output D:\program_data\python\Super-SloMo\path\to\output.mp4

    D:\program_data\python\Super-SloMo\path\to\ffmpeg\bin\ffmpeg -i D:\program_data\python\Super-SloMo\path\to\123.mp4 -vsync 0 -qscale:v 2 tmpSuperSloMo\input/%06d.jpg
    ffmpeg version N-91520-gbce4da85e8 Copyright (c) 2000-2018 the FFmpeg developers
      built with gcc 7.3.1 (GCC) 20180722
      configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-bzlib --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-amf --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth
      libavutil      56. 18.102 / 56. 18.102
      libavcodec     58. 21.106 / 58. 21.106
      libavformat    58. 17.101 / 58. 17.101
      libavdevice    58.  4.101 / 58.  4.101
      libavfilter     7. 26.100 /  7. 26.100
      libswscale      5.  2.100 /  5.  2.100
      libswresample   3.  2.100 /  3.  2.100
      libpostproc    55.  2.100 / 55.  2.100
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'D:\program_data\python\Super-SloMo\path\to\123.mp4':
      Metadata:
        major_brand     : mp42
        minor_version   : 0
        compatible_brands: mp42mp41isomM4A
        creation_time   : 2019-01-04T08:31:32.000000Z
        iTunSMPB        :  00000000 00000A40 000003AC 000000000003BA14 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
        encoder         : Nero AAC codec / 1.5.4.0
      Duration: 00:00:05.16, start: 0.000000, bitrate: 3156 kb/s
        Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv), 1920x1080, 3061 kb/s, 23.98 fps, 23.98 tbr, 24k tbn, 47.95 tbc (default)
        Metadata:
          creation_time   : 2019-01-04T08:31:32.000000Z
          handler_name    : L-SMASH Video Media Handler
          encoder         : AVC Coding
        Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 132 kb/s (default)
        Metadata:
          creation_time   : 2019-01-04T08:31:32.000000Z
          handler_name    : Sound Media Handler
    Stream mapping:
      Stream #0:0 -> #0:0 (h264 (native) -> mjpeg (native))
    Press [q] to stop, [?] for help
    [swscaler @ 00000194fd361f80] deprecated pixel format used, make sure you did set range correctly
    Output #0, image2, to 'tmpSuperSloMo\input/%06d.jpg':
      Metadata:
        major_brand     : mp42
        minor_version   : 0
        compatible_brands: mp42mp41isomM4A
        iTunSMPB        :  00000000 00000A40 000003AC 000000000003BA14 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
        encoder         : Lavf58.17.101
        Stream #0:0(und): Video: mjpeg, yuvj420p(pc), 1920x1080, q=2-31, 200 kb/s, 23.98 fps, 23.98 tbn, 23.98 tbc (default)
        Metadata:
          creation_time   : 2019-01-04T08:31:32.000000Z
          handler_name    : L-SMASH Video Media Handler
          encoder         : Lavc58.21.106 mjpeg
        Side data:
          cpb: bitrate max/min/avg: 0/0/200000 buffer size: 0 vbv_delay: -1
    frame=  122 fps=0.0 q=2.0 Lsize=N/A time=00:00:05.08 bitrate=N/A speed=6.92x
    video:28266kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
    Traceback (most recent call last):
      File "video_to_slomo.py", line 209, in <module>
        main()
      File "video_to_slomo.py", line 158, in main
        flowOut = flowComp(torch.cat((I0, I1), dim=1))
      File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\lib\site-packages\torch\nn\modules\module.py", line 477, in __call__
        result = self.forward(*input, **kwargs)
      File "D:\program_data\python\Super-SloMo\model.py", line 199, in forward
        s2 = self.down1(s1)
      File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\lib\site-packages\torch\nn\modules\module.py", line 477, in __call__
        result = self.forward(*input, **kwargs)
      File "D:\program_data\python\Super-SloMo\model.py", line 71, in forward
        x = F.leaky_relu(self.conv2(x), negative_slope = 0.1)
      File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\lib\site-packages\torch\nn\modules\module.py", line 477, in __call__
        result = self.forward(*input, **kwargs)
      File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\lib\site-packages\torch\nn\modules\conv.py", line 301, in forward
        self.padding, self.dilation, self.groups)
    RuntimeError: CUDA error: out of memory
    

    My GPU is a GTX 1060.

    opened by gaowanliang 6
  • Frames are not extracted for any input video

    Hey, when running video_to_slomo.py with the pretrained model, I am getting the following error:

    File "D:\COMMAS\Thesis\Super-SloMo-master\video_to_slomo.py", line 232, in main() File "D:\COMMAS\Thesis\Super-SloMo-master\video_to_slomo.py", line 152, in main videoFrames = dataloader.Video(root=extractionPath, transform=transform) File "D:\COMMAS\Thesis\Super-SloMo-master\dataloader.py", line 465, in init frame = _pil_loader(framesPath[0]) IndexError: list index out of range

    When I checked framesPath, which is tmpSuperSloMo/input, the folder was empty, so I assume the frames are not being extracted from the video.

    Does anybody have any idea to solve this issue?

    opened by UsaidHammad 0
  • Runtime Error (Traceback Attached)

    Thank you very much for this project. I love it, and I've been using it successfully since early January. However, in the past week or so, it has stopped working on every colab fork I've tried. Perhaps there has been an incompatible version update in a dependency somewhere?

    Describe the bug: after loading the dependencies, I input the file path to the uploaded video I want to interpolate. On execution, I get the attached error, which is a standard error thrown by the process.communicate() call.

    To Reproduce: the behavior happens every time. Colab is not running out of memory, as I use Colab Pro+ and I have taken the images down to 100x100 pixels and 50 total frames, so the file is small enough to rule out RAM as the issue.

    1. Visit Colab at: https://colab.research.google.com/github/MSFTserver/AI-Colab-Notebooks/blob/main/Super_SloMo.ipynb#scrollTo=Wz4BaariVdh5
    2. Run cells in "Download Super-Slomo Repo & Model"
    3. Run cells in "Run this block and Upload Video by clicking the Button that pops up below this codeblock! Wait till it loads the video and once it's done run the next block"
    4. Navigate in dialog box to file and upload file to server.
    5. No need to enter file path in this particular notebook as the upload itself conveys the file path.
    6. Run the main code.
    7. Error/Abnormal behavior

    Expected behavior: up until this week, when the error started, the Colab output two files: first the .mkv file that the program encodes natively, and then the .mp4 that results from conversion.

    Interpolated results/error output: [image: Super Slow Mo Error]

    Desktop (please complete the following information):

    • OS: Windows 10 Pro, MacOS Monterey 12.2.1
    • Device Used: Colab Pro+ GPU
    • Setup Info: all dependencies needed for this notebook are pulled in Step 1 above and noted in the attached error report/traceback.

    Additional context: none, other than that I have been having the same issue in every notebook I am aware of that forks off the main repository, not just the one linked here.

    Thank you for your help, I really miss this utility!

    Cheers

    bug 
    opened by Craig-Leko 23
  • PackagesNotFoundError: torchvision-cpu==0.2.0

    I tried to follow your installation instructions: I downloaded and installed Miniconda3-py38_4.10.3-Windows-x86_64.exe (is this not correct?) and then executed:

    conda install pytorch-cpu=0.4.1 torchvision-cpu==0.2.0 cpuonly -c pytorch

    as instructed. But I get the error message:

    Collecting package metadata (current_repodata.json): done
    Solving environment: failed with initial frozen solve. Retrying with flexible solve.
    Collecting package metadata (repodata.json): done
    Solving environment: failed with initial frozen solve. Retrying with flexible solve.
    
    PackagesNotFoundError: The following packages are not available from current channels:
    
      - torchvision-cpu==0.2.0
    
    Current channels:
    
      - https://conda.anaconda.org/pytorch/win-64
      - https://conda.anaconda.org/pytorch/noarch
      - https://repo.anaconda.com/pkgs/main/win-64
      - https://repo.anaconda.com/pkgs/main/noarch
      - https://repo.anaconda.com/pkgs/r/win-64
      - https://repo.anaconda.com/pkgs/r/noarch
      - https://repo.anaconda.com/pkgs/msys2/win-64
      - https://repo.anaconda.com/pkgs/msys2/noarch
    
    To search for alternate channels that may provide the conda package you're
    looking for, navigate to
    
        https://anaconda.org
    
    and use the search bar at the top of the page.
    
    bug 
    opened by jmuff44xv 1
  • Error converting file:D:\Coding\JAVASCRIPT\recless\out\4fpsRecording.mkv. Exiting.

    To Reproduce, enter the command:

    python video_to_slomo.py --ffmpeg D:\Coding\JAVASCRIPT\recless\ffmpeg\windows\ffmpeg.exe --video D:\Coding\JAVASCRIPT\recless\out\4fpsRecording.mkv --sf 15 --checkpoint D:\Coding\JAVASCRIPT\recless\superslowmo\SuperSloMo.ckpt --fps 60 --output D:\Coding\JAVASCRIPT\recless\out\60fpsRecording.mkv

    Context: As you can see I am trying to boost the fps of a 4fps video (in .mkv format) to 60fps (also in .mkv format)

    Desktop (please complete the following information):

    • OS: Windows
    • Device Used: CPU
    • Setup Info: latest stable versions
    bug 
    opened by Osiris-Team 0