Using VapourSynth with super-resolution models and speeding them up with TensorRT.

Overview

VSGAN-tensorrt-docker

Using image super-resolution models with VapourSynth and speeding them up with TensorRT, by combining NVIDIA/Torch-TensorRT with rlaphoenix/VSGAN. This repo makes using tiling and ESRGAN models very easy. Models can be found on the wiki page. Further model architectures are planned to be added later.
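
The speedup comes from compiling the PyTorch model with Torch-TensorRT before the clip is filtered through VapourSynth. The repo does this internally; the following is only a rough, simplified sketch of the mechanism, not the repo's exact code. The tiny Conv2d stand-in model and the 480x480 tile shape are assumptions.

import torch
import torch_tensorrt

# stand-in for an ESRGAN-style network; the repo loads a real .pth model instead
model = torch.nn.Sequential(torch.nn.Conv2d(3, 3, 3, padding=1)).eval().cuda()

# compile to a TensorRT module for a fixed tile shape
trt_model = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input((1, 3, 480, 480), dtype=torch.float)],
    enabled_precisions={torch.float},  # use {torch.half} for fp16
)

# run one tile through the compiled module
with torch.no_grad():
    output = trt_model(torch.randn(1, 3, 480, 480, device="cuda"))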

Currently working:

  • ESRGAN
  • RealESRGAN (adjust the model load manually in inference.py; settings are currently not adjusted automatically)

Usage:

# install Docker, commands for Arch
yay -S docker nvidia-docker nvidia-container-toolkit
# put the Dockerfile into a directory and run the following inside that directory
docker build -t vsgan_tensorrt:latest .
# run with a mounted folder
docker run --privileged --gpus all -it --rm -v /home/Desktop/tensorrt:/workspace/tensorrt vsgan_tensorrt:latest
# you can use it in various ways, ffmpeg example
vspipe --y4m inference.py - | ffmpeg -i pipe: example.mkv

If Docker does not start, try this before using it:

# fixing docker errors
systemctl start docker
sudo chmod 666 /var/run/docker.sock

Windows is mostly similar, but the path needs to be changed slightly:

# example for C:\path
docker run --privileged --gpus all -it --rm -v //c/path:/workspace/tensorrt vsgan_tensorrt:latest

If you don't want to use Docker, VapourSynth install commands are here and a TensorRT example is here.

Set the input video path in inference.py and access videos through the mounted folder.
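
For orientation, a rough sketch of what such an inference.py can look like. The source path, the LWLibavSource loader, the import path and the color conversions are assumptions; the ESRGAN_inference call mirrors the one quoted in the comments further down.

import sys
sys.path.append("/workspace/tensorrt/")
import vapoursynth as vs
from src.esrgan import ESRGAN_inference

core = vs.core

# input video inside the mounted folder (path is an assumption)
clip = core.lsmas.LWLibavSource(source="/workspace/tensorrt/input.mkv")
# the model expects RGB input
clip = core.resize.Bicubic(clip, format=vs.RGBS, matrix_in_s="709")
# tiled ESRGAN upscaling; parameters taken from the traceback quoted below
clip = ESRGAN_inference(clip=clip, model_path="/workspace/RealESRGAN_x4plus_anime_6B.pth",
                        tile_x=480, tile_y=480, tile_pad=16, fp16=False, tta=False, tta_mode=1)
# back to YUV so the output can be piped into ffmpeg or mpv
clip = core.resize.Bicubic(clip, format=vs.YUV420P8, matrix_s="709")
clip.set_output()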

It is also possible to pipe the video directly into mpv, but you most likely won't be able to achieve realtime speed. Change the mounted folder path to your own video folder and use the mpv Dockerfile instead. If you use a very efficient model, it may be possible on a very good GPU. Only tested in Manjaro.

yay -S pulseaudio

# I am not sure if it is needed, but go into the PulseAudio settings, check "make PulseAudio network audio devices discoverable in the local network" and reboot

# start docker
docker run --rm -i -t \
    --network host \
    -e DISPLAY \
    -v /home/Schreibtisch/test/:/home/mpv/media \
    --ipc=host \
    --privileged \
    --gpus all \
    -e PULSE_COOKIE=/run/pulse/cookie \
    -v ~/.config/pulse/cookie:/run/pulse/cookie \
    -e PULSE_SERVER=unix:${XDG_RUNTIME_DIR}/pulse/native \
    -v ${XDG_RUNTIME_DIR}/pulse/native:${XDG_RUNTIME_DIR}/pulse/native \
    vsgan_tensorrt:latest
    
# run mpv
vspipe --y4m inference.py - | mpv -
Comments
  • Invalid data found when processing input

    Hey, when I start the inference.py script this happens:

    Can someone help me?

    
    > ffmpeg version N-62110-g4d45f5acbd-static https://johnvansickle.com/ffmpeg/  Copyright (c) 2000-2022 the FFmpeg developers
    >   built with gcc 8 (Debian 8.3.0-6)
    >   configuration: --enable-gpl --enable-version3 --enable-static --disable-debug --disable-ffplay --disable-indev=sndio --disable-outdev=sndio --cc=gcc --enable-fontconfig --enable-frei0r --enable-gnutls --enable-gmp --enable-libgme --enable-gray --enable-libaom --enable-libfribidi --enable-libass --enable-libvmaf --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librubberband --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libvorbis --enable-libopus --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libdav1d --enable-libxvid --enable-libzvbi --enable-libzimg
    >   libavutil      57. 26.100 / 57. 26.100
    >   libavcodec     59. 33.100 / 59. 33.100
    >   libavformat    59. 24.100 / 59. 24.100
    >   libavdevice    59.  6.100 / 59.  6.100
    >   libavfilter     8. 40.100 /  8. 40.100
    >   libswscale      6.  6.100 /  6.  6.100
    >   libswresample   4.  6.100 /  4.  6.100
    >   libpostproc    56.  5.100 / 56.  5.100
    > Information: Generating grammar tables from /usr/lib/python3.8/lib2to3/Grammar.txt
    > Information: Generating grammar tables from /usr/lib/python3.8/lib2to3/PatternGrammar.txt
    > Script evaluation failed:
    > Python exception: libtorch_cuda_cu.so: cannot open shared object file: No such file or directory
    > 
    > Traceback (most recent call last):
    >   File "src\cython\vapoursynth.pyx", line 2890, in vapoursynth._vpy_evaluate
    >   File "src\cython\vapoursynth.pyx", line 2891, in vapoursynth._vpy_evaluate
    >   File "inference.py", line 85, in <module>
    >     clip = ESRGAN_inference(clip=clip, model_path="/workspace/RealESRGAN_x4plus_anime_6B.pth", tile_x=480, tile_y=480, tile_pad=16, fp16=False, tta=False, tta_mode=1)
    >   File "/workspace/tensorrt/src/esrgan.py", line 680, in ESRGAN_inference
    >     import torch_tensorrt
    >   File "/usr/local/lib/python3.8/dist-packages/torch_tensorrt/__init__.py", line 11, in <module>
    >     from torch_tensorrt._compile import *
    >   File "/usr/local/lib/python3.8/dist-packages/torch_tensorrt/_compile.py", line 2, in <module>
    >     from torch_tensorrt import _enums
    >   File "/usr/local/lib/python3.8/dist-packages/torch_tensorrt/_enums.py", line 1, in <module>
    >     from torch_tensorrt._C import dtype, DeviceType, EngineCapability, TensorFormat
    > ImportError: libtorch_cuda_cu.so: cannot open shared object file: No such file or directory
    > 
    > pipe:: Invalid data found when processing input
    
    
    opened by NeoBurgerYT 10
  • Module not found 'scipy'

    I can't run my inference.py without getting this error message. Can someone direct me to where I can get the repo?

    File "/usr/local/lib/python3.8/dist-packages/mmedit/core/evaluation/metrics.py", line 7, in from scipy.ndimage import convolve ModuleNotFoundError: No module named 'scipy'

    pipe:: Invalid data found when processing input

    opened by terminatedkhla 8
  • Tutorial?

    Hi! This is amazing technology! I'm blown away. I'd love to contact you directly about how to use it in Colab; I'm quite confused by the process. I've tried running it, but I'm not sure I'm running it correctly. Thanks in advance!

    opened by AIManifest 6
  • Trying On A M1 Mac

    So I followed this tutorial: https://www.youtube.com/watch?v=B134jvhO8yk&t=0s. But when I run docker run --privileged --gpus all -it --rm -v /home/vsgan_path/:/workspace/tensorrt styler00dollar/vsgan_tensorrt:latest, it just gives me an error that it doesn't find the right amd64 or something, and I rage-quit and deleted it without seeing the full error. PLS HELP ME :(

    opened by Ghostkwebb 6
  • Crash when using RIFE ensemble models in vsmlrt

    I get this error

    vapoursynth.Error: operator (): expects 8 input planes
    

    from this

    import vapoursynth as vs
    from vapoursynth import core
    core = vs.core
    import vsmlrt
    
    clip = core.lsmas.LWLibavSource(source=r"R:\output.mkv",cache=1, prefer_hw=1)
    clip = core.resize.Bicubic(clip, matrix_in_s="709", transfer_in_s='709', format=vs.RGBS)
    clip = vsmlrt.RIFE(clip, multi=4, model=46, backend=vsmlrt.Backend.TRT(fp16=True), tilesize=[1920,1088])
    clip = core.std.AssumeFPS(clip=clip, fpsnum=60, fpsden=1)
    clip = core.resize.Bicubic(clip, format=vs.RGB24, matrix_in_s="709")
    clip.set_output()
    
    opened by banjaminicc 4
  • Support for AITemplate?

    There is something that came out recently and it looks promising in terms of performance/speed. Would it be possible to implement it for ESRGAN? https://github.com/facebookincubator/AITemplate

    opened by kodxana 4
  • CUDA out of Memory

    System Specs: Ryzen 9 5900HX, NVidia 3070 Mobile, Arch Linux (EndeavorOS) on Kernel 5.17.2

    Whenever I try to run a model that relies on CUDA, for example cugan, the program exits with

    Error: Failed to retrieve frame 0 with error: CUDA out of memory. Tried to allocate 148.00 MiB (GPU 0; 7.80 GiB total capacity; 5.53 GiB already allocated; 68.56 MiB free; 5.69 GiB reserved in total by PyTorch)

    and stops after having output 4 frames.

    However, TensorRT works fine for models that support it (like RealESRGAN for example).

    Edit: Running nvidia-smi while the command is executing reveals that vspipe is allocating GPU memory, but less than 2 GiB of VRAM, far from the 8 GiB my card has.

    opened by mmkzer0 4
  • No module named 'vsbasicvsrpp'

    Traceback (most recent call last):
      File "src\cython\vapoursynth.pyx", line 2832, in vapoursynth._vpy_evaluate
      File "src\cython\vapoursynth.pyx", line 2833, in vapoursynth._vpy_evaluate
      File "inference.py", line 12, in <module>
        from vsbasicvsrpp import BasicVSRPP
    ModuleNotFoundError: No module named 'vsbasicvsrpp'

    opened by xt851231 4
  • Google colab request?

    I recently stumbled upon this VSGAN-tensorrt-docker and found it incredible! Could anyone make a Google Colab notebook that features everything from this VSGAN-tensorrt-docker, so that we can experience the speed of TensorRT? Thanks in advance!

    opened by mikebilly 3
  • model conversion from onnx to trt

    @styler00dollar This is not an issue but a question: I read the scripts in inference.py and found that RealESRGAN 2x is loaded from a TRT engine file. Since the 2x model uses dynamic shapes as input, could you share any ideas on how to convert this model to TRT? Thanks!
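
    For reference, here is a general sketch (not from this thread) of building a TensorRT engine from an ONNX export with a dynamic-shape optimization profile, using the standard tensorrt Python API; the ONNX/engine file names, the input tensor name "input" and the shape ranges are assumptions:

    import tensorrt as trt

    logger = trt.Logger(trt.Logger.INFO)
    builder = trt.Builder(logger)
    network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)

    # parse the exported ONNX model
    with open("realesrgan_2x.onnx", "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(str(parser.get_error(0)))

    # dynamic input shapes are handled via an optimization profile (min/opt/max)
    profile = builder.create_optimization_profile()
    profile.set_shape("input", (1, 3, 64, 64), (1, 3, 720, 1280), (1, 3, 1080, 1920))

    config = builder.create_builder_config()
    config.add_optimization_profile(profile)
    config.set_flag(trt.BuilderFlag.FP16)  # optional fp16

    # build and save the serialized engine
    engine_bytes = builder.build_serialized_network(network, config)
    with open("realesrgan_2x.engine", "wb") as f:
        f.write(engine_bytes)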

    opened by deism 3
  • ESRGAN with full episode

    Hello,

    I'm trying to upscale MKV files of full episodes with ESRGAN. I tried using vspipe -c y4m inference.py - | ffmpeg -i pipe: example.mkv, and it seems to run up to the point where it starts to give an ETA. Once there, the time doesn't move and eventually it says it was killed.

    Can you give me some tips on how to make this work better? I'm not familiar with most of the tools I've been given.

    opened by Ultramonte 2
  • [SUGGESTION] per-scene processing

    Hi there, this project is awesome, so thanks for your voluntary work!

    Since GAN-based processing is quite a heavy computing task, it could be very useful to split it into multiple "segments" to allow parallel/scalable/collaborative/resumable instances.

    We suggest you check @master-of-zen's Av1an framework, which implements this.

    Hope that inspires.

    opened by forart 1