Automagically synchronize subtitles with video.

Overview

FFsubsync

Language-agnostic automatic synchronization of subtitles with video, so that subtitles are aligned to the correct starting point within the video.

Turn this (subtitles out of sync with the video) into this (subtitles aligned with the video).

Helping Development

At the request of some, you can now help cover my coffee expenses using the Github Sponsors button at the top, or using the below Paypal Donate button:

Donate

Install

First, make sure ffmpeg is installed. On macOS, this looks like:

brew install ffmpeg

(Windows users: make sure ffmpeg is on your path and can be referenced from the command line!)

Next, grab the script. It should work with both Python 2 and Python 3:

pip install ffsubsync

If you want to live dangerously, you can grab the latest version as follows:

pip install git+https://github.com/smacke/ffsubsync@latest

Usage

ffs, subsync and ffsubsync all work as entrypoints:

ffs video.mp4 -i unsynchronized.srt -o synchronized.srt

There may be occasions where you have a correctly synchronized srt file in a language you are unfamiliar with, as well as an unsynchronized srt file in your native language. In this case, you can use the correctly synchronized srt file directly as a reference for synchronization, instead of using the video as the reference:

ffsubsync reference.srt -i unsynchronized.srt -o synchronized.srt

ffsubsync uses the file extension to decide whether to perform voice activity detection on the audio or to directly extract speech from an srt file.

Sync Issues

If the sync fails, the following recourses are available:

  • Try to sync assuming identical video / subtitle framerates by passing --no-fix-framerate;
  • Try passing --gss to use golden-section search to find the optimal ratio between video and subtitle framerates (by default, only a few common ratios are evaluated);
  • Try a value of --max-offset-seconds greater than the default of 60, in the event that the subtitles are out of sync by more than 60 seconds (unlikely in practice, but possible);
  • Try --vad=auditok since auditok can sometimes work better in the case of low-quality audio than WebRTC's VAD. Auditok does not specifically detect voice, but instead detects all audio; this property can yield suboptimal syncing behavior when a proper VAD can work well, but can be effective in some cases.
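
These options can also be combined. For example (an illustrative invocation; the flag values are arbitrary and should be adapted to your case):

ffs video.mp4 -i unsynchronized.srt -o synchronized.srt --gss --max-offset-seconds 180 --vad=auditok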

If the sync still fails, consider trying one of the following similar tools:

  • sc0ty/subsync: does speech-to-text and looks for matching word morphemes
  • kaegi/alass: rust-based subtitle synchronizer with a fancy dynamic programming algorithm
  • tympanix/subsync: neural net based approach that optimizes directly for alignment when performing speech detection
  • oseiskar/autosubsync: performs speech detection with bespoke spectrogram + logistic regression
  • pums974/srtsync: similar approach to ffsubsync (WebRTC's VAD + FFT to maximize signal cross correlation)

Speed

ffsubsync usually finishes in 20 to 30 seconds, depending on the length of the video. The most expensive step is actually extraction of raw audio. If you already have a correctly synchronized "reference" srt file (in which case audio extraction can be skipped), ffsubsync typically runs in less than a second.

How It Works

The synchronization algorithm operates in 3 steps:

  1. Discretize both the video file's audio stream and the subtitles into 10ms windows.
  2. For each 10ms window, determine whether that window contains speech. This is trivial to do for subtitles (we just determine whether any subtitle is "on" during each time window); for the audio stream, use an off-the-shelf voice activity detector (VAD) like the one built into webrtc.
  3. Now we have two binary strings: one for the subtitles, and one for the video. Try to align these strings by matching 0's with 0's and 1's with 1's. We score these alignments as (# video 1's matched w/ subtitle 1's) - (# video 1's matched with subtitle 0's).
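
As a rough sketch of steps 1 and 2 (not the project's actual code), the snippet below uses the srt and py-webrtcvad libraries mentioned in the Credits section to turn a subtitle file and a raw audio buffer into per-window binary speech strings. The function names are illustrative, and the audio is assumed to be 16 kHz, 16-bit mono PCM (e.g., as extracted by ffmpeg):

    import srt
    import webrtcvad

    WINDOW_MS = 10

    def subtitle_windows(srt_text, total_ms):
        """1 if any subtitle is 'on' during a 10ms window, else 0."""
        on = [0] * (total_ms // WINDOW_MS)
        for sub in srt.parse(srt_text):
            start = int(sub.start.total_seconds() * 1000) // WINDOW_MS
            end = int(sub.end.total_seconds() * 1000) // WINDOW_MS
            for i in range(start, min(end, len(on))):
                on[i] = 1
        return on

    def audio_windows(pcm, sample_rate=16000):
        """1 if the VAD detects speech in a 10ms frame, else 0."""
        vad = webrtcvad.Vad(3)  # aggressiveness from 0 (lenient) to 3 (strict)
        frame_bytes = sample_rate * WINDOW_MS // 1000 * 2  # 16-bit mono samples
        return [
            1 if vad.is_speech(pcm[i:i + frame_bytes], sample_rate) else 0
            for i in range(0, len(pcm) - frame_bytes + 1, frame_bytes)
        ]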

The best-scoring alignment from step 3 determines how to offset the subtitles in time so that they are properly synced with the video. Because the binary strings are fairly long (millions of digits for video longer than an hour), the naive O(n^2) strategy for scoring all alignments is unacceptable. Instead, we use the fact that "scoring all alignments" is a convolution operation and can be implemented with the Fast Fourier Transform (FFT), bringing the complexity down to O(n log n).
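
To illustrate the FFT trick in step 3 (again only a sketch, not the project's implementation; the function name and arguments are made up), scoring every possible offset reduces to a single FFT-based correlation over the two binary strings:

    import numpy as np

    def best_offset_seconds(video_speech, sub_speech, window_ms=10):
        """Offset (in seconds) to add to subtitle times, chosen to maximize
        (# video 1's matched with subtitle 1's) - (# video 1's matched with subtitle 0's)."""
        v = np.asarray(video_speech, dtype=float)
        # Map subtitle speech/silence to +1/-1 so a plain correlation computes the score above.
        s = 2.0 * np.asarray(sub_speech, dtype=float) - 1.0

        # Scoring all shifts is a correlation: convolve v with the reversed subtitle
        # signal via FFT, which is O(n log n) instead of the naive O(n^2).
        n = len(v) + len(s) - 1
        size = 1 << (n - 1).bit_length()  # round up to a power of two for the FFT
        corr = np.fft.irfft(np.fft.rfft(v, size) * np.fft.rfft(s[::-1], size), size)[:n]

        # Index i corresponds to shifting the subtitles by i - (len(s) - 1) windows.
        best_shift = int(np.argmax(corr)) - (len(s) - 1)
        return best_shift * window_ms / 1000.0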

Limitations

In most cases, inconsistencies between video and subtitles occur when starting or ending segments present in video are not present in subtitles, or vice versa. This can occur, for example, when a TV episode recap in the subtitles was pruned from video. FFsubsync typically works well in these cases, and in my experience this covers >95% of use cases. Handling breaks and splits outside of the beginning and ending segments is left to future work (see below).

Future Work

Besides general stability and usability improvements, one line of work aims to extend the synchronization algorithm to handle splits / breaks in the middle of video not present in subtitles (or vice versa). Developing a robust solution will take some time (assuming one is possible). See #10 for more details.

History

The implementation for this project was started during HackIllinois 2019, for which it received an Honorable Mention (ranked in the top 5 projects, excluding projects that won company-specific prizes).

Credits

This project would not be possible without the following libraries:

  • ffmpeg and the ffmpeg-python wrapper, for extracting raw audio from video
  • VAD from webrtc and the py-webrtcvad wrapper, for speech detection
  • srt for operating on SRT files
  • numpy and, indirectly, FFTPACK, which powers the FFT-based algorithm for fast scoring of alignments between subtitles (or subtitles and video)
  • Other excellent Python libraries like argparse, rich, and tqdm, not related to the core functionality, but which enable much better experiences for developers and users.

License

Code in this project is MIT licensed.

Comments
  • ffmpeg._run.Error: ffprobe error

    While trying to run the program in Arch Linux 64 bit I see this error :

    INFO:subsync.subsync:computing alignments...
    INFO:subsync.speech_transformers:extracting speech segments from subtitles...
    Traceback (most recent call last):
      File "/home/nirjhor/.local/bin/subsync", line 11, in <module>
        load_entry_point('subsync==0.2.10', 'console_scripts', 'subsync')()
      File "/home/nirjhor/.local/lib/python3.7/site-packages/subsync/subsync.py", line 74, in main
        reference_pipe.fit_transform(args.reference)
      File "/home/nirjhor/.local/lib/python3.7/site-packages/sklearn/pipeline.py", line 393, in fit_transform
        return last_step.fit_transform(Xt, y, **fit_params)
      File "/home/nirjhor/.local/lib/python3.7/site-packages/sklearn/base.py", line 553, in fit_transform
        return self.fit(X, **fit_params).transform(X)
      File "/home/nirjhor/.local/lib/python3.7/site-packages/subsync/speech_transformers.py", line 54, in fit
        total_duration = float(ffmpeg.probe(fname)['format']['duration']) - self.start_seconds
      File "/home/nirjhor/.local/lib/python3.7/site-packages/ffmpeg/_probe.py", line 23, in probe
        raise Error('ffprobe', out, err)
    ffmpeg._run.Error: ffprobe error (see stderr output for detail)
    
    

    What I tried:

    • Installing as a user with pip

    • Reinstalling subsync

    • Trying with --encoding utf-8 and utf-16

    What I expect to happen:

    The program should run

    What happens:

    I see the above-mentioned error

    bug 
    opened by fakeid30 21
  • There has been an error in the syncing process

    FAILED.txt

    Additional context: Using the script rsubsync.py to sync all subtitles in a folder. Most files get synced properly, but every few episodes an error seems to pop up (screenshot attached).

    Any ideas? Is this just because I'm syncing too many subtitles?

    out-of-sync 
    opened by maxtheking 13
  • About Python2 and Windows 10

    I tried it on my MBP with Python 2 first and it failed, but it worked fine with Python 3, and I used it on a whole movie file (more than 1 GB in size and 130 minutes long). Also, I tried it on Windows but couldn't install it. So, will it ever support Windows 10?

    bug 
    opened by Suleman-Elahi 11
  • Video file appears to lack subtitle stream

    Hello,

    I am using the linuxserver Bazarr docker image. I have subtitle synchronization turned on. No matter what I do, it gives me this error, even when using ffsubsync in custom processing. I've been on the Discord servers of linuxserver and Bazarr; this is an ffsubsync issue, so that's why I'm posting it here.

    I transcode my files with tdarr to remove embedded subtitles from the containers. So the mkv files contain no subtitles whatsoever. I just want to sync my external .srt files downloaded by bazarr.

    Does anyone know what this error means and how to fix it?

    Thanks in advance.

    out-of-sync 
    opened by Shawn9347 10
  • use cchardet to autodetect text encodings

    ref: https://github.com/PyYoshi/cChardet

    While we're at it, it probably makes sense to change the default output encoding to utf-8, since this seems to be what players like VLC expect.

    enhancement 
    opened by smacke 10
  • TypeError: '>' not supported between instances of 'float' and 'NoneType'

    Environment (please complete the following information):

    • OS: Windows Server 2019 Datacenter (with desktop experience)
    • python version: Python 3.8.5
    • subsync version: Embedded in latest version of Bazarr (0.9.0.2)

    Describe the bug: After a sub download in Bazarr, an error is thrown when calling the synchronization service. It appears to happen for all requests.

    To Reproduce: Download a subtitle in Bazarr with the synchronize subtitle option selected.

    Expected behavior: A successful run of the subsync process.

    Output:

    BAZARR an exception occurs during the synchronization process for this subtitles \\nas.domain.net\nas\media\TV\The Dukes of Hazzard\The Dukes of Hazzard Season 1\The Dukes of Hazzard - S01E08 - The Big Heist WEBRip-1080p.en.srt
    Traceback (most recent call last):
      File "p:\apps\bazarr\bazarr\subsyncer.py", line 46, in sync
        result = run(self.args)
      File "p:\apps\bazarr\bazarr\../libs\ffsubsync\ffsubsync.py", line 275, in run
        reference_pipe.fit(args.reference)
      File "p:\apps\bazarr\bazarr\../libs\ffsubsync\sklearn_shim.py", line 214, in fit
        self._final_estimator.fit(Xt, y, **fit_params)
      File "p:\apps\bazarr\bazarr\../libs\ffsubsync\speech_transformers.py", line 236, in fit
        if simple_progress + newstuff > total_duration:
    TypeError: '>' not supported between instances of 'float' and 'NoneType'

    Additional context: Reported the issue in the Bazarr Discord; after speaking with morpheus65535, he recommended opening an issue on this repository for further investigation.

    bug 
    opened by acomick 9
  • Specify start time for processing?

    It seems to me that if there's an introduction sequence to the video (theme song, etc), subsync keys off of audio for that and moves the sub timing too early.
    Feature request here: Add an option to have subsync start its analysis at/after a specific timestamp?

    enhancement 
    opened by schmitmd 9
  • future-annotations broken

    On both Python 2 and 3:

    Could not find a version that satisfies the requirement future-annotations (from ffsubsync) (from versions: )
    No matching distribution found for future-annotations (from ffsubsync)

    bug 
    opened by francogrex 7
  • Unable to detect speech. Perhaps try specifying a different stream

    Environment (please complete the following information):

    • OS: Windows 10 20H2 x86
    • python version 3.8.7 (also tested with python 2.7)
    • ffsubsync bundled with Bazarr Version: 0.9.0.7
    • Media files hosted on NAS via SMB

    Describe the bug: Running sync from within Bazarr on a win32 machine, any attempt to sync a file exits with an error:

    Traceback (most recent call last):
      File "c:\program files\bazarr\bazarr\subsyncer.py", line 56, in sync
        result = run(self.args)
      File "c:\program files\bazarr\bazarr\../libs\ffsubsync\ffsubsync.py", line 283, in run
        reference_pipe.fit(args.reference)
      File "c:\program files\bazarr\bazarr\../libs\ffsubsync\sklearn_shim.py", line 214, in fit
        self._final_estimator.fit(Xt, y, **fit_params)
      File "c:\program files\bazarr\bazarr\../libs\ffsubsync\speech_transformers.py", line 250, in fit
        raise ValueError(

    ValueError: Unable to detect speech. Perhaps try specifying a different stream / track, or a different vad.

    To Reproduce: Syncing any file causes the above error. Tried with AC3, DTS, and AAC audio.

    Expected behavior: The file syncs without error.

    bug 
    opened by andonevris 6
  • I get File not found (winerror2)

    C:\>ffs 1.mkv -i 1.smi -o sync.smi
    [02:15:16] INFO     extracting speech segments from reference '1.mkv'...   ffsubsync.py:282
               INFO     Checking video for subtitles stream...                 speech_transformers.py:172
               INFO     [WinError 2] 지정된 파일을 찾을 수 없습니다             speech_transformers.py:177
               WARNING  [WinError 2] 지정된 파일을 찾을 수 없습니다             speech_transformers.py:183
    Traceback (most recent call last):
      File "c:\users\pgh26\appdata\local\programs\python\python38-32\lib\runpy.py", line 192, in _run_module_as_main
        return _run_code(code, main_globals, None,
      File "c:\users\pgh26\appdata\local\programs\python\python38-32\lib\runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "C:\Users\pgh26\AppData\Local\Programs\Python\Python38-32\Scripts\ffs.exe\__main__.py", line 9, in <module>
      File "c:\users\pgh26\appdata\local\programs\python\python38-32\lib\site-packages\ffsubsync\ffsubsync.py", line 387, in main
        return run(args)['retval']
      File "c:\users\pgh26\appdata\local\programs\python\python38-32\lib\site-packages\ffsubsync\ffsubsync.py", line 283, in run
        reference_pipe.fit(args.reference)
      File "c:\users\pgh26\appdata\local\programs\python\python38-32\lib\site-packages\ffsubsync\sklearn_shim.py", line 214, in fit
        self._final_estimator.fit(Xt, y, **fit_params)
      File "c:\users\pgh26\appdata\local\programs\python\python38-32\lib\site-packages\ffsubsync\speech_transformers.py", line 211, in fit
        process = subprocess.Popen(ffmpeg_args, **subprocess_args(include_stdout=True))
      File "c:\users\pgh26\appdata\local\programs\python\python38-32\lib\subprocess.py", line 854, in __init__
        self._execute_child(args, executable, preexec_fn, close_fds,
      File "c:\users\pgh26\appdata\local\programs\python\python38-32\lib\subprocess.py", line 1307, in _execute_child
        hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
    FileNotFoundError: [WinError 2] 지정된 파일을 찾을 수 없습니다 (the specified file could not be found)

    C:>

    I worked on drive C and I get a File Not found error

    The path is correct and the file exists

    I did it on Windows 10, Python 3.8, and I get the same result when I do it on another Windows 10 computer

    I want to know how to solve it..

    bug 
    opened by pgh268400 6
  • raise SRTParseError(expected_start, actual_start, unmatched_content)

    Using Python 3.6.9 and the latest version of ffsubsync. Subtitles were downloaded using subliminal. They are in Polish, which usually means UTF-8. It works sometimes, but it fails sometimes too.

    One example:

    Traceback (most recent call last):
      File "/root/subsync/bin/ffsubsync", line 8, in <module>
        sys.exit(main())
      File "/root/subsync/lib/python3.6/site-packages/ffsubsync/ffsubsync.py", line 261, in main
        return run(args)
      File "/root/subsync/lib/python3.6/site-packages/ffsubsync/ffsubsync.py", line 117, in run
        for scale_factor in framerate_ratios
      File "/root/subsync/lib/python3.6/site-packages/ffsubsync/ffsubsync.py", line 117, in <listcomp>
        for scale_factor in framerate_ratios
      File "/root/subsync/lib/python3.6/site-packages/ffsubsync/sklearn_shim.py", line 212, in fit
        Xt, fit_params = self._fit(X, y, **fit_params)
      File "/root/subsync/lib/python3.6/site-packages/ffsubsync/sklearn_shim.py", line 177, in _fit
        **fit_params_steps[name])
      File "/root/subsync/lib/python3.6/site-packages/ffsubsync/sklearn_shim.py", line 368, in _fit_transform_one
        res = transformer.fit_transform(X, y, **fit_params)
      File "/root/subsync/lib/python3.6/site-packages/ffsubsync/sklearn_shim.py", line 40, in fit_transform
        return self.fit(X, **fit_params).transform(X)
      File "/root/subsync/lib/python3.6/site-packages/ffsubsync/subtitle_parser.py", line 107, in fit
        raise exc
      File "/root/subsync/lib/python3.6/site-packages/ffsubsync/subtitle_parser.py", line 96, in fit
        start_seconds=self.start_seconds),
      File "/root/subsync/lib/python3.6/site-packages/ffsubsync/subtitle_parser.py", line 44, in _preprocess_subs
        next_sub = GenericSubtitle.wrap_inner_subtitle(next(subs))
      File "/root/subsync/lib/python3.6/site-packages/srt.py", line 362, in parse
        _raise_if_not_contiguous(srt, expected_start, len(srt))
      File "/root/subsync/lib/python3.6/site-packages/srt.py", line 387, in _raise_if_not_contiguous
        raise SRTParseError(expected_start, actual_start, unmatched_content)
    srt.SRTParseError: Expected contiguous start of match or end of input at char 0, but started at char 60576

    Any idea how it can be fixed?

    bug 
    opened by sevospl 6
  • cchardet no longer updated

    cchardet, a dependency of ffsubsync, is no longer supported and will not build correctly on Python 3.11 on Windows 10. See: https://github.com/PyYoshi/cChardet/issues/77

    charset_normalizer seems like a decent drop-in replacement, as others have suggested.

    bug 
    opened by vevv 5
  • Framerate was not specified and cannot be read from the MicroDVD file error when used on a .sub file

    Environment (please complete the following information):

    • OS: Windows 10 WSL Debian
    • python version 3.9.2
    • subsync version 0.4.20

    Describe the bug: I get a "Framerate was not specified and cannot be read from the MicroDVD file." error when trying to use ffsubsync on a .sub file, even though I used --frame-rate FRAMERATE to specify the framerate.

    To Reproduce: What I have tried:

    ffs --frame-rate 31250 "video.mkv" -i "unsync.sub" -o "sync.sub"
    ffs --frame-rate 31250 --skip-infer-framerate-ratio "video.mkv" -i "unsync.sub" -o "sync.sub"
    ffs --frame-rate 31250 "reference.srt" -i "unsync.sub" -o "sync.sub"
    
    

    Expected behavior: The framerate should be read from the command-line argument and the program should carry on.

    Output

    [13:16:58] INFO     extracting speech segments from reference 'video.mkv'...
               INFO     Checking video for subtitles stream...                                              speech_transformers.py:259
               INFO     Video file appears to lack subtitle stream                                          speech_transformers.py:264
    100%|██████████████████████████████████████████████████████████████████████████████████▉| 5460.0/5460.051 [01:01<00:00, 88.96it/s]
    [13:18:00] INFO     ...done                                                                                       ffsubsync.py:444
               INFO     extracting speech segments from subtitles file(s)                                             ffsubsync.py:134
                        ['unsync.sub']...
               INFO     detected encoding: WINDOWS-1252                                                          subtitle_parser.py:85
               ERROR    Framerate was not specified and cannot be read from the MicroDVD file.                        ffsubsync.py:223
    
    bug 
    opened by Nicryc 0
  • A better VAD than webrtc

    I saw this on hackernews it seems to be better at distinguishing noise and voice

    https://thegradient.pub/one-voice-detector-to-rule-them-all/

    https://github.com/snakers4/silero-vad

    opened by Dnkhatri 2
  • Does ffsubsync support auto-adjusting subtitle duration?

    Subtitles from OpenSubtitles normally have correct text, but their timestamps are often messed up, partly because Aegisub, used by most fansubbers, is not user-friendly for timestamp adjustment.

    Does ffsubsync support automatically cropping/extending a subtitle entry's display duration based on data from speech detection?

    opened by cyzs233 1
  • output subtitles still out of sync

    Tarball, generated via debug option in Bazarr: S01E02.mp4.2022-01-14-16-26-23.tar.gz

    Additional context: Attempted to sync these subtitles (in Bazarr). It seems strange that it didn't work, because the video starts with a "Previously on..." segment that is clearly out of sync with the subs, which seem very easy to synchronize. Either way, good job with this library; I also had Spanish subs and they synchronized perfectly well!

    out-of-sync 
    opened by Tau5 0
Releases (0.4.22)
  • 0.4.22(Dec 31, 2022)

    This release includes miscellaneous sync improvements, compatibility fixes, and other bugfixes. It also adds functionality for using a new algorithm, golden-section search, to infer the framerate ratio between the reference and the input subtitles. To use it, pass the --gss flag; it may help in cases where subtitles are still out of sync because the scaling differs from the predefined ratios that ffsubsync tries.

    Source code(tar.gz)
    Source code(zip)
  • 0.4.11(Jan 30, 2021)

    • Misc sync improvements:
      • Have webrtcvad use '0' as the non speech label instead of 0.5;
      • Allow the vad non speech label to be specified via the --non-speech-label command line parameter;
      • Don't try to infer framerate ratio based on length between first and last speech frames for non-subtitle speech detection;
    Source code(tar.gz)
    Source code(zip)
  • 0.4.10(Jan 18, 2021)

    • Lots of improvements from PRs submitted by @alucryd (big thanks!):
      • Retain ASS styles;
      • Support syncing several subs against the same ref via --overwrite-input flag;
      • Add --apply-offset-seconds postprocess option to shift alignment by prespecified amount;
    • Filter out metadata in subtitles when extracting speech;
    • Add experimental --golden-section-search over framerate ratio (off by default);
    • Try to improve sync by inferring framerate ratio based on relative duration of synced vs unsynced;
    Source code(tar.gz)
    Source code(zip)
  • 0.4.9(Jan 18, 2021)

    • Make default offset seconds 60 and enforce during alignment as opposed to throwing away alignments with > max_offset_seconds;
    • Add experimental section for using golden section search to find framerate ratio;
    • Restore ability to read stdin and write stdout after buggy permissions check;
    • Exceptions that occur during syncing were mistakenly suppressed; this is now fixed;
    Source code(tar.gz)
    Source code(zip)
  • 0.4.8(Sep 23, 2020)

    • Allow MicroDVD input format;
    • Use output extension to determine output format;
    • Bugfix for writing subs to stdout;
    • Misc bugfixes and stability improvements;
    • Use webrtcvad-wheels on Windows to eliminate dependency on compiler (i.e., better Windows support);
    Source code(tar.gz)
    Source code(zip)
  • 0.4.4(Jun 8, 2020)

  • v0.3.4(Mar 20, 2020)

  • v0.3.2(Mar 13, 2020)

  • v0.3.1(Mar 12, 2020)

  • v0.3.0(Mar 12, 2020)

    • Better detection of text file encodings;
    • ASS / SSA functionality (but currently untested);
    • Allow serialize speech with --serialize-speech flag;
    • Convenient --make-test-case flag to create test cases when filing sync-related bugs;
    • Use utf-8 as default output encoding (instead of using same encoding as input);
    • More robust test framework (integration tests!);
    Source code(tar.gz)
    Source code(zip)
Owner
Stephen Macke