Cross-platform command-line AV1 / VP9 / HEVC / H264 encoding framework with per-scene quality encoding

Overview


Av1an

A cross-platform framework to streamline encoding


Discord server

Easy, Fast, Efficient and Feature Rich

An easy way to start using AV1 / HEVC / H264 / VP9 / VP8 encoding. AOM, RAV1E, SVT-AV1, SVT-VP9, VPX, x265, x264 are supported.

Example with default parameters:

av1an -i input

With your own parameters:

av1an -i input -v " --cpu-used=3 --end-usage=q --cq-level=30 --threads=8 " -w 10
--target-quality 95 -a "-c:a libopus -ac 2 -b:a 192k" -l my_log -o output

Usage

-i  --input             Input file(s), or Vapoursynth (.py,.vpy) script
                        (relative or absolute path)

-o  --output-file       Name/Path for the output file (Default: (input file name)_(encoder).mkv)
                        Output is `mkv` by default
                        Output extension can be set to: `mkv`, `webm`, `mp4`

-e  --encoder           Encoder to use
                        [default: aom] [possible values: aom, rav1e, vpx, svt-av1, x264, x265]

-v  --video-params      Encoder settings flags (if not set, default parameters are used)
                        Must be inside ' ' or " "

-p  --passes            Set number of passes for encoding
                        (Default: AOMENC: 2, rav1e: 1, SVT-AV1: 1, SVT-VP9: 1,
                        VPX: 2, x265: 1, x264: 1)

-w  --workers           Override number of workers.

-r  --resume            Resumes encoding.

--keep                  Doesn't delete temporary folders after encode has finished.

-q  --quiet             Do not print a progress bar to the terminal.

-l  --logging           Path to .log file (by default created in the temp folder)

--temp                  Set path for the temporary folder. [default: .hash]

-c  --concat            Concatenation method to use for splits. (Default: ffmpeg)
                        [possible values: ffmpeg, mkvmerge, ivf]
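
A hypothetical invocation combining several of the options above (the rav1e parameters shown are illustrative, not a recommendation):

av1an -i input.mkv -e rav1e -v " --speed 6 --quantizer 100 " -c mkvmerge -w 6 -o output.mkv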

FFmpeg options

-a  --audio-params      FFmpeg audio settings (Default: copy audio from source to output)
                        Example: -a '-c:a libopus -b:a  64k'

-f  --ffmpeg            FFmpeg video options.
                        Applied to each encoding segment individually.
                        (Warning: Cropping doesn't work with Target VMAF mode
                        without specifying it in --vmaf-filter)
                        Example:
                        -f " -vf scale=320:240 "

--pix-format            Set a custom pixel/bit format for piping
                        (Default: 'yuv420p10le')
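
A hypothetical example combining the FFmpeg-related options (the filter and audio settings are illustrative):

av1an -i input.mkv -f " -vf scale=1280:-2 " --pix-format yuv420p -a " -c:a libopus -b:a 128k " -o output.mkv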

Segmenting

--split-method          Method used for generating splits. (Default: av-scenedetect)
                        Options: `av-scenedetect`, `none`
                        `none` - skips scene detection.

-m  --chunk-method      Determines how chunks are created for encoding.
                        By default the best method is selected automatically.
                        [possible values: segment, select, ffms2, lsmash, hybrid]

-s  --scenes            File to save/read scenes.

-x  --extra-split       Maximum chunk length in frames; longer chunks are split [default: 240]

--min-scene-len         Specifies the minimum number of frames in each split.
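
A hypothetical example of the segmenting options (the scenes file name is illustrative; the lsmash chunk method assumes the corresponding plugin is installed):

av1an -i input.mkv -m lsmash -x 120 --min-scene-len 24 -s scenes.json -o output.mkv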

Target Quality

--target-quality        Quality value to target.
                        VMAF is used as the underlying metric.
                        When using this mode, the encoder must be in a quantizer/quality mode.

--target-quality-method Algorithm to use.
                        Options: per_shot

--min-q, --max-q        Minimum and maximum Q value limits.
                        If not set, the encoder's default range is used.

--vmaf                  Calculate VMAF after encoding is done and make a plot.

--vmaf-path             Custom path to libvmaf models.
                        example: --vmaf-path "vmaf_v0.6.1.pkl"
                        Recommended to place both files in encoding folder
                        (`vmaf_v0.6.1.pkl` and `vmaf_v0.6.1.pkl.model`)
                        (Required if VMAF calculation doesn't work by default)

--vmaf-res              Resolution for VMAF calculation.
                        [default: 1920x1080]

--probes                Number of probes for target quality. [default: 4]

--probe-slow            Use the provided video encoding parameters for VMAF probes.

--vmaf-filter           Filter used for VMAF calculation. The passed format is filter_complex.
                        So if a crop filter is used, e.g. ` -f " -vf crop=200:1000:0:0 "`,
                        `--vmaf-filter` must be: ` --vmaf-filter "crop=200:1000:0:0"`

--probing-rate          Sampling rate for VMAF probes; every Nth frame is used in the probe.
                        [default: 4]

--vmaf-threads          Limit the number of threads used for VMAF calculation.
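
A hypothetical Target Quality invocation (the encoder must be in a quantizer/quality mode, as noted above; the values are illustrative):

av1an -i input.mkv -e aom -v " --end-usage=q --cq-level=30 --cpu-used=4 " --target-quality 94 --min-q 20 --max-q 50 --probes 6 -o output.mkv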

Main Features

Splitting the video by scenes for parallel encoding, because AV1 encoders currently do not scale well with multithreading and encoding is limited to a small number of threads.

  • Vapoursynth script input support.
  • Speed up video encoding.
  • Target Quality mode: targets a final visual quality, with VMAF as the underlying metric.
  • Resuming encoding without loss of encoded progress.
  • Simple and clean console look.
  • Automatic detection of the number of workers the host can handle.
  • Both video and audio transcoding.

Install

Docker

Av1an can be run in a Docker container. If your video is in the current directory, use one of the following commands.

Linux

docker run --privileged -v "$(pwd):/videos" --user $(id -u):$(id -g) -it --rm masterofzen/av1an:latest -i S01E01.mkv {options}

Windows

docker run --privileged -v "${PWD}:/videos" -it --rm masterofzen/av1an:latest -i S01E01.mkv {options}

The Docker image can also be built locally with

docker build -t "av1an" .

To use a different directory, replace $(pwd) with that directory:

docker run --privileged -v "/c/Users/masterofzen/Videos":/videos --user $(id -u):$(id -g) -it --rm masterofzen/av1an:latest -i S01E01.mkv {options}

The --user flag is required on Linux to avoid permission issues with the Docker container not being able to write to the output location. If you run into permission issues, make sure your user has access to the folder you are using for encoding.

Docker tags

The Docker image has the following tags:

Tag         Description
latest      Latest stable Av1an release
master      Latest Av1an commit on the master branch
sha-#####   The commit referenced by the given hash
#.##        A specific stable Av1an release
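
To use a specific tag, pull it explicitly, for example:

docker pull masterofzen/av1an:master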

Support the developer

Bitcoin - 1GTRkvV4KdSaRyFDYTpZckPKQCoWbWkJV1

Issues
  • freetype

    Looks like freetype is required, but this is not mentioned in the readme :)

    opened by dprestegard 30
  • Use a different method for ffmpeg frame count

    This method uses ffprobe to count the number of packets (which is identical to the number of frames, but faster) in a video stream. This works with more video formats, including with --enable-keyframe-filtering=2 in aomenc. Performance should be similar or better than ffmpeg -copy.
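
    For reference, that kind of ffprobe packet-count invocation looks roughly like this (illustrative; the exact flags used in the PR may differ):

    ffprobe -v error -select_streams v:0 -count_packets -show_entries stream=nb_read_packets -of csv=p=0 input.mkv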

    Fixes #367

    opened by shssoichiro 26
  • Video chunks are assigned index in wrong order leading to out-of-order video parts

    Using the newest git master (06fb8c9eddd1ca9c86b16dac7bde1ef33e15de31) using ffmpeg to concat (the default) leads to a wrong order in the video. This of course leads to video async. Tested multiple source files, problem always appears, even with small files (~30s). Not a big difference, seems to be only 1 or 2 chunks at a time; I think one of the first gets sorted to the end.

    Using mkvmerge fixed it for the small files at least, restarting my 3h encode now to test if this fixed it aswell.

    opened by mxsrm 23
  • Chunk: 1Fatal: Specify stream dimensions with --width (-w)  and --height (-h)

    I cannot get any type of encode to work, because it either gives an error like "specify stream dimensions", or it suddenly stops the encode some point before finishing. I tried adding the --width and --height arguments through the -v parameter, and even though the behavior is different, there is still an issue where it stops encoding.

    Command: av1an -i input.mkv -enc aom -v "--cpu-used=3 --end-usage=q --cq-level=3 --width 1920 --height 1080" --target_quality 95 --min_q 20 --max_q 60 -o output.mkv

    However, the command also fails with just av1an -i input.mkv

    It seems like the first pass works, but there's always some type of issue for me with the second pass.

    Here's the command line output for the above command:

    100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████▊| 722/723 [00:05<00:00, 136.21frames/s]
    Queue: 6 Workers: 6 Passes: 2
    Params: --cpu-used=3 --end-usage=q --cq-level=3 --width 1920 --height 1080
      0%|                                                                                                                         | 0/723 [00:00<?, ?fr/s]:: Encoder encountered an error: 1
    :: Chunk: 5Fatal: Specify stream dimensions with --width (-w)  and --height (-h)
    :: Chunk #5 crashed with:
    :: Exception: <class 'Exception'>
     Error in processing pipe
    :: Restarting chunk
    
    :: Encoder encountered an error: 1
    :: Chunk: 0Fatal: Specify stream dimensions with --width (-w)  and --height (-h)
    :: Chunk #0 crashed with:
    :: Exception: <class 'Exception'>
     Error in processing pipe
    :: Restarting chunk
    
    :: Encoder encountered an error: 1
    :: Chunk: 2Fatal: Specify stream dimensions with --width (-w)  and --height (-h)
    :: Chunk #2 crashed with:
    :: Exception: <class 'Exception'>
     Error in processing pipe
    :: Restarting chunk
    
    :: Encoder encountered an error: 1
    :: Chunk: 4Fatal: Specify stream dimensions with --width (-w)  and --height (-h)
    :: Chunk #4 crashed with:
    :: Exception: <class 'Exception'>
     Error in processing pipe
    :: Restarting chunk
    
    :: Encoder encountered an error: 1
    :: Chunk: 5Fatal: Specify stream dimensions with --width (-w)  and --height (-h)
    :: Chunk #5 crashed with:
    :: Exception: <class 'Exception'>
     Error in processing pipe
    :: Restarting chunk
    
    :: Encoder encountered an error: 1
    :: Chunk: 5Fatal: Specify stream dimensions with --width (-w)  and --height (-h)
    :: Chunk #5 crashed with:
    :: Exception: <class 'Exception'>
     Error in processing pipe
    :: Restarting chunk
    
    ::FATAL::
    ::Chunk #5 failed more than 3 times, shutting down thread
    
    
    :: Encoder encountered an error: 1
    :: Chunk: 1Fatal: Specify stream dimensions with --width (-w)  and --height (-h)
    :: Chunk #1 crashed with:
    :: Exception: <class 'Exception'>
     Error in processing pipe
    :: Restarting chunk
    
    :: Encoder encountered an error: 1
    :: Chunk: 1
    Error: option width requires argument.
    Usage: aomenc <options> -o dst_filename src_filename
    Use --help to see the full list of options.
    :: Chunk #1 crashed with:
    :: Exception: <class 'Exception'>
     Error in processing encoding pipe
    :: Restarting chunk
    

    In another issue I read that it might work if you first encode a lossless version (with x264 crf 0) of the input and use that as the input to av1an, but I experience the same behavior either way.

    opened by redzic 22
  • Frames go missing

    Hi,

    I have noticed that some frames vanish in the encoding process. The source file and the split files had the exact same frame count, as tested with ffmpeg.
    After the encode, a little over 50 frames had vanished, also causing a 2-second audio desync towards the end. In which step do these frames go missing? aomenc never dropped frames for me, and obviously the splitting did neither. Could you please examine the situation?

    Thank you

    opened by utack 18
  • Encoding failed (using default settings)

    I'm trying a very simple execution to get familiar with this tool, but it fails to produce a valid output. This is on Windows, running in PowerShell, using Python 3.8.2.

    python C:\av1an\av1an.py -i .\ToS-4k-1920.mov -s scenes.csv -o defaults.mkv

    100%|███████████████████████████████████████████████████████████████████████| 17620/17620 [01:43<00:00, 170.12frames/s]
    Queue: 119 Workers: 8 Passes: 2
    Params: --threads=4 --cpu-used=6 --end-usage=q --cq-level=40
      0%|                                                                                        | 0/17620 [00:00<?, ?fr/s]Encoding failed, check validity of your encoding settings/commands and start again
    Encoding failed, check validity of your encoding settings/commands and start again
    Encoding failed, check validity of your encoding settings/commands and start again
    Encoding failed, check validity of your encoding settings/commands and start again
    Encoding failed, check validity of your encoding settings/commands and start again
      3%|██                                                                          | 466/17620 [02:41<7:22:17,  1.55s/fr]
    

    After blowing up like this, it does continue to encode (I have 8 aomenc processes running).

    What info can I provide to help investigate these errors?

    opened by dprestegard 16
  • Surprisingly slow behaviour for x265

    If I use x265 via ffmpeg with the same preset and qp, I can easily get 120fps transcodes, but with av1an, even with 11 workers, I only get 30 or so. I suspect this is due to overly-aggressive scene splitting (I end up with 136 scenes for about 180 minutes of footage). Is there a way I can determine what causes this?

    opened by kozross 14
  • --cpu-used doesn't change file quality, but does change encoding time

    How I encoded the files (I'm using fish as my shell):

    for c in (seq 0 8)
        mkdir "$c-8"
    av1an -i dota2-10.y4m -o "$c-8/$c-8.mkv" --vmaf -fmt yuv420p -v "--cpu-used=$c --end-usage=q --cq-level=20 --threads=4" -w 8 -log "$c-8/$c-8.log"
    end
    

    --cpu-used 8 and --cpu-used 0 are the same file, they only differ in the header, only in 42 bytes, but one takes 3 hours to encode, the other only 5 minutes.

    I encoded first 10 seconds of "DOTA2" from https://media.xiph.org/video/derf/

    ~ $ av1an --version
    Av1an version: 5.5-3
    ~ $ pikaur -Qo aomenc
    /usr/bin/aomenc is owned by aom 2.0.1-1
    

    I installed av1an from https://aur.archlinux.org/packages/python-av1an/

    opened by TheHardew 14
  • Av1an generates 2 (or more) extra frames on sources.

    I've seen this only on GeForce Experience generated videos.

    opened by RozeFound 14
  • [Feature Request] Vapoursynth Input Support

    A lot of the pieces to get this working are already there, with the biggest obstacle being PySceneDetect support (although I have an idea to solve that one that I want to try out). An obvious workaround would be to just use aomenc keyframes instead, but I much prefer the speed of PySceneDetect.

    There's a good chance that I'll implement this myself, as I'm working in a similar space already, but I figured I should open this issue to track support, and collect any pertinent conversation.

    opened by adworacz 13
  • ffmpeg::Error Stream not found

    I'm trying to convert the Sintel open movie to AV1 using av1an default settings.

    I am getting the following output for tag 0.2.0:

    Scene detection
      [00:02:10] [##################################################################] 100% 21312/21312 (162.78 fps, eta 0s)
    Queue: 138 Workers: 3 Passes: 2
    Params: --threads=8 --cpu-used=6 --end-usage=q --cq-level=30 --tile-columns=2 --tile-rows=1 --kf-max-dist=240 --kf-min-dist=12
    ⠋ [00:01:23] [#>-------------------------------------------------------------------]   2% 409/21312 (4.84 fps, eta 72m)
    thread '<unnamed>' panicked at 'called `Result::unwrap()` on an `Err` value: ffmpeg::Error(1381258232: Stream not found)', av1an-core/src/broker.rs:128:69
    ⠒ [00:02:27] [##>------------------------------------------------------------------]   3% 682/21312 (4.58 fps, eta 75m)
    thread '<unnamed>' panicked at 'called `Result::unwrap()` on an `Err` value: ffmpeg::Error(1381258232: Stream not found)⠠ [00:02:30] [##>------------------------------------------------------------------]   3% 713/21312 (4.72 fps, eta 73m)
    thread '<unnamed>' panicked at 'called `Result::unwrap()` on an `Err` value: ffmpeg::Error(1381258232: Stream not found)', av1an-core/src/broker.rs:128:69
    thread '<unnamed>' panicked at 'called `Result::unwrap()` on an `Err` value: Any { .. }', av1an-core/src/broker.rs:53:35
    thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Any { .. }', av1an-core/src/settings.rs:868:21
    

    The same happens for the most recent commit https://github.com/master-of-zen/Av1an/commit/57cbb2930664d26fdd9b8b6624e2190ee07a2f20

    Scene detection
      00:02:10 [###################################################################] 100%  21312/21312 (163.14 fps, eta 0s)
    Queue 138 Workers 3 Passes 2
    Params: --threads=8 --cpu-used=6 --end-usage=q --cq-level=30 --tile-columns=2 --tile-rows=1 --kf-max-dist=240 --kf-min-dist=12
    ⠐ 00:01:17 [#>--------------------------------------------------------------------]   2%  407/21312 (5.19 fps, eta 67m)
    thread '<unnamed>' panicked at 'called `Result::unwrap()` on an `Err` value: ffmpeg::Error(1381258232: Stream not found)', av1an-core/src/broker.rs:206:70
    ⠁ 00:02:20 [##>-------------------------------------------------------------------]   3%  687/21312 (4.88 fps, eta 70m)
    thread '<unnamed>' panicked at 'called `Result::unwrap()` on an `Err` value: ffmpeg::Error(1381258232: Stream not found)⠐ 00:02:22 [##>-------------------------------------------------------------------]   3%  711/21312 (4.97 fps, eta 69m)
    thread '<unnamed>' panicked at 'called `Result::unwrap()` on an `Err` value: ffmpeg::Error(1381258232: Stream not found)', av1an-core/src/broker.rs:206:70
    thread '<unnamed>' panicked at 'called `Result::unwrap()` on an `Err` value: Any { .. }', av1an-core/src/broker.rs:161:35
    thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Any { .. }', av1an-core/src/settings.rs:944:21
    
    opened by niklaskorz 5
  • Daemonized Av1an

    Av1an running as a daemon with a client sending it new videos to chew through could be pretty pog.

    You would have two components: the Client that would send the parameters to the Daemon and the Daemon that would perform scene detection on its inputs and add the chunks to its ever growing encoding Queue.

    This way we could process as many videos as possible and as quickly as possible.

    This would technically fix #220 and we don't have to bloat Av1an with some internal scripting language and whatever and just use other more suitable tools for the job.

    opened by lastrosade 2
  • Disable -c mkvmerge on Windows, or work around command length limitations

    From what I've gathered, mkvmerge will not work properly if there's more than 739 chunks as the command length exceeds 8192 chars.

    This means on Windows there needs to be a workaround for this, especially when using --enable-keyframe-filtering=2 with aomenc as this breaks ffmpeg muxing, so mkvmerge is the only way to mux those.

    opened by n00mkrad 3
  • [0.2.1-2] aom encoder not working ("Specify stream dimensions")

    I'm using the latest release from arch community repository (av1an 0.2.1-2). When trying to encoding anything in aom I get this error:

    Scene detection
      [00:00:50] [########################################################################################] 100% 7221/7221 (141.64 fps, eta 0s)
    Queue: 48 Workers: 8 Passes: 2
    Params: --threads=8 --cpu-used=6 --end-usage=q --cq-level=30 --tile-columns=2 --tile-rows=1 --kf-max-dist=240 --kf-min-dist=12
    ⠤ [00:00:00] [---------------------------------------------------------------------------------------------]   0% 0/7221 (0.00 fps, eta 0s)
    thread '<unnamed>' panicked at 'called `Result::unwrap()` on an `Err` value: "Encoder encountered an error on chunk 41: Output { status: ExitStatus(ExitStatus(256)), stdout: \"\", stderr: \"Fatal: Specify stream dimensions with --width (-w)  and --height (-h)\\n\" }"', av1an-core/src/target_quality.rs:217:45
    note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
    thread '<unnamed>' panicked at 'called `Result::unwrap()` on an `Err` value: "Encoder encountered an error on chunk 15: Output { status: ExitStatus(ExitStatus(256)), stdout: \"\", stderr: \"Fatal: Specify stream dimensions with --width (-w)  and --height (-h)\\n\" }"', av1an-core/src/target_quality.rs:217:45
    thread '<unnamed>' panicked at 'called `Result::unwrap()` on an `Err` value: "Encoder encountered an error on chunk 44: Output { status: ExitStatus(ExitStatus(256)), stdout: \"\", stderr: \"Fatal: Specify stream dimensions with --width (-w)  and --height (-h)\\n\" }"', av1an-core/src/target_quality.rs:217:45
    thread 'thread '<unnamed><unnamed>' panicked at '' panicked at 'called `Result::unwrap()` on an `Err` value: "Encoder encountered an error on chunk 46: Output { status: ExitStatus(ExitStatus(256)), stdout: \"\", stderr: \"Fatal: Specify stream dimensions with --width (-w)  and --height (-h)\\n\" }"called `Result::unwrap()` on an `Err` value: "Encoder encountered an error on chunk 45: Output { status: ExitStatus(ExitStatus(256)), stdout: \"\", stderr: \"Fatal: Specify stream dimensions with --width (-w)  and --height (-h)\\n\" }"', ', av1an-core/src/target_quality.rsav1an-core/src/target_quality.rs::217217::4545
    
    thread '<unnamed>' panicked at 'called `Result::unwrap()` on an `Err` value: "Encoder encountered an error on chunk 42: Output { status: ExitStatus(ExitStatus(256)), stdout: \"\", stderr: \"Fatal: Specify stream dimensions with --width (-w)  and --height (-h)\\n\" }"', av1an-core/src/target_quality.rs:217:45
    ⠦ [00:00:00] [---------------------------------------------------------------------------------------------]   0% 0/7221 (0.00 fps, eta 0s)
    thread '<unnamed>' panicked at 'called `Result::unwrap()` on an `Err` value: "Encoder encountered an error on chunk 43: Output { status: ExitStatus(ExitStatus(256)), stdout: \"\", stderr: \"Fatal: Specify stream dimensions with --width (-w)  and --height (-h)\\n\" }"', av1an-core/src/target_quality.rs:217:45
    thread '<unnamed>' panicked at 'called `Result::unwrap()` on an `Err` value: "Encoder encountered an error on chunk 40: Output { status: ExitStatus(ExitStatus(256)), stdout: \"\", stderr: \"Fatal: Specify stream dimensions with --width (-w)  and --height (-h)\\n\" }"', av1an-core/src/target_quality.rs:217:45
    thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Any { .. }', av1an-core/src/settings.rs:868:21
    

    Command used:

    av1an --chunk-method lsmash --encoder aom --target-quality 96 --min-q 8 --max-q 35 -i "ingl.mkv"

    Specifying video params with width and height does not fix anything.

    The file to be encoded:

    Filename:               ingl.mkv
    File size:              1160424704 (1.17GB, 1.09GiB)
    Container format:       Matroska (MKV)
    Duration:               00:05:01.18
    Pixel dimensions:       1920x1080
    Sample aspect ratio:    1:1
    Display aspect ratio:   16:9
    Scan type:              Progressive scan
    Frame rate:             23.98 fps
    Bit rate:               30824 kb/s
    Streams:
        #0: Video, H.264 (High Profile level 4.1), yuv420p (tv, bt709), 1920x1080 (SAR 1:1, DAR 16:9), 23.98 fps
        #1: Audio (eng), DCA (DTS Coherent Acoustics), 48000 Hz, 5.1(side)
    

    Edit: aomenc version av1 - AOMedia Project AV1 Encoder v3.2.0 (default)

    opened by mxsrm 7
  • Error compiling for `ffmpeg-sys-next v4.4.0`, Package libavutil was not found in the pkg-config search path

    Compile from scratch, not on top of existing files

    D:\Video\av1>cargo build --release Compiling autocfg v1.0.1 Compiling libc v0.2.105 Compiling proc-macro2 v1.0.32 Compiling unicode-xid v0.2.2 Compiling winapi v0.3.9 Compiling syn v1.0.81 Compiling memchr v2.4.1 Compiling jobserver v0.1.24 Compiling version_check v0.9.3 Compiling cfg-if v1.0.0 Compiling crossbeam-utils v0.8.5 Compiling lazy_static v1.4.0 Compiling crossbeam-epoch v0.9.5 Compiling pkg-config v0.3.22 Compiling bitflags v1.3.2 Compiling log v0.4.14 Compiling regex-syntax v0.6.25 Compiling rayon-core v1.9.1 Compiling vcpkg v0.2.15 Compiling rustversion v1.0.5 Compiling semver v1.0.4 Compiling glob v0.3.0 Compiling scopeguard v1.1.0 Compiling serde_derive v1.0.130 Compiling ryu v1.0.5 Compiling bindgen v0.54.0 Compiling anyhow v1.0.44 Compiling tinyvec_macros v0.1.0 Compiling either v1.6.1 Compiling cfg-if v0.1.10 Compiling lazycell v1.3.0 Compiling matches v0.1.9 Compiling peeking_take_while v0.1.2 Compiling shlex v0.1.1 Compiling serde v1.0.130 Compiling rustc-hash v1.1.0 Compiling radium v0.5.3 Compiling unicode-bidi v0.3.7 Compiling unicode-width v0.1.9 Compiling unicode-segmentation v1.8.0 Compiling ntapi v0.3.6 Compiling percent-encoding v2.1.0 Compiling bytes v1.1.0 Compiling lexical-core v0.7.6 Compiling failure_derive v0.1.8 Compiling adler v1.0.2 Compiling gimli v0.25.0 Compiling rust_hawktracer_proc_macro v0.4.1 Compiling once_cell v1.8.0 Compiling rust_hawktracer_normal_macro v0.4.1 Compiling funty v1.1.0 Compiling static_assertions v1.1.0 Compiling vec_map v0.8.2 Compiling arrayvec v0.5.2 Compiling tap v1.0.1 Compiling strsim v0.8.0 Compiling noop_proc_macro v0.3.0 Compiling serde_json v1.0.68 Compiling byte-slice-cast v1.2.0 Compiling vapoursynth-sys v0.3.0 Compiling wyz v0.2.0 Compiling rustc-demangle v0.1.21 Compiling paste v1.0.5 Compiling plotters-backend v0.3.2 Compiling sysinfo v0.20.5 Compiling bitstream-io v1.2.0 Compiling encode_unicode v0.3.6 Compiling itoa v0.4.8 Compiling ffmpeg-next v4.4.0 Compiling arrayvec v0.7.2 Compiling y4m v0.7.0 Compiling smawk v0.3.1 Compiling pin-project-lite v0.2.7 Compiling std_prelude v0.2.12 Compiling number_prefix v0.4.0 Compiling splines v4.0.3 Compiling strsim v0.10.0 Compiling shlex v1.1.0 Compiling num-traits v0.2.14 Compiling num-integer v0.1.44 Compiling memoffset v0.6.4 Compiling rayon v1.5.1 Compiling num-bigint v0.3.3 Compiling num-rational v0.3.2 Compiling miniz_oxide v0.4.4 Compiling tokio v1.13.0 Compiling cc v1.0.71 Compiling proc-macro-error-attr v1.0.4 Compiling proc-macro-error v1.0.4 Compiling nom v5.1.2 Compiling nom v6.1.2 Compiling clang-sys v0.29.3 Compiling tinyvec v1.5.0 Compiling itertools v0.10.1 Compiling textwrap v0.11.0 Compiling heck v0.3.3 Compiling form_urlencoded v1.0.1 Compiling rust_hawktracer v0.7.0 Compiling addr2line v0.16.0 Compiling plotters-svg v0.3.1 Compiling unicode-normalization v0.1.19 Compiling libloading v0.5.2 Compiling libz-sys v1.1.3 Compiling libgit2-sys v0.12.24+1.3.0 Compiling backtrace v0.3.62 Compiling num_cpus v1.13.0 Compiling quote v1.0.10 Compiling which v4.2.2 Compiling crossbeam-channel v0.5.1 Compiling aho-corasick v0.7.18 Compiling object v0.27.1 Compiling rustc_version v0.4.0 Compiling err-derive v0.2.4 Compiling atty v0.2.14 Compiling time v0.1.44 Compiling miow v0.3.7 Compiling terminal_size v0.1.17 Compiling ansi_term v0.12.1 Compiling ctrlc v3.2.1 Compiling bitvec v0.19.5 Compiling idna v0.2.3 Compiling simd_helpers v0.1.0 Compiling plotters v0.3.1 Compiling regex v1.5.4 Compiling cexpr v0.4.0 Compiling clap v2.33.3 Compiling console v0.15.0 
Compiling mio v0.7.14 Compiling crossbeam-deque v0.8.1 Compiling url v2.2.2 Compiling chrono v0.4.19 Compiling stfu8 v0.2.4 Compiling synstructure v0.12.6 Compiling unicode-linebreak v0.1.2 Compiling indicatif v0.17.0-beta.1 Compiling thiserror-impl v1.0.30 Compiling num-derive v0.3.3 Compiling enum-iterator-derive v0.7.0 Compiling arg_enum_proc_macro v0.3.1 Compiling strum_macros v0.22.0 Compiling vergen v3.0.4 (https://github.com/xiph/rav1e#9417a4df) Compiling vergen v5.1.16 Compiling getset v0.1.1 Compiling structopt-derive v0.4.18 Compiling thiserror v1.0.30 Compiling enum-iterator v0.7.0 Compiling strum v0.22.0 Compiling structopt v0.3.25 Compiling failure v0.1.8 Compiling av-data v0.3.0 Compiling flexi_logger v0.19.5 Compiling av-bitstream v0.1.2 Compiling nasm-rs v0.2.1 Compiling v_frame v0.2.4 (https://github.com/xiph/rav1e#9417a4df) Compiling vapoursynth v0.3.0 Compiling av-format v0.3.1 Compiling path_abs v0.5.1 Compiling dashmap v4.0.2 Compiling textwrap v0.14.2 Compiling rav1e v0.5.0-beta.2 (https://github.com/xiph/rav1e#9417a4df) Compiling av-ivf v0.2.2 Compiling ffmpeg-sys-next v4.4.0 error: failed to run custom build command for ffmpeg-sys-next v4.4.0

    Caused by: process didn't exit successfully: D:\Video\av1\target\release\build\ffmpeg-sys-next-b583d6f23b6008db\build-script-build (exit code: 101) --- stdout Could not find ffmpeg with vcpkg: Could not look up details of packages in vcpkg tree could not read status file updates dir: The system cannot find the path specified. (os error 3) cargo:rerun-if-env-changed=LIBAVUTIL_NO_PKG_CONFIG cargo:rerun-if-env-changed=PKG_CONFIG_x86_64-pc-windows-msvc cargo:rerun-if-env-changed=PKG_CONFIG_x86_64_pc_windows_msvc cargo:rerun-if-env-changed=HOST_PKG_CONFIG cargo:rerun-if-env-changed=PKG_CONFIG cargo:rerun-if-env-changed=PKG_CONFIG_PATH_x86_64-pc-windows-msvc cargo:rerun-if-env-changed=PKG_CONFIG_PATH_x86_64_pc_windows_msvc cargo:rerun-if-env-changed=HOST_PKG_CONFIG_PATH cargo:rerun-if-env-changed=PKG_CONFIG_PATH cargo:rerun-if-env-changed=PKG_CONFIG_LIBDIR_x86_64-pc-windows-msvc cargo:rerun-if-env-changed=PKG_CONFIG_LIBDIR_x86_64_pc_windows_msvc cargo:rerun-if-env-changed=HOST_PKG_CONFIG_LIBDIR cargo:rerun-if-env-changed=PKG_CONFIG_LIBDIR cargo:rerun-if-env-changed=PKG_CONFIG_SYSROOT_DIR_x86_64-pc-windows-msvc cargo:rerun-if-env-changed=PKG_CONFIG_SYSROOT_DIR_x86_64_pc_windows_msvc cargo:rerun-if-env-changed=HOST_PKG_CONFIG_SYSROOT_DIR cargo:rerun-if-env-changed=PKG_CONFIG_SYSROOT_DIR

    --- stderr thread 'main' panicked at 'called Result::unwrap() on an Err value: "pkg-config" "--libs" "--cflags" "libavutil" did not exit successfully: exit code: 1 --- stderr Package libavutil was not found in the pkg-config search path. Perhaps you should add the directory containing libavutil.pc' to the PKG_CONFIG_PATH environment variable No package 'libavutil' found ', C:\Users\Ahmad\.cargo\registry\src\github.com-1ecc6299db9ec823\ffmpeg-sys-next-4.4.0\build.rs:701:14 note: run withRUST_BACKTRACE=1` environment variable to display a backtrace warning: build failed, waiting for other jobs to finish... error: build failed

    opened by ahmadrr 2
  • Encoding on networked slaves?

    Hi, I just came across this, great work!! Did you think about adding support for encoding on networked slaves? Encoding on one docker container takes a long time, it would speed up things a lot if we were able to add more containers/pods to the cpu pool :)

    opened by BurnerGR 2
  • Print progress when running without a terminal window

    Is it possible to print the progress (encoded frame count) when av1an is running without a graphical shell?

    Right now, making a wrapper is tricky because it's not possible to read av1an's progress, apart from the amount of finished chunks which can be read from the log file.

    ffmpeg, for example, does print the progress no matter the shell.

    opened by n00mkrad 6
  • [Feature Request][aom] av1an to control the denoising and noise table generation process

    aomenc's denoising sucks so I had the idea that av1an could optionally take ffmpeg denoising filter parameters. Currently the workflow is split the video into chunks, queue up and instantiate workers: each worker runs probes and then encodes the chunk. The optional new workflow would be split the video into chunks, queue up and instantiate workers: each worker will then run ffmpeg to denoise the chunk according to manual parameters, noise-model generates a grain table with the original and denoised chunk, worker encodes the chunk. Maybe add a toggle for whether the original or denoised chunk is encoded.

    Then we can combine av1an's chunked vmaf-q probing/encoding with non-crappy denoising for the best of both worlds.

    A simpler solution is for the user to generate a noise table for the entire source video and pass in whatever original or denoised input they want to be encoded and av1an would split the noise table file for the correct chunks, but the table generation seems to be completely single-threaded so would really benefit from being chunked...

    I don't know any Rust but I've poked through the source code so may take a stab at a PR in the future but not for a while. Open questions include whether to apply the other video filters at the same time as denoising for the most accurate modelling, but this would also mean they would have to be run on the source file too resulting in large yuv intermediates, Although it wouldn't be so bad since the number of active workers is not infinite.

    opened by clidx 2
  • [REQ] xvc codec support

    Just discovered this interesting codec "approach":

    The xvc codec is primarily based on compression tools that originates from MPEG standards such as AVC/H.264 and HEVC, but it also includes technology that goes beyond those standards, enabling a high level of compression performance at reasonable encoding and decoding complexity.

    Official website: https://xvc.io/ Git: https://github.com/divideon/xvc

    Hope that inspires !

    opened by forart 0
  • Frame count mismatch errors

    When using a filter like "fps" in ffmpeg, av1an fails to concatenate or to count frames correctly.

    opened by lastrosade 0
Releases(0.2.0)
  • 0.2.0(Nov 1, 2021)

    Rust rewrite

    The first and most important change is that Av1an was completely rewritten in Rust, which improved the stability, performance, and maintainability of the project and allows us to leverage tools and features that would simply not be possible with Python. Imagine not having syntax/typing errors at runtime :exploding_head:

    Scene detection methods

    Now, Av1an uses parts of the Rav1e encoder for scene change detection.

    To sum it up

    Lots of QOL improvements all around: faster encoding, static versions of av1an, faster target-quality, better and faster scene-detection, lower RAM usage, easier build process, etc.

    There is just too much to cover :D

    How is that possible?

    The development, growth, and success of Av1an couldn't be possible without contributors that care about and develop Av1an :kissing_heart: :heart: Those people are listed in descending order, but not by significance:

    @BlueSwordM @redzic @shssoichiro @luigi311 @n9Mtq4 @mxsrm @ishitatsuyuki @natis1 @Nestorfish @nathanielcwm

    Note

    Av1an now requires VapourSynth (see the release page).

    Source code(tar.gz)
    Source code(zip)
    av1an.zip(17.34 MB)
  • 1.12(Jul 24, 2020)

    VVC support

    NOTE: VVC is not yet finalized and is still at the test-model stage, so it can crash, fail to work, or behave strangely.

    • Added experimental VVC support
    • Temporary YUV files are created at the start of encoding each segment and removed afterwards.
    • Concatenation to a VVC bitstream is done by parcatStatic from the VVC repository.

    Usage

    Encoding requires a compiled encoder, the bitstream concatenator, and a config file from the VVC repository.

    • EncoderAppStatic and parcatStatic must be renamed to vvc_encoder and vvc_concat and placed where they can be reached: the working directory or somewhere in PATH. Place the chosen encoder config file in the encoding folder.

    • The encoder is selected with -enc vvc

    • The config file is passed to Av1an with --vvc_conf CONFIG_FILE

    • Required video encoding parameters: -wdt X (video width), -hgt X (video height), -fr X (framerate), -q X (quantizer). Example: -v " -wdt 640 -hgt 360 -fr 23.98 -q 30 " (a combined invocation is sketched after this list)

    • After the encode is done, an output file with the .h266 extension will be created. It can be decoded to .yuv with the compiled VVC decoder: DecoderAppStatic -d 8 -b encoded.h266 -o output.yuv. Keep in mind that encoding time is excessive.
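
    A hypothetical end-to-end VVC invocation, assuming the setup above (the config file name is illustrative; use whichever config file you picked from the VVC repository):

    av1an -i input.mkv -enc vvc --vvc_conf randomaccess.cfg -v " -wdt 1920 -hgt 1080 -fr 23.98 -q 32 " -o output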

    Target VMAF for Rav1e, SVT-AV1, VPX

    • min_cq, max_cq changed to min_q, max_q
    • Default min_q and max_q are set based on the encoder.
    • Encoders must be in a mode that requires setting a quantizer (--crf, --cq-level, --quantizer, etc.)

    Examples:
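
    Illustrative quantizer/quality-mode flags for some of the supported encoders (taken from each encoder's own CLI; check the encoder documentation for the authoritative option names):

    aomenc: --end-usage=q --cq-level=30
    vpxenc: --end-usage=q --cq-level=30
    rav1e:  --quantizer 100
    x265:   --crf 25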

    Changed default passes for encoders

    svt_av1: 1
    rav1e:   1
    aom:     2
    vpx:     2
    x265:    1
    vvc:     1
    

    Also, default encoding settings changed accordingly

    Source code(tar.gz)
    Source code(zip)
  • 1.11(Jul 22, 2020)

    There have been ~300 commits since the last release. Lots of changes, so let's keep it short.

    x265 + Target_VMAF

    • Added x265
    • Added target_vmaf support for x265. Usage: -enc x265 for the encoder, --vmaf_target NUM as usual

    Better target_vmaf

    • Instead of the default mean VMAF, the 25th percentile is used, making the worst parts of a scene weigh more in the calculation and resulting in better and more consistent quality.
    • Added early skips when the extreme Q values at the ends of the range are already more than enough, or not enough, to reach the target VMAF.

    Dynamic search for target Q value for the target_vmaf

    OLD: evenly spaced probes (blue "x" markers); NEW: dynamic search (green pentagon markers).

    5 probes are now enough to cover the extreme ranges.

    New splitting method: aom_keyframes

    Usage: --split_method aom_keyframes

    aom_keyframes uses the first pass of aomenc to determine where the encoder will place keyframes and uses this information for splitting, resulting in zero loss of encoding efficiency from segmenting with this encoder.

    Better error handling

    Encoder errors are now printed to the terminal

    Tons of refactoring and optimizations

    • Reducing complexity
    • Moving all the functions out of av1an.py into the Av1an module
    • Overall better project organization
    Source code(tar.gz)
    Source code(zip)
  • 1.10(May 19, 2020)

    Target VMAF

    The "Target VMAF" feature has a really simple goal, instead of guessing what the CQ value of your encoder will give you in regards to the resulting quality, we set the VMAF score we want to achieve and let the algorithm get the closest CQ value that will result in that score, for each split segment. Which simultaneously achieve 3 things, if compared to usual, single value CQ encode.

    1. Ensures that complex scenes receive more bit rate to achieve the target quality.
    2. Increases the quantizer for simple scenes without dropping below the target quality, saving bit rate.
    3. Yields a lower total bit rate.

    From my testing, the resulting size can be 50-80% of a usual encode, with great visual quality.

    VMAF plotting

    The plot contains the VMAF score for each individual frame. Plotting after the encode is performed if the --vmaf flag is set or --vmaf_target is used. The plot legend displays the mean average and the lower 1st, 25th, and 75th percentiles, which in combination with the plot should be insightful enough for judging the quality of the encode, instead of a single VMAF value for the whole video.

    Example

    This is a plot of an encode that used Target VMAF 96 (don't mind the NaN in the mean; it's fixed at the time of this post, but I don't want to re-encode :) )

    The 1st percentile you can see on this plot shows the likely VMAF score of complex scenes, which usually involve a lot of movement, changes in perspective, zooming in and out, and a change of video context on every frame.

    Usage

    To try it, simply run your default constant-quality encode with --vmaf_target N, where N is the VMAF score you want to achieve; I suggest something in the 90-96 range.

    --vmaf_target sets the VMAF score that Av1an will aim for.

    --min_cq sets the lower boundary for the VMAF probes and limits the minimum CQ value at which the video can be encoded. Default: 25.

    --max_cq sets the upper boundary for the VMAF probes and limits the maximum CQ value at which the video can be encoded. Default: 50.

    --vmaf_steps sets the number of probes used to find the best CQ value for the target VMAF. Default: 4. If min_cq and max_cq are changed so that the distance between them increases, make sure to set enough steps that there is a probe for roughly every ~5 CQ of distance.
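
    A hypothetical invocation combining these flags (the parameter values are illustrative, not a recommendation):

    av1an -i input.mkv -v " --cpu-used=6 --end-usage=q --threads=4 " --vmaf_target 94 --min_cq 20 --max_cq 50 --vmaf_steps 5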

    For more information about previous target VMAF refer to 1.8 release. This is evolution upon that method.

    Also worth mentioning: Target VMAF currently works only with the reference AV1 encoder.

    Vmaf Plotting as separate package

    VMAF plotting is available as a separate package and only needs the result XML file from VMAF or FFmpeg's libvmaf to work. GITHUB_REPO

    Extra splits

    -xs, --extra_splits adds cuts every N frames on splits that are longer than N, spreading the cuts evenly and intelligently. This option helps split big scenes, or files that are one long scene (like camera recordings), into splits that parallelize better.

    For example:

    --extra_splits 400: for a split that goes from frame 14000 to 14900, the distance between cuts is 900; 900/400 = 2.25, which rounds to 2. Av1an will try to find the two keyframes closest to frames 14300 and 14600 and place the splits there. For another split from 0 to 500, Av1an will find the keyframe closest to frame 250 and place a cut there. Splits with fewer than 400 frames are not affected.

    Source code(tar.gz)
    Source code(zip)
  • 1.9(May 13, 2020)

    (Click on the video to watch all new features together.) This is a quality-of-life update; most changes are under the hood and just make Av1an run better :)

    Dynamic progress bar

    It shows progress for all workers and works with AOM and Rav1e; SVT-AV1 will keep working with the old per-chunk frame count updates.

    Config files

    On the first use of -cfg FILE, the current encoder, encoding parameters, audio, and FFmpeg settings are saved to the file; this file can then be reused with -cfg FILE without having to type the same settings again.
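
    A hypothetical example of the described behaviour (the file name my_settings is illustrative):

    av1an -i first.mkv -v " --cpu-used=4 --end-usage=q --cq-level=30 " -cfg my_settings
    av1an -i second.mkv -cfg my_settings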

    Batch encode

    Multiple files can be encoded by passing them separated by spaces: -i file1 file2. They will all be encoded with the same settings. PySceneDetect runs on each file individually, so do not set -s scene_file. Also keep in mind that --resume will try to resume the first file in the queue.
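
    For example (file names are illustrative):

    av1an -i episode01.mkv episode02.mkv episode03.mkv -v " --cpu-used=6 --end-usage=q --cq-level=30 "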

    AUR

    Av1an is now available for installation from the Arch Linux AUR

    Source code(tar.gz)
    Source code(zip)
  • 1.8(Apr 24, 2020)

    This is a big update. I want to thank all the collaborators that worked on features, or helped with development. I really appreciate everyone's engagement with this project

    Pip package

    • The pip package allows super easy installation on all platforms with a single command.
    • Usage is as simple as av1an -i file all_params...; the package is automatically available system-wide.
    • The only extra requirements are your encoder of choice and FFmpeg being installed.

    Vmaf plotting

    If --vmaf is specified (and VMAF is configured correctly on the system), at the end of the encode a VMAF plot is drawn for each segment of the video, where Y is VMAF, X is frames, and a median line is drawn.

    Target Vmaf mode

    !This feature is experimental. It will certainly be changed and improved in the future. Desired results are not guaranteed, so use with caution ;) It works best with 720p/1080p videos. Feedback and suggestions are appreciated :+1:

    By making a couple of short, few-fps encoding probes at the fastest cpu-used setting, it's possible to interpolate the relation between CQ values and VMAF and, with some error, predict what VMAF score the full encode will have. Example of interpolation based on 4 probes: the orange dot is the chosen CQ value extracted from the plot to hit the targeted VMAF; the red crosses are probe results. This plot is generated for each segment in the encoding folder.

    Result with same video as in previous vmaf plot :

    --tg_vmaf N - specify to use Target VMAF mode, where N is the desired VMAF number; the most stable results are in the 90-95 VMAF range.

    --vmaf_steps N - number of evenly spaced probes used to interpolate the VMAF-to-CQ relation. N must be bigger than 3; 4-6 probes is optimal. Default: 4.

    --vmaf_error N - decreases the initial VMAF values used for interpolation. Increasing the number results in a lower CQ and a higher final VMAF score; use it to correct the whole VMAF plot. As a starting point, if the target VMAF undershoots, increase the value by the undershoot amount. Default: 0.

    --min_cq, --max_cq - minimum and maximum CQ values used in the interpolation. Use them to limit the CQ value range. Default: 20, 63.
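
    A hypothetical invocation using these flags (the values are illustrative):

    av1an -i input.mkv --tg_vmaf 93 --vmaf_steps 5 --min_cq 20 --max_cq 55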

    Added VP9, VP8 support

    Added support for vpx

    Source code(tar.gz)
    Source code(zip)
  • 1.7.1(Apr 6, 2020)

    Vmaf

    • !Requires FFmpeg with libvmaf enabled
    • Added --vmaf option. Show vmaf for each encoded segment.
    • Added --vmaf_path for custom vmaf models path.

    Refactoring

    • Reworked all lookups; now faster and better.

    Instant resume

    • All data about the total frame count and chunks is stored in a done.txt file, and the encode can be resumed instantly.

    Specify temp folder

    • Added --temp option to specify a custom temporary folder to use.
    Source code(tar.gz)
    Source code(zip)
  • 1.7(Mar 18, 2020)

    Boosting

    Decreases the CQ value of an encoded scene depending on the scene's brightness (AOM only). Intended to improve the encoding quality of dark scenes.

    How it works

    For each scene, the geometric mean of brightness is calculated from the average brightness of every frame. If the brightness of a scene is lower than 128, CQ will be decreased; lower brightness values receive a bigger CQ decrease. The CQ change maps onto the 0-128 brightness range, meaning the lowest CQ value is reached at brightness 0, half of that change at 64, and so on. For example (encoding cq = 40):

    Usage

    • Enable with --boost
    • Option -br will set maximum CQ change from original. Default: 15
    • Option -bl will set hard cap on how low CQ can be set by boosting. Default: 10

    Example

    If --boost -br 30 -bl 15 is set with --cq-level=30, it means the boost CQ range is set to 30 and the hard limit for boosting is 15. In this example all scenes with brightness 64 and lower will have the minimum CQ value (15).
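
    A minimal sketch of the implied mapping (an assumption based on the description above, not the exact implementation), for brightness below 128:

    boosted_cq = max(bl, cq - br * (128 - brightness) / 128)

    With cq = 30, br = 30, bl = 15 and brightness = 64 this gives max(15, 30 - 15) = 15, matching the example.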

    Source code(tar.gz)
    Source code(zip)
  • 1.6.1(Mar 17, 2020)

    • Better logging
    • Added --keep option. Not deleting temporally folders after encode is finished
    • Fix for Windows terminal overflow
    • Files with spaces are fine now :)
    Source code(tar.gz)
    Source code(zip)
  • 1.6(Mar 2, 2020)

    This is a small update, preparing for 1.7 ;)

    Example of Av1an with Threadripper 3970X

    ~9-12 Fps 1080p AOM encoding with cpu-used 3

    Finished personal encodes:

    The End of Evangelion (1997) 1080p

    1920x1080 23.98 fps 01:26:47 bitrate: 1880 kb/s
    Av1an: -w 32 -v ' --end-usage=vbr --target-bitrate=2000 --aq-mode=2 --threads=4 
    --arnr-maxframes=15 --enable-fwd-kf=1 --lag-in-frames=25 --cpu-used=3 ' 
    Finished in 15690.8s (4:03:32)
    Size: 1.14 GB
    

    Link may be in the AV1 Discord :)

    Fractalicious8 4K

    3840x1920 30 fps 04:49.28 12665 kb/s
    Av1an: -w 16 -v "--end-usage=q --cq-level=30 --cpu-used=3 --threads=16 "
    Queue: 80 Workers: 16 Passes: 2
    Finished in 8767.3s (about 2 h 26 min)
    

    Original Encode

    Fractalicious8 1080p

    1920x960 30 fps 04:49.28 3720 kb
    Av1an: -ff ' -vf scale=1920:-1:flag' -v  ' --end-usage=q --cq-level=30 
    --cpu-used=3 --threads=16
    Queue: 80 Workers: 24 Passes: 2
    Finished in 1538.5s (about 25 min)
    

    Original Encode

    Reworked logging

    Example of logging file

    • The new logging file contains all info about the encode: started and finished chunks, their frame counts, and individual speed
    • If no log file is set, one will be created in .temp/

    Testing

    Started work on making automated tests for Av1an

    Troubleshooting and weird errors

    • A lot of fixes for ffmpeg usage
    • General code improvements
    Source code(tar.gz)
    Source code(zip)
    Av1an_1.6_windows.zip(70.24 MB)
  • 1.5(Feb 18, 2020)

    Major Changes

    Progress is shown in frames

    • Click on image for video.
    • The bar updates after every finished chunk by that chunk's number of frames from the source clip
    • The total size of the bar is the source's number of frames

    FFprobe deprecated

    FFprobe is no longer needed for Av1an; all FFprobe calls were replaced with FFmpeg.

    ~25 times faster frame count checking

    Redone frame count checking. Speed comparison for the frame count check: FFprobe: 4.99s, FFprobe with autothreads: 1.039s, FFmpeg: 0.209s

    Minor changes

    • General code improvements
    • Console look
    • Bug fixes
    Source code(tar.gz)
    Source code(zip)
    Av1an_1.5_windows.zip(70.26 MB)
  • 1.4(Feb 14, 2020)

    Major Changes

    Resuming encoding without loss of finished chunks.

    • Click on image for video.

    • Only finished chunks are saved.

    • The .temp folder must be present and unchanged from when encoding was stopped.

    • All stages before encoding must be completed for resuming (scenes split, audio processed).

    • Resuming skips scene detection, video split, audio processing.

    Checking encoded clips for errors in frame count.

    • Click on image for video. The frame rate was changed so that the encoded clips trigger warnings.

    There have been reports that some files with some encoders at certain settings drop frames at encode time ("works on my machine"), so this measure was included to warn about possible problems as soon as possible. If the number of frames needs to change (changing frame rate, etc.), use the --no_check option, which disables frame checking completely; it can also save you a second or two on your 80-hour encodes :)

    Improved console look

    Best one yet :)

    Minor Improvements

    • Better audio checking and extraction.
    • Fixed bug with ffmpeg concat not working.
    • Added an option to skip splitting completely and encode the whole file with a single encoder instance: -s 0. (boring)
    • Faster and better default settings for AOM and Rav1e, plus enabled multithreading and a reduced number of workers.
    • Overall better error handling.
    • Logging now works on Windows.

    Great appreciation to 🥇 @nicomem 🥇 for a lot of changes, pull requests, and improvements

    Source code(tar.gz)
    Source code(zip)
    Av1an_1.4_windows.upd.Rav1e.Aom.zip(92.23 MB)
  • 1.3(Feb 6, 2020)

    • PySceneDetect integrated, less complexity, faster.
    • Autobuilds for Windows Executable at Appveyor.
    • Windows builds only require FFmpeg, FFprobe, Encoder in main folder with executable.
    • At the moment, the working encoders for Windows are AOM (full) and Rav1e (1-pass; 2-pass is bugged); SVT-AV1 doesn't want to work with pipes at all :(
    • Attached files are ready to use and contain the latest Rav1e and AOM
    Source code(tar.gz)
    Source code(zip)
    Av1an_1.3_windows.zip(88.68 MB)
  • 1.2(Feb 5, 2020)

    • Major fix for AOM/SVT-AV1 dropping encoded frames on some sources.
    • Now supporting all FFmpeg options with -ff
    • Now supporting all PySceneDetect options with -sc
    • and some more
    Source code(tar.gz)
    Source code(zip)
  • 1.1(Feb 1, 2020)

    • Fully redone video splitting. FFmpeg segmenting is used instead of scenedetect + pymkv
    • Scenedetect results can be reused across runs for the same video. See -s for instructions
    • Removed dependence on python-pymkv
    • Cleaner console look, less lines, better presenting of useful information
    • Fixed printing of large amount of output to Windows console
    • Better error handling
    Source code(tar.gz)
    Source code(zip)
  • 1.0(Jan 27, 2020)

    • Splitting Video by Scenes

    • 8/10 bit encoding

    • AOMENC encoding 1,2 pass.

    • SVT-AV1 encoding 1,2 pass.

    • Rav1e encoding 1 pass only.

    • Avif AOMENC encoding.

    • Avif Rav1e encoding.

    Source code(tar.gz)
    Source code(zip)