Demo for the paper "Overlap-aware low-latency online speaker diarization based on end-to-end local segmentation"

Overview

Streaming speaker diarization

Overlap-aware low-latency online speaker diarization based on end-to-end local segmentation
by Juan Manuel Coria, Hervé Bredin, Sahar Ghannay and Sophie Rosset.

We propose to address online speaker diarization as a combination of incremental clustering and local diarization applied to a rolling buffer updated every 500ms. Every single step of the proposed pipeline is designed to take full advantage of the strong ability of a recently proposed end-to-end overlap-aware segmentation to detect and separate overlapping speakers. In particular, we propose a modified version of the statistics pooling layer (initially introduced in the x-vector architecture) to give less weight to frames where the segmentation model predicts simultaneous speakers. Furthermore, we derive cannot-link constraints from the initial segmentation step to prevent two local speakers from being wrongfully merged during the incremental clustering step. Finally, we show how the latency of the proposed approach can be adjusted between 500ms and 5s to match the requirements of a particular use case, and we provide a systematic analysis of the influence of latency on the overall performance (on AMI, DIHARD and VoxConverse).
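
As a rough illustration of the overlap-aware weighting idea (a minimal sketch, not the repository's implementation and not the exact Equation 2 of the paper), the statistics pooling step can down-weight frames where the local segmentation predicts more than one active speaker. Assuming frame-level features of shape (frames, dim) and per-frame, per-speaker segmentation scores in [0, 1]:

import torch

def overlap_aware_stats_pooling(features, segmentation, eps=1e-8):
    # Hypothetical sketch: weighted statistics pooling where overlapped frames count less.
    # features:     (frames, dim) frame-level embeddings
    # segmentation: (frames, speakers) local speaker activations in [0, 1]
    # returns:      (speakers, 2 * dim) pooled mean/std per local speaker
    pooled = []
    for k in range(segmentation.shape[1]):
        # Penalize speaker k's frames by the activity of the other speakers
        others = segmentation.sum(dim=1) - segmentation[:, k]
        weights = segmentation[:, k] * (1.0 - others).clamp(min=0.0)
        weights = weights / (weights.sum() + eps)
        mean = (weights.unsqueeze(1) * features).sum(dim=0)
        std = torch.sqrt((weights.unsqueeze(1) * (features - mean) ** 2).sum(dim=0) + eps)
        pooled.append(torch.cat([mean, std]))
    return torch.stack(pooled)

In the actual pipeline, this kind of weighting is exposed through the OverlappedSpeechPenalty block (see the API example below), whose gamma and beta parameters control the shape of the penalty.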

Citation

Paper currently under review.

Installation

  1. Create environment:

conda create -n diarization python=3.8
conda activate diarization

  2. Install the latest PyTorch version following the official instructions

  3. Install dependencies:

pip install -r requirements.txt

Usage

CLI

Stream a previously recorded conversation:

python main.py /path/to/audio.wav

Or use a real audio stream from your microphone:

python main.py microphone

This will launch a real-time visualization of the diarization outputs as they are produced by the system:

Example of a state of the real-time output plot

By default, the script uses step = latency = 500ms, and it sets reasonable values for all hyper-parameters. See python main.py -h for more information.

API

We provide various building blocks that can be combined to process an audio stream. Our streaming implementation is based on RxPY, but the functional module is completely independent.

The following example shows how to obtain overlap-aware speaker embeddings from a microphone stream, as described by Equation 2 in the paper:

import rx
import rx.operators as ops

from sources import MicrophoneAudioSource
from functional import FrameWiseModel, ChunkWiseModel, OverlappedSpeechPenalty, EmbeddingNormalization

mic = MicrophoneAudioSource(sample_rate=16000)

# Initialize independent modules
segmentation = FrameWiseModel("pyannote/segmentation")
embedding = ChunkWiseModel("pyannote/embedding")
osp = OverlappedSpeechPenalty(gamma=3, beta=10)
normalization = EmbeddingNormalization(norm=1)

# Branch the microphone stream to calculate segmentation
segmentation_stream = mic.stream.pipe(ops.map(segmentation))
# Join audio and segmentation stream to calculate speaker embeddings
embedding_stream = rx.zip(mic.stream, segmentation_stream).pipe(
    ops.starmap(lambda wave, seg: (wave, osp(seg))),
    ops.starmap(embedding),
    ops.map(normalization)
)

embedding_stream.subscribe(on_next=lambda emb: print(emb.shape))

mic.read()

Output:

(4, 512)
(4, 512)
(4, 512)
...
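
Because the functional module does not depend on RxPY, the same chain can also be applied to a single chunk without any stream. A minimal sketch, assuming waveform is one audio chunk in the format expected by the models defined above:

def embed_chunk(waveform):
    seg = segmentation(waveform)        # frame-level speaker activations
    weights = osp(seg)                  # down-weight overlapped frames
    emb = embedding(waveform, weights)  # one embedding per local speaker
    return normalization(emb)           # normalized (speakers, 512) embeddings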

Reproducible research

Table 1

In order to reproduce the results of the paper, use the following hyper-parameters:

Dataset       latency   tau     rho     delta
DIHARD III    any       0.555   0.422   1.517
AMI           any       0.507   0.006   1.057
VoxConverse   any       0.576   0.915   0.648
DIHARD II     1s        0.619   0.326   0.997
DIHARD II     5s        0.555   0.422   1.517

For instance, for a DIHARD III configuration, one would use:

python main.py /path/to/file.wav --latency=5 --tau=0.555 --rho=0.422 --delta=1.517 --output /output/dir

And then to obtain the diarization error rate:

from pyannote.metrics.diarization import DiarizationErrorRate
from pyannote.database.util import load_rttm

metric = DiarizationErrorRate()
hypothesis = load_rttm("/output/dir/output.rttm")
hypothesis = list(hypothesis.values())[0]  # Extract hypothesis from dictionary
reference = load_rttm("/path/to/reference.rttm")
reference = list(reference.values())[0]  # Extract reference from dictionary

der = metric(reference, hypothesis)
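
To report a single number over a whole test set, the same metric object can accumulate several files before being read out. A sketch assuming one hypothesis RTTM per file in /output/dir and matching reference RTTMs in a hypothetical /path/to/references directory:

from pathlib import Path
from pyannote.metrics.diarization import DiarizationErrorRate
from pyannote.database.util import load_rttm

metric = DiarizationErrorRate()
for hyp_path in sorted(Path("/output/dir").glob("*.rttm")):
    ref_path = Path("/path/to/references") / hyp_path.name  # hypothetical layout
    hypothesis = list(load_rttm(str(hyp_path)).values())[0]
    reference = list(load_rttm(str(ref_path)).values())[0]
    metric(reference, hypothesis)  # accumulate this file's errors

print(f"Aggregate DER = {abs(metric):.3f}")  # global DER over all accumulated files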

For convenience and to facilitate future comparisons, we also provide the expected outputs in RTTM format corresponding to every entry of Table 1 and Figure 5 in the paper. This includes the VBx offline baseline as well as our proposed online approach with latencies 500ms, 1s, 2s, 3s, 4s, and 5s.

Figure 5

License

MIT License

Copyright (c) 2021 Université Paris-Saclay
Copyright (c) 2021 CNRS

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
Comments
  • Is it possible to pass raw byte data into the pipeline?

    I'm currently working on an implementation where I occasionally receive raw byte data from a TCP socket. I want to pass this data into the pipeline, but the current AudioSource classes seem to be limited to microphone input and audio files. Does the current version support what I'm trying to implement, or do I have to write it myself?

    duplicate feature question 
    opened by chanleii 16
  • Question about changing model in system and real-time issue

    Hi @juanmc2005! It's me again, sorry for bothering you... I have several questions.

    1. I have already fine-tuned a segmentation model on another dataset (not AMI or VoxConverse) and stored it as a ".ckpt" checkpoint. I'm sure it can be loaded with ckpt_path = "path/where/store/ckpt/epoch=9_best.ckpt" and Model.from_pretrained(ckpt_path, strict=False). I want to replace the segmentation model in your system with it for some testing, but when I change it via segmentation: PipelineModel = Model.from_pretrained('path/where/store/ckpt/epoch=9_best.ckpt') and run "python -m diart.stream", the system still uses "pyannote/segmentation". Could you give me some hints?
    2. Is there any useful augmentation method for training or fine-tuning? I know one can add background noise or similar. Where should I add it?
    3. Can this system be used in real time (streaming)? If yes, how do I confirm it is "real-time", e.g. with a real-time factor or something else?

    Thanks for your awesome project, it helps me a lot!! Looking forward to your reply~~ Best Regards

    question 
    opened by Shoawen0213 11
  • Example on README doesn't work

    Hi,

    The following example on top level README doesn't work:

    from sources import MicrophoneAudioSource
    from functional import FrameWiseModel, ChunkWiseModel, OverlappedSpeechPenalty, EmbeddingNormalization
    
    mic = MicrophoneAudioSource(sample_rate=16000)
    
    # Initialize independent modules
    segmentation = FrameWiseModel("pyannote/segmentation")
    embedding = ChunkWiseModel("pyannote/embedding")
    osp = OverlappedSpeechPenalty(gamma=3, beta=10)
    normalization = EmbeddingNormalization(norm=1)
    
    # Branch the microphone stream to calculate segmentation
    segmentation_stream = mic.stream.pipe(ops.map(segmentation))
    # Join audio and segmentation stream to calculate speaker embeddings
    embedding_stream = rx.zip(mic.stream, segmentation_stream).pipe(
        ops.starmap(lambda wave, seg: (wave, osp(seg))),
        ops.starmap(embedding),
        ops.map(normalization)
    )
    
    embedding_stream.suscribe(on_next=lambda emb: print(emb.shape))
    
    mic.read()
    

    First of all, there is a typo in declaring the variable ops (it's been declared as osp).

    Secondly, in line segmentation_stream = mic.stream.pipe(ops.map(segmentation)), there is no method named map in variable ops.

    Thanks!

    bug documentation 
    opened by abhinavkulkarni 11
  • PortAudio Windows Error and Compatibility on Python3.7

    Hi, @juanmc2005

    Is there a chance that diart can run on Windows, since the PortAudio library is only supported on Linux?

    Also, would it be possible to import Literal from typing_extensions (Python 3.7) instead of typing (Python 3.8)?

    I get this error while running this demo script:

    import rx
    import rx.operators as ops
    import diart.operators as myops
    from diart.sources import MicrophoneAudioSource
    import diart.functional as fn
    
    sample_rate = 16000
    mic = MicrophoneAudioSource(sample_rate)
    

    error : ImportError: cannot import name 'Literal' from 'typing' (/usr/lib/python3.7/typing.py)

    feature 
    opened by Yagna24 9
  • My output is different from expected_outputs

    I tried to use the following command to test the model: python src/main.py path/to/voxconverse_test_wav/aepyx.wav --latency=0.5 --tau=0.576 --rho=0.915 --delta=0.648 --output results/vox_test_0.5s. However, when I compared my output with 'expected_outputs/online/0.5s/VoxConverse.rttm', I found that my output ends early. For example, the last line of my output is SPEAKER aepyx 1 149.953 1.989 <NA> <NA> speaker0 <NA> <NA>, but the last line for aepyx.wav in 'expected_outputs/online/0.5s/VoxConverse.rttm' is SPEAKER aepyx 1 168.063 0.507 <NA> <NA> F <NA> <NA>. They end at different times. I don't know whether the command I entered is wrong or whether there is some other reason.

    bug 
    opened by sablea 9
  • Question about reproduce the result

    Hi! It's me again, sorry for bothering you. I have several questions...

    Q1. I tried to reproduce the results of the paper using the hyper-parameters from the table [image]. I tested them on the AMI and VoxConverse test sets, but the results seem different. The AMI test set has 24 wav files. I wrote a script to run the testing; for each wav file I use the command python -m diart.demo $wavpath --tau=0.507 --rho=0.006 --delta=1.057 --output ./output/AMI_dia_fintetune/. After obtaining each RTTM file, I compute the DER per file with "der = metric(reference, hypothesis)", where the reference RTTM comes from "AMI/MixHeadset.test.rttm". Then I compute "sum of DER" / number of files (24 in this case) and get 0.3408511916216123 (which means 34.085% DER). Am I doing something wrong? I can provide the RTTM or the DER for each wav file. The VoxConverse run is still processing. I'm afraid I misunderstood something, so I'm asking about the problem first...

    BTW, I used pyannote v1.1 to do the same thing and got 0.48516973769490174 as the final DER: # v1.1 import torch; pipeline = torch.hub.load("pyannote/pyannote-audio", "dia"); diarization = pipeline({"audio": "xxx.wav"}). So I'm afraid I did something wrong...

    Q2. At the same time, I have another question [image]. It shows that you tried lots of methods with different latencies. Does python -m diart.demo use the "5.0 latency" configuration, which has the best result in the paper? If so, how do I switch to a different latency for inference, and how is that part trained?

    Again, thanks for your awesome project!! Sorry for all those stupid questions... Looking forward to your reply...

    question 
    opened by Shoawen0213 8
  • Question about Optuna on own pipeline with segmentation model

    Hi @juanmc2005, sorry for bothering you! As the title says, I want to use Optuna to tune the parameters, but the segmentation model used in the pipeline is my own, trained following training_a_model.ipynb and saved in "ckpt" format. Following the "Custom models" part, I tried to define my own segmentation model like this:

    class MySegmentationModel(SegmentationModel):
        def __init__(self):
            super().__init__()
            self.my_pretrained_model = torch.load("./epoch=0-step=69.ckpt")  # <- put my own ckpt here

        def __call__(
            self,
            waveform: torch.Tensor,
            weights: Optional[torch.Tensor] = None
        ) -> torch.Tensor:
            return self.my_pretrained_model(waveform, weights)
    

    Then I redefine the config like config = PipelineConfig(segmentation=MySegmentationModel()) and optimizer = Optimizer(benchmark, config, hparams, p), and I get the error shown below [image].

    Tracing back, it seems to be related to detecting the duration, sample rate, or something similar. Could you tell me how to fix this, or can't I use my own segmentation model with Optuna?

    I want to do this because I guess the best parameters will differ between models, so I want to give it a try.

    Thanks for your awesome work and help!!! expected your response!

    documentation question 
    opened by Shoawen0213 6
  • Crash on `MicrophoneAudioSource.read()`

    I have an issue with the audio access with the example code from the README, probably due to my audio setup:

    >>> mic.read()
    torch.Size([4, 512])
    python: src/hostapi/alsa/pa_linux_alsa.c:3641: PaAlsaStreamComponent_BeginPolling: Assertion `ret == self->nfds' failed.
    Aborted (core dumped)
    

    This is strange because I'm running this in a Docker container and I also use the sounddevice library without problems in another project in the same container.

    Originally posted by @KannebTo in https://github.com/juanmc2005/StreamingSpeakerDiarization/issues/8#issuecomment-949453753

    bug 
    opened by KannebTo 6
  • Creating a websockets based streaming API endpoint for diarization

    Hi Team,

    Thanks for the great work on this project. I tried the model with a few audio samples and it seems to work great!

    I was wondering if there is any interest in developing a streaming websocket-based API endpoint to serve the model output. I went through the code and while I am not familiar with the reactive programming paradigm (RxPY), it looked like with some effort, it should be possible to develop a streaming solution.

    Let me know what you all think.

    Thanks and keep up the great work!

    question 
    opened by abhinavkulkarni 5
  • Question about Table 1

    Hello!! Thanks for sharing!! It's a masterpiece of work! I am just confused about what "oracle segmentation" means in Table 1. I couldn't figure it out; I tried to look it up but failed to find the answer. Thanks!

    question 
    opened by Shoawen0213 4
  • How to fetch audio segments in real time from diarization pipeline

    Hi, first of all thank you for this repo.

    I was wondering if it's possible to use the diarization pipeline to fetch the audio segments of individual speakers. It would help immensely if there is a way to do it.

    question 
    opened by Yagna24 4
  • Client for the websocket server

    Hello,

    I'm trying to use the websocket server for diarization. I implemented a client as it is not provided. I get an output for the diarization but it is always speaker0.

    Here is the js code for my client:

    // Set up the start and stop buttons.
    document.getElementById('start-button').onclick = start;
    document.getElementById('stop-button').onclick = stop;
    
    // Global variables to hold the audio stream and the WebSocket connection.
    var stream;
    var socket;
    
    function b64encode(chunk) {
        // Convert the chunk array to a Float32Array
        const bytes = new Float32Array(chunk).buffer;
    
        // Encode the bytes as a base64 string
        let encoded = btoa(String.fromCharCode.apply(null, new Uint8Array(bytes)));
    
        // Return the encoded string as a UTF-8 encoded string
        return decodeURIComponent(encoded);
    }
    
    
    function start() {
        // Disable the start button.
        document.getElementById('start-button').disabled = true;
        // Enable the stop button.
        document.getElementById('stop-button').disabled = false;
    
        // Get access to the microphone.
        navigator.mediaDevices.getUserMedia({ audio: true }).then(function (s) {
            stream = s;
    
            // Create a new WebSocket connection.
            socket = new WebSocket('ws://localhost:7007');
    
            // When the WebSocket connection is open, start sending the audio data.
            socket.onopen = function () {
                var audioContext = new AudioContext();
                var source = audioContext.createMediaStreamSource(stream);
                var processor = audioContext.createScriptProcessor(1024, 1, 1);
                source.connect(processor);
                processor.connect(audioContext.destination);
    
                processor.onaudioprocess = function (event) {
                    var data = event.inputBuffer.getChannelData(0);
                    // var dataString = String.fromCharCode.apply(null, new Uint16Array(data));
                    console.log("sending")
                    socket.send(b64encode(data));
                };
            };
    
            socket.onmessage = function (e) {
                console.log(e);
            }
        }).catch(function (error) {
            console.error('Error getting microphone input:', error);
        });
    }
    
    function stop() {
        // Disable the stop button.
        document.getElementById('stop-button').disabled = true;
        // Enable the start button.
        document.getElementById('start-button').disabled = false;
    
        // Close the WebSocket connection.
        socket.close();
        // Stop the audio stream.
        stream.getTracks().forEach(function (track) { track.stop(); });
    }
    

    Do you have the code for the client you used? I would like to make sure that I got that right before looking anywhere else.

    Best,

    question 
    opened by funboarder13920 0
  • CLI Refactoring: Use jsonargparse

    Problem

    The implementation of the CLI is a bit messy and mixed with the Python API.

    Idea

    Use jsonargparse to group diart.stream, diart.tune and diart.benchmark into a single diart.py with automatic CLI documentation (i.e. removing argdoc.py). The new API would remove the dot (diart stream instead of diart.stream), --help would be generated from the docstring, and a class Diart could contain the implementation of all three scripts.

    refactoring 
    opened by juanmc2005 0
  • Changes to files not taking effect

    I'm relatively inexperienced when it comes to Python.

    When I make changes to the source code, none of the changes take effect when I run the script.

    I've been trying to find solutions for days and I have had no luck.

    Is it something specific to this project that is preventing changes from being made?

    For instance, I simply tried to change the output of the print statement for the reports that are being printed in the console after the plot window is closed and even the simplest change is not taking effect.

    Any help would be GREATLY appreciated, Thank You!

    unrelated 
    opened by jason-daiuto 1
  • Update gif code snippet with newest API

    The gif in the README is not up to date with the newest changes in the API. It may be better to wait a bit for the API to stabilize before doing this.

    documentation 
    opened by juanmc2005 0
  • Add inference lifecycle hooks

    Problem

    When writing custom models or pipelines, one may want to react to specific inference events, for example before/after benchmarking on a file.

    Idea

    Add classes RealTimeInferenceHook and BenchmarkHook to define listener interfaces. This can also be used to implement other behavior like progress bars, writing results, etc.

    Example

    class MyHook(BenchmarkHook):
        def on_finished(self, results):
            print("Finished!")
    
    benchmark = Benchmark(..., hooks=[MyHook()])
    
    feature 
    opened by juanmc2005 0
Releases (v0.6)
  • v0.6(Oct 31, 2022)

    What's Changed

    • Compatibility with torchaudio streams by @juanmc2005 in https://github.com/juanmc2005/StreamingSpeakerDiarization/pull/91
    • Online speaker diarization as a block by @juanmc2005 in https://github.com/juanmc2005/StreamingSpeakerDiarization/pull/92
    • Fix bug: RTTM output not being patched when closing plot window by @juanmc2005 in https://github.com/juanmc2005/StreamingSpeakerDiarization/pull/100
    • Add cropping_mode to DelayedAggregation by @bhigy in https://github.com/juanmc2005/StreamingSpeakerDiarization/pull/105
    • Compatibility with pyannote.audio 2.1.1 requirements by @juanmc2005 in https://github.com/juanmc2005/StreamingSpeakerDiarization/pull/108

    New Contributors

    • Thank you @bhigy for the bug hunting in https://github.com/juanmc2005/StreamingSpeakerDiarization/pull/105!

    Full Changelog: https://github.com/juanmc2005/StreamingSpeakerDiarization/compare/v0.5.1...v0.6

  • v0.5.1(Aug 31, 2022)

    What's Changed

    • Fix wrong config reference and unpatched annotation by @juanmc2005 in https://github.com/juanmc2005/StreamingSpeakerDiarization/pull/89

    Full Changelog: https://github.com/juanmc2005/StreamingSpeakerDiarization/compare/v0.5...v0.5.1

  • v0.5(Aug 31, 2022)

    What's Changed

    • Add study_or_path as a Path for conversion from string by @AMITKESARI2000 in https://github.com/juanmc2005/StreamingSpeakerDiarization/pull/74
    • Update WebSocketAudioSource by @ckliao-nccu in https://github.com/juanmc2005/StreamingSpeakerDiarization/pull/78
    • Fix bug with empty RTTMs by @zaouk in https://github.com/juanmc2005/StreamingSpeakerDiarization/pull/81
    • Add websocket compatibility + other improvements by @juanmc2005 in https://github.com/juanmc2005/StreamingSpeakerDiarization/pull/77
    • Export csv report in diart.benchmark when output is provided by @juanmc2005 in https://github.com/juanmc2005/StreamingSpeakerDiarization/pull/86

    New Contributors

    • @AMITKESARI2000 made their first contribution in https://github.com/juanmc2005/StreamingSpeakerDiarization/pull/74
    • @ckliao-nccu made their first contribution in https://github.com/juanmc2005/StreamingSpeakerDiarization/pull/78

    Acknowledgements

    Thank you @AMITKESARI2000, @ckliao-nccu and @zaouk for all the bug fixes!

    Full Changelog: https://github.com/juanmc2005/StreamingSpeakerDiarization/compare/v0.4...v0.5

  • v0.4(Jul 13, 2022)

    What's Changed

    • Replace resolve_features with TemporalFeatureFormatter by @juanmc2005 in https://github.com/juanmc2005/StreamingSpeakerDiarization/pull/59
    • Make pyannote.audio optional (still mandatory to run default pipeline) by @juanmc2005 in https://github.com/juanmc2005/StreamingSpeakerDiarization/pull/61
    • Minor features and improvements by @juanmc2005 in https://github.com/juanmc2005/StreamingSpeakerDiarization/pull/64
    • Adds documentation for some of the classes and methods by @zaouk in https://github.com/juanmc2005/StreamingSpeakerDiarization/pull/31
    • Add hyper-parameter tuning with optuna by @juanmc2005 in https://github.com/juanmc2005/StreamingSpeakerDiarization/pull/65

    New Contributors

    • Thank you @zaouk for your contribution in https://github.com/juanmc2005/StreamingSpeakerDiarization/pull/31 !

    Full Changelog: https://github.com/juanmc2005/StreamingSpeakerDiarization/compare/v0.3...v0.4

  • v0.3(May 20, 2022)

    What's Changed

    • Python 3.7 compatibility and PortAudio error fix by @Yagna24 in #29
    • Add citation by @hbredin in #38
    • Benchmark script + improvements and bug fixes by @juanmc2005 in #46
    • Improve API names by @juanmc2005 in #47
    • Add OverlapAwareSpeakerEmbedding class by @juanmc2005 in #51
    • Add inference API with RealTimeInference and Benchmark by @juanmc2005 in #55

    New Contributors

    • Thank you @Yagna24 for your contribution in python 3.7 compatibility!

    Full Changelog: https://github.com/juanmc2005/StreamingSpeakerDiarization/compare/v0.2.1...v0.3

  • v0.2.1(Jan 26, 2022)

    What's Changed

    • Fix empty segment in buffer_output causing a crash by @juanmc2005 in https://github.com/juanmc2005/StreamingSpeakerDiarization/pull/24

    Full Changelog: https://github.com/juanmc2005/StreamingSpeakerDiarization/compare/v0.2...v0.2.1

  • v0.2(Jan 7, 2022)

    What's Changed

    • Replace operators.aggregate() with functional.DelayedAggregation by @juanmc2005 in https://github.com/juanmc2005/StreamingSpeakerDiarization/pull/16
    • Add Hamming-weighted average to DelayedAggregation by @juanmc2005 in https://github.com/juanmc2005/StreamingSpeakerDiarization/pull/18
    • Asynchronous microphone reading by @juanmc2005 in https://github.com/juanmc2005/StreamingSpeakerDiarization/pull/19
    • Modular OutputBuilder + better demo performance by @juanmc2005 in https://github.com/juanmc2005/StreamingSpeakerDiarization/pull/20
    • Improve README by @juanmc2005 in https://github.com/juanmc2005/StreamingSpeakerDiarization/pull/21

    Full Changelog: https://github.com/juanmc2005/StreamingSpeakerDiarization/compare/v0.1...v0.2

  • v0.1(Dec 15, 2021)

Owner

Juanma Coria
PhD Student working on Continual Representation Learning