
UIS-RNN


Overview

This is the library for the Unbounded Interleaved-State Recurrent Neural Network (UIS-RNN) algorithm. UIS-RNN solves the problem of segmenting and clustering sequential data by learning from examples.

This algorithm was originally proposed in the paper Fully Supervised Speaker Diarization.

The work has been introduced by Google AI Blog.


Disclaimer

This open source implementation is slightly different than the internal one which we used to produce the results in the paper, due to dependencies on some internal libraries.

We CANNOT share the data, code, or model for the speaker recognition system (d-vector embeddings) used in the paper, since the speaker recognition system heavily depends on Google's internal infrastructure and proprietary data.

This library is NOT an official Google product.

We welcome community contributions (guidelines) to the uisrnn/contrib folder. But we won't be responsible for the correctness of any community contributions.

Dependencies

This library depends on:

  • python 3.5+
  • numpy 1.15.1
  • pytorch 1.3.0
  • scipy 1.1.0 (for evaluation only)

Getting Started

YouTube

Install the package

Without downloading the repository, you can install the package by:

pip3 install uisrnn

or

python3 -m pip install uisrnn

Run the demo

To get started, simply run this command:

python3 demo.py --train_iteration=1000 -l=0.001

This will train a UIS-RNN model using data/toy_training_data.npz, then store the model on disk, perform inference on data/toy_testing_data.npz, print the inference results, and save the averaged accuracy in a text file.

PS: The files under data/ are manually generated toy data, for demonstration purposes only. The data are very simple, so you should expect 100% accuracy on the testing data.

Run the tests

You can also verify the correctness of this library by running:

bash run_tests.sh

If you fork this library and make local changes, be sure to use these tests as a sanity check.

These tests are also great examples for learning the APIs, especially tests/integration_test.py.

Core APIs

Glossary

General Machine Learning | Speaker Diarization
------------------------ | ---------------------
Sequence                 | Utterance
Observation / Feature    | Embedding / d-vector
Label / Cluster ID       | Speaker

Arguments

In your main script, call this function to get the arguments:

model_args, training_args, inference_args = uisrnn.parse_arguments()

Model construction

All algorithms are implemented as the UISRNN class. First, construct a UISRNN object by:

model = uisrnn.UISRNN(args)

The definitions of the args are described in uisrnn/arguments.py. See model_parser.

Training

Next, train the model by calling the fit() function:

model.fit(train_sequences, train_cluster_ids, args)

The definitions of the args are described in uisrnn/arguments.py. See training_parser.

The fit() function accepts two types of input, as described below.

Input as list of sequences (recommended)

Here, train_sequences is a list of observation sequences. Each observation sequence is a 2-dim numpy array of type float.

  • The first dimension is the length of this sequence; the length can vary from one sequence to another.
  • The second dimension is the size of each observation. This must be consistent among all sequences. For speaker diarization, the observation could be the d-vector embeddings.

train_cluster_ids is also a list, which has the same length as train_sequences. Each element of train_cluster_ids is a 1-dim list or numpy array of strings, containing the ground truth labels for the corresponding sequence in train_sequences. For speaker diarization, these labels are the speaker identifiers for each observation.
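
For illustration, here is a minimal sketch of this calling convention, assuming a 256-dimensional observation; the dimensions, labels, and iteration count below are made up for the example:

import numpy as np
import uisrnn

model_args, training_args, _ = uisrnn.parse_arguments()
model_args.observation_dim = 256  # must match the second dimension of each sequence
model = uisrnn.UISRNN(model_args)

# Two toy utterances, each a (length, observation_dim) float numpy array.
train_sequences = [
    np.random.rand(100, 256),  # utterance 1: 100 observations
    np.random.rand(80, 256),   # utterance 2: 80 observations
]
# One string label per observation, aligned with the sequences above.
train_cluster_ids = [
    np.array(['spk_A'] * 60 + ['spk_B'] * 40),
    np.array(['spk_B'] * 30 + ['spk_C'] * 50),
]

training_args.train_iteration = 100  # kept small for this toy sketch
model.fit(train_sequences, train_cluster_ids, training_args)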

When calling fit() in this way, please be very careful with the argument --enforce_cluster_id_uniqueness.

For example, assume:

train_cluster_ids = [['a', 'b'], ['a', 'c']]

If the label 'a' from the two sequences refers to the same cluster across the entire dataset, then we should have enforce_cluster_id_uniqueness=False; otherwise, if 'a' is only a local indicator to distinguish from 'b' in the 1st sequence, and to distinguish from 'c' in the 2nd sequence, then we should have enforce_cluster_id_uniqueness=True.

Also, please note that, when calling fit() in this way, we are going to concatenate all sequences and all cluster IDs, and delegate to the next section below.

Input as single concatenated sequence

Here, train_sequences should be a single 2-dim numpy array of type float, for the concatenated observation sequences.

For example, if you have M training utterances, each of which is a sequence of L embeddings, and each embedding is a vector of D numbers, then the shape of train_sequences is N * D, where N = M * L.

train_cluster_ids is a 1-dim list or numpy array of strings, of length N. It is the concatenated ground truth labels of all training data.

Since we are concatenating observation sequences, it is important to note that ground truth labels in train_cluster_ids across different sequences must be globally unique.

For example, if the set of labels in the first sequence is {'A', 'B', 'C'}, and the set of labels in the second sequence is {'B', 'C', 'D'}, then before concatenation we should rename them to something like {'1_A', '1_B', '1_C'} and {'2_B', '2_C', '2_D'}, unless 'B' and 'C' in the two sequences are meaningfully identical (in speaker diarization, this means they are the same speakers across utterances). This part is automatically taken care of by the argument --enforce_cluster_id_uniqueness for the list-of-sequences input described in the previous section.
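
As a rough sketch of the bookkeeping described above (only needed if you build the concatenated input yourself; the prefixes and dimensions here are illustrative):

import numpy as np

# Two toy sequences of embeddings and their local labels.
seq1 = np.random.rand(3, 256)
seq2 = np.random.rand(3, 256)
ids1 = ['A', 'B', 'C']
ids2 = ['B', 'C', 'D']

# Prefix each label with its sequence index, so 'B' in the first sequence and
# 'B' in the second sequence become different clusters. Skip this step if they
# really are the same speaker across utterances.
unique_ids1 = ['1_' + x for x in ids1]
unique_ids2 = ['2_' + x for x in ids2]

# One (N, D) array with N = 3 + 3, and one length-N label array.
train_sequence = np.concatenate([seq1, seq2], axis=0)
train_cluster_id = np.concatenate([unique_ids1, unique_ids2])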

The reason we concatenate all training sequences is that we resample and block-wise shuffle the training data as a data augmentation process, so that we obtain a robust model even when the number of training sequences is insufficient.

Training on large datasets

For large datasets, the data usually cannot be loaded into memory at once. In such cases, the fit() function needs to be called multiple times.

Here we provide a few guidelines as our suggestions, followed by a rough sketch of this pattern:

  1. Do not feed different datasets into different calls of fit(). Instead, for each call of fit(), the input should cover sequences from different datasets.
  2. For each call to the fit() function, make the size of the input roughly the same, and don't make the input size too small.
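
A minimal sketch of this multi-call training pattern; the chunk loader below is hypothetical and stands in for whatever storage format you use:

import uisrnn

model_args, training_args, _ = uisrnn.parse_arguments()
model = uisrnn.UISRNN(model_args)

def load_chunk(chunk_index):
  """Hypothetical loader returning (train_sequences, train_cluster_ids) for one
  chunk. Each chunk should be roughly the same size and should mix sequences
  drawn from all of your datasets."""
  raise NotImplementedError

num_chunks = 10  # illustrative
for i in range(num_chunks):
  train_sequences, train_cluster_ids = load_chunk(i)
  model.fit(train_sequences, train_cluster_ids, training_args)

model.save('saved_uisrnn_model.uisrnn')  # file name is arbitrary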

Prediction

Once we are done with training, we can run the trained model to perform inference on new sequences by calling the predict() function:

predicted_cluster_ids = model.predict(test_sequences, args)

Here test_sequences should be a list of 2-dim numpy arrays of type float, corresponding to the observation sequences for testing.

The returned predicted_cluster_ids is a list of the same size as test_sequences. Each element of predicted_cluster_ids is a list of integers, with the same length as the corresponding test sequence.

You can also use a single test sequence for test_sequences. Then the returned predicted_cluster_ids will also be a single list of integers.

The definitions of the args are described in uisrnn/arguments.py. See inference_parser.
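
For illustration, a minimal end-to-end prediction sketch; the model path and test data are made up, and the loaded model must have been trained with a matching observation_dim:

import numpy as np
import uisrnn

model_args, _, inference_args = uisrnn.parse_arguments()
model = uisrnn.UISRNN(model_args)
model.load('saved_uisrnn_model.uisrnn')  # path is illustrative

# A list with a single toy test utterance of 50 observations.
test_sequences = [np.random.rand(50, 256)]
predicted_cluster_ids = model.predict(test_sequences, inference_args)

# One list of integer labels per test sequence.
for ids in predicted_cluster_ids:
  print(ids)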

Citations

Our paper is cited as:

@inproceedings{zhang2019fully,
  title={Fully supervised speaker diarization},
  author={Zhang, Aonan and Wang, Quan and Zhu, Zhenyao and Paisley, John and Wang, Chong},
  booktitle={International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={6301--6305},
  year={2019},
  organization={IEEE}
}

References

Baseline diarization system

To learn more about our baseline diarization system based on unsupervised clustering algorithms, check out this site.

A Python re-implementation of the spectral clustering algorithm used in this paper is available here.

The ground truth labels for the NIST SRE 2000 dataset (Disk6 and Disk8) can be found here.

For more public resources on speaker diarization, check out awesome-diarization.

Speaker recognizer/encoder

To learn more about our speaker embedding system, check out this site.

We are aware of several third-party implementations of this work:

Please use your own judgement to decide whether you want to use these implementations.

We are NOT responsible for the correctness of any third-party implementations.

Variants

Here we list repositories that are based on UIS-RNN but integrate other technologies or add some improvements.

Link                         | Description
---------------------------- | -----------
taylorlu/Speaker-Diarization | Speaker diarization using UIS-RNN and GhostVLAD. An easier way to support open-set speakers.
DonkeyShot21/uis-rnn-sml     | A variant of UIS-RNN, for the paper Supervised Online Diarization with Sample Mean Loss for Multi-Domain Data.
Comments
  • Model predicts new cluster for each input after calling load()

    Hi,

    I've loaded the saved model trained on a custom dataset using model.load. When I'm predicting the test set with model.predict, instead of labels I'm getting a sequence of numbers, which looks like it starts from the length of the sequences passed to predict. Following is a screenshot for your reference.

    image

    Thank you in advance.

    bug question 
    opened by dalonlobo 15
  • [Bug] Incorrect estimation of transition_bias

    The way you estimate the transition_bias parameter seems illogical and incorrect to me. The parameter is calculated like this in uisrnn.utils.resize_sequence:

    transit_num = 0
    for entry in range(len(cluster_id) - 1):
      transit_num += (cluster_id[entry] != cluster_id[entry + 1])
    bias_denominator = len(cluster_id)
    bias = (transit_num + 1) / bias_denominator
    

    So you look at how many speaker changes there are in the concatenated sequence and divide by the number of segments. Apart from the fact that you should divide by the number of transitions between segments (len(cluster_id) - 1) and not by the number of segments, it seems to me that your code also counts as a speaker change the transition between speakers that belong to different sequences. This error arises from the fact that you concatenate the sequences and only later calculate transition_bias, while, actually, you should do the opposite, I reckon.

    To make it clearer, here is an example. Given:

    seq1 = spk1 | spk2 | spk3
    seq2 = spk2 | spk3 | spk3

    How it should be:

    n_speaker_changes = 3
    n_transitions = 4
    transition_bias = 3/4

    Your calculation:

    concatenated_sequence = spk1 | spk2 | spk3 | spk2 | spk3 | spk3
    n_speaker_changes = 4
    n_segments = 6
    transition_bias = 2/3

    This is not only a problem with toy examples. Imagine you have 1000 short sequences as train data, at the numerator you add a very big number n_sequences - 1 = 999. This can, in my opinion, hurt performance in some particular cases.

    So the estimation you provide in the code is different from the one in the paper. Not considering enforce_cluster_id_uniqueness, your estimation is something like: (speaker_changes + n_sequences - 1) / n_segments. According to the paper, it should look more like: speaker_changes / (n_segments - n_sequences).
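
    A minimal sketch of the per-sequence estimation I am suggesting (illustrative only, not the library's actual code):

    def estimate_transition_bias(cluster_ids):
      """Estimate transition_bias from a list of per-sequence label lists,
      counting speaker changes only within each sequence."""
      speaker_changes = 0
      transitions = 0
      for cluster_id in cluster_ids:
        for entry in range(len(cluster_id) - 1):
          speaker_changes += (cluster_id[entry] != cluster_id[entry + 1])
        transitions += len(cluster_id) - 1
      return speaker_changes / transitions

    # seq1 = spk1|spk2|spk3, seq2 = spk2|spk3|spk3  ->  3 changes / 4 transitions = 0.75
    print(estimate_transition_bias([['spk1', 'spk2', 'spk3'], ['spk2', 'spk3', 'spk3']]))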

    Also, there is no constraint on when you calculate it. It can be computed before calling fit_concatenated in the fit function. In the unlikely case that fit_concatenated is called directly by the user, there is no way to calculate it, so maybe there could be a check requiring the user to provide transition_bias beforehand.

    Sorry if I misunderstood something or got it all wrong. I'm here to help.

    P.S.1: Why don't you estimate crp_alpha the same way you do with transition_bias? Just because it is not possible in fit_concatenated? Estimating both parameters in the fit function seems reasonable.
    P.S.2: It feels a bit inelegant and misleading to calculate transition_bias inside a function called resize_sequence.

    bug 
    opened by DonkeyShot21 13
  • Batch prediction? - or allow prediction using multiprocessing

    Describe the question

    Documentation states that one can only apply prediction one sequence at a time.

    On my machine (with GPU), it takes more than 10s to process one sequence with 100 samples. It would be nice to support batch prediction to make processing large collections of sequences faster. Beam search probably makes it impossible, though.

    Anyway, thanks for open-sourcing this implementation. This is really appreciated!

    My background

    Have I read the README.md file?

    • yes

    Have I searched for similar questions from closed issues?

    • yes

    Have I tried to find the answers in the paper Fully Supervised Speaker Diarization?

    • yes

    Have I tried to find the answers in the reference Speaker Diarization with LSTM?

    • not applicable

    Have I tried to find the answers in the reference Generalized End-to-End Loss for Speaker Verification?

    • not applicable
    enhancement question API 
    opened by hbredin 12
  • run_test.sh problem

    Hi, I ran demo.py twice. The first time it worked well, and its accuracy was 1. But when I deleted the model and tried again, it still runs, but the result is only about 0.8. I'm sure I didn't change the program. I have tried deleting the whole program and cloning it again; the result is still about 0.8. Then I ran run_tests.sh and got an error:

    ======================================================================
    FAIL: test_four_clusters (__main__.TestIntegration)
    Four clusters on vertices of a square.
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "tests/integration_test.py", line 99, in test_four_clusters
        self.assertEqual(1.0, accuracy)
    AssertionError: 1.0 != 0.9
    ----------------------------------------------------------------------
    Ran 1 test in 17.543s

    FAILED (failures=1)

    There must be something strange happening. Could anyone tell me what could lead to this? Thanks.

    bug 
    opened by 77281900000 11
  • [Question] Are input d-vectors for training assumed L2-normalized?

    Are input d-vectors for training assumed L2-normalized?

    In Generalized End-to-End Loss for Speaker Verification they are defined as L2-normalized in eq. 4.

    In sample toy_training_data.npz, they are also L2-normalized:

    cd uis-rnn
    python3 -c 'import numpy; train_data = numpy.load("./data/toy_training_data.npz", allow_pickle=True); print((train_data["train_sequence"] ** 2).sum(axis=1))'
    # [1. 1. 1. ... 1. 1. 1.]
    

    But eq. 11 from Fully Supervised Speaker Diarization models segment speaker embeddings as normally-distributed vectors and does not assume unit-length explicitly (if it did, maybe a von Mises–Fisher distribution would be a better distribution).

    Thank you!

    question 
    opened by vadimkantorov 8
  • add crp_alpha support

    I've noticed that the training args accept a given value of crp_alpha, and there have been issues about adding support for estimating crp_alpha.

    I've added a script which accepts the train_sequence and train_cluster_id loaded from './data/toy_training_data.npz', iterates through a search range, and returns the best crp_alpha value estimated from the training data.

    The script is pretty simple actually. The steps are as follows:

    1. Iterate through a range of alpha
    2. Iterate through all training samples
    3. Calculate p(y|z) for each sample
    4. Return the alpha with highest p(y|z)

    I hope this script will help some people.

    enhancement 
    opened by aluminumbox 8
  • about the training loss and the batch size

    I want to know whether the loss below is normal or not. I set the batch size to 10; then, no matter how I change the dataset, the converged loss is about 900. (screenshot)

    My background

    Have I read the README.md file?

    • yes

    Have I searched for similar questions from closed issues?

    • yes

    Have I tried to find the answers in the paper Fully Supervised Speaker Diarization?

    • yes

    Have I tried to find the answers in the reference Speaker Diarization with LSTM?

    • yes

    Have I tried to find the answers in the reference Generalized End-to-End Loss for Speaker Verification?

    • yes
    question 
    opened by simpleishappy 8
  • [Question] About UIS-RNN d-vector

    Describe the question

    Hi, I have been working on this issue for almost a month. I finally managed to get a good EER when training the LSTM, and I am now training UIS-RNN.

    I have a question about the d-vector.

    So here the specification for your training is:

    • sampling rate: 16K
    • mel-transform: 25ms window, 10ms hop length
    • training LSTM: 140-180 frames, let's say we fix it to 160 frames
    • training UIS-RNN: 400ms segment-level d-vector

    So 25ms of audio becomes 1 frame of mel-spectrum, and 160 frames is roughly 1.6s (with hop), which means that for training the LSTM we are actually feeding 1.6s of audio into the LSTM.

    However, for UIS-RNN, you mention we need to use VAD to segment the audio into segments of at most 400ms.

    So a 400ms audio segment, after the mel-transform with 25ms window and 10ms hop length, will only give you around 40 frames. How can we generate multiple d-vectors from these 40 frames in order to get a segment-level d-vector? (Because you mention we need to L2-normalize and average them.)

    I really appreciate that you can give me some instruction on this point.

    My background

    Have I read the README.md file?

    • yes

    Have I searched for similar questions from closed issues?

    • yes

    Have I tried to find the answers in the paper Fully Supervised Speaker Diarization?

    • yes, I read the paper again and again

    Have I tried to find the answers in the reference Speaker Diarization with LSTM?

    • yes, I read the paper again and again

    Have I tried to find the answers in the reference Generalized End-to-End Loss for Speaker Verification?

    • yes, I read the paper again and again
    question 
    opened by BarCodeReader 7
  • Is the GRU really needed to predict mu_t ?

    I spent some time trying to figure out what the GRU really does. My understanding is that it is used to estimate the running mean (mu_t in the paper) of each cluster.

    I can see the benefit of an RNN for this (it can learn not to take some noisy samples into account), but I am wondering whether you had the chance to compare it to an actual running mean.

    question 
    opened by hbredin 7
  • The clustering performance influenced by overlap window size

    @wq2012 The overlap rate seems to strongly influence the number of speakers. When the overlap is larger, the speaker embedding changes more smoothly and the change points are harder to detect, so it tends to generate fewer speakers. The size of the sliding window also matters a lot, although this problem is caused by the speaker embedding algorithm. This is my project, which integrates the vgg-speaker-recognition algorithm: Speaker-Diarization. Thanks a lot.

    question 
    opened by taylorlu 7
  • corrected transition_bias estimation, fixes #55

    This pull request fixes issue #55.

    What is in this pull request:

    • new function uisrnn.utils.estimate_transition_bias that calculates transition_bias correctly
    • removed the transition_bias estimation snippet from uisrnn.utils.resize_sequence. Now resize_sequence only returns sub_sequences and seq_lengths
    • moved transition_bias update (for multiple calls to fit) from fit_concatenated to fit
    • added a warning message for when train_sequences is already concatenated and transition_bias has not been passed as an argument, saying that the estimation can be incorrect
    • edited a few test scripts to adapt to the new shape of the data returned from uisrnn.utils.resize_sequence

    I also wanted to add a leading _ to fit_concatenated to make it private, so that the user is discouraged from calling it, but I didn't because maybe it has some other dependency on some internal code of yours.

    opened by DonkeyShot21 6
  • assign gpu with arguments

    Describe the question

    If I enable the GPU with args.enable_cuda=True, by default it uses 'cuda:0', but I have multiple GPUs on my PC, so I think we could add an argument or use the CUDA_VISIBLE_DEVICES environment variable to choose one manually.

    My background

    Have I read the README.md file?

    • yes

    Have I searched for similar questions from closed issues?

    • yes

    Have I tried to find the answers in the paper Fully Supervised Speaker Diarization?

    • yes

    Have I tried to find the answers in the reference Speaker Diarization with LSTM?

    • yes

    Have I tried to find the answers in the reference Generalized End-to-End Loss for Speaker Verification?

    • yes
    question 
    opened by pthavarasa 0
  • [Bug] Making a prediction on CPU after training on GPU

    Describe the bug

    Making a prediction on CPU after training on GPU, I get a runtime error.

    To Reproduce

    1. Train UIS-RNN on GPU.
    2. Predict on a CPU-only machine.

    Commands and arguments

    Default arguments.

    Logs

    Traceback (most recent call last):
      File ".\speakerDiarization.py", line 207, in <module>
        main(r'wavs/rmdmy.wav', embedding_per_second=1.2, overlap_rate=0.4)
      File ".\speakerDiarization.py", line 158, in main
        uisrnnModel.load(SAVED_MODEL_NAME)
      File "C:\Users\Prasanth\Desktop\VScode\stage\Speaker-Diarization\uisrnn\uisrnn.py", line 151, in load
        var_dict = torch.load(filepath)
      File "C:\Users\Prasanth\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\serialization.py", line 607, in load
        return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
      File "C:\Users\Prasanth\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\serialization.py", line 882, in _load
        result = unpickler.load()
      File "C:\Users\Prasanth\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\serialization.py", line 857, in persistent_load
        load_tensor(data_type, size, key, _maybe_decode_ascii(location))
      File "C:\Users\Prasanth\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\serialization.py", line 846, in load_tensor
        loaded_storages[key] = restore_location(storage, location)
      File "C:\Users\Prasanth\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\serialization.py", line 175, in default_restore_location
        result = fn(storage, location)
      File "C:\Users\Prasanth\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\serialization.py", line 151, in _cuda_deserialize
        device = validate_cuda_device(location)
      File "C:\Users\Prasanth\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\serialization.py", line 135, in validate_cuda_device
        raise RuntimeError('Attempting to deserialize object on a CUDA '
    RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
    

    Versions

    • uis-rnn git HEAD: 0.1.0
    • numpy: 1.18.5
    • scipy: 1.7.3
    • torch: 1.10.2
    bug 
    opened by pthavarasa 0
  • Any documentations on training from scratch using custom data in other languages ?

    I have a speaker diarization dataset in Vietnamese where, in every audio file, segments of speakers are already annotated. How should I prepare and process the data to be able to train UIS-RNN on my custom data?

    question 
    opened by thangld201 1
  • Question about custom data generator

    I see that previously you answered that "for big amount of data you can fit model several times" (#8). But I haven't worked with PyTorch before and don't know how this should work: how to pass info about losses and gradients for different parts of the dataset. That's why I want to ask whether your library has the ability to fit with a custom data generator (like fit_generator in Keras). Or maybe you can tell me where I can find an example for such a case.

    This is what my class for the data looks like (previously I saved different parts of the data in "data.npz"):

    import numpy as np
    from torch.utils.data import Dataset, DataLoader
    class Data(Dataset):
        def __init__(self, set, batch_size=32, shuffle=True):
            self.data = np.load("data.npz", allow_pickle=True)
            self.set = set
            self.batch_size = batch_size
            self.shuffle = shuffle
            self.on_epoch_end()
    
        def __len__(self):
            return int(np.floor(len(self.indexes) / self.batch_size))
    
        def __getitem__(self, index):
            temp_indexes = self.indexes[index * self.batch_size:(index + 1) * self.batch_size]
    
            sequences_batch, clusters_batch = self.__data_generation(temp_indexes)
    
            return sequences_batch, clusters_batch
    
        def on_epoch_end(self):
            self.indexes = np.arange(self.data[f'{self.set}_sequence'].shape[0])
            if self.shuffle:
                np.random.shuffle(self.indexes)
    
        def __data_generation(self, temp_indexes):
            sequences = self.data[f'{self.set}_sequence'][temp_indexes]
            clusters = self.data[f'{self.set}_cluster_id'][temp_indexes]
            sequences = [seq.astype(float) + 0.00001 for seq in sequences]
            clusters = [np.array(cid).astype(str) for cid in clusters]
    
            return sequences, clusters
    

    And this is how I create generator:

    train_set = Data("train", 32, True)
    train_generator = DataLoader(train_set)
    

    P.S.: I'll be happy to receive any help, because I'm not even sure that I'm going in the right direction.

    question 
    opened by hontiky 0
  • [Bug] Predict method does not finish

    Describe the bug

    After training the UIS-RNN algorithm, I call the predict() method with the test_sequences. In this case, the test sequence is a 2-d numpy array, not a list. Somehow the method gets stuck and never finishes. I checked the input shape and according to the method doc it should work fine.

    Maybe I am doing something wrong?

    To Reproduce

    Commands and arguments

    import numpy as np
    import uisrnn
    
    SAVED_MODEL_NAME = 'saved_model.uisrnn'
    
    
    model_args, training_args, inference_args = uisrnn.parse_arguments()
    model = uisrnn.UISRNN(model_args)
    model.load(SAVED_MODEL_NAME)
    test_sequence = np.load('./data/test_sequence.npy', allow_pickle=True)
    predicted_cluster_id = model.predict(test_sequence, inference_args)
    

    Logs

    Nothing printed.

    Serialized models

    https://drive.google.com/open?id=1N4JwQQv27Xap-UlvHCNARTiKGT3356AX

    Data samples

    Data
    https://www.kaggle.com/mfekadu/darpa-timit-acousticphonetic-continuous-speech
    Embeddings
    https://drive.google.com/open?id=1N4JwQQv27Xap-UlvHCNARTiKGT3356AX
    
    

    Versions

    • uis-rnn: 0.1.0
    • numpy: 1.18.0
    • scipy: 1.4.1
    • torch: 1.3.1

    Additional context

    Debugging the code, I see that everything goes well until this point: https://github.com/google/uis-rnn/blob/cb3a9c6764b3ce40e1327d524eea5e1568e884b7/uisrnn/uisrnn.py#L549 The statement after the for loop finishes is never hit. I am using this implementation to generate the embeddings: https://github.com/HarryVolek/PyTorch_Speaker_Verification

    bug 
    opened by ArlindKadra 3
  • uis-rnn can't work for long utterances dataset?

    Describe the question

    In the diarization task, I train on the AMI train-dev set and the ICSI corpus, and I test on the AMI test set. Both datasets include audios of 3-5 speakers lasting 50-70 minutes. My d-vector embedding is trained on VoxCeleb 1 & 2 with EER = 4.55%. I train UIS-RNN with window size .24ms, overlap 50%, segment size .4ms. The result is poor on both the train and test sets. I have also read all your UIS-RNN code, and I don't understand: 1) why do you split up the original utterances, concatenate them by speaker, and then use that as input for training? 2) why does the input ignore which audio an utterance belongs to, merging all utterances into one single audio? This process seems completely different from the inference process, and it also reduces the ability to use larger batch sizes if one speaker talks too much. For a 1-hour audio, the output has 20-30 speakers instead of 3-5 speakers, no matter how small crp_alpha is.

    My background

    Have I read the README.md file?

    • yes

    Have I searched for similar questions from closed issues?

    • yes

    Have I tried to find the answers in the paper Fully Supervised Speaker Diarization?

    • yes

    Have I tried to find the answers in the reference Speaker Diarization with LSTM?

    • yes

    Have I tried to find the answers in the reference Generalized End-to-End Loss for Speaker Verification?

    • yes
    question 
    opened by wrongbattery 19