SC-GlowTTS: an Efficient Zero-Shot Multi-Speaker Text-To-Speech Model

Overview

Edresson Casanova, Christopher Shulby, Eren Gölge, Nicolas Michael Müller, Frederico Santos de Oliveira, Arnaldo Candido Junior, Anderson da Silva Soares, Sandra Maria Aluisio, Moacir Antonelli Ponti

In our recent paper, we propose SC-GlowTTS: an efficient zero-shot multi-speaker text-to-speech model that improves similarity for speakers unseen during training. We propose a speaker-conditional architecture that explores a flow-based decoder and works in a zero-shot scenario. As text encoders, we explore a dilated residual convolutional encoder, a gated convolutional encoder, and a transformer-based encoder. Additionally, we show that adjusting a GAN-based vocoder to the spectrograms predicted by the TTS model on the training dataset significantly improves similarity and speech quality for new speakers. Finally, we show that our model converges in training with only 11 speakers, reaching state-of-the-art results for similarity with new speakers, as well as high speech quality.
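
To make the speaker-conditional flow concrete, below is a minimal PyTorch sketch of an affine coupling layer whose scale-and-shift network also sees an external speaker embedding. This illustrates the mechanism only; kernel sizes, hidden widths, and wiring are assumptions, not the exact SC-GlowTTS layer.

```python
# Minimal sketch (not the exact SC-GlowTTS layer): an affine coupling
# step of a flow-based decoder conditioned on a speaker embedding.
import torch
import torch.nn as nn

class SpeakerConditionalCoupling(nn.Module):
    def __init__(self, channels: int, speaker_dim: int, hidden: int = 192):
        super().__init__()
        half = channels // 2
        # The speaker embedding is concatenated (tiled over time) with one
        # half of the input before predicting the affine scale and shift.
        self.net = nn.Sequential(
            nn.Conv1d(half + speaker_dim, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(hidden, 2 * half, kernel_size=5, padding=2),
        )

    def forward(self, z: torch.Tensor, spk: torch.Tensor):
        # z: (batch, channels, time); spk: (batch, speaker_dim)
        z_a, z_b = z.chunk(2, dim=1)
        spk_tiled = spk.unsqueeze(-1).expand(-1, -1, z.size(-1))
        log_s, t = self.net(torch.cat([z_a, spk_tiled], dim=1)).chunk(2, dim=1)
        z_b = z_b * torch.exp(log_s) + t   # invertible affine transform
        logdet = log_s.sum(dim=(1, 2))     # contribution to the flow likelihood
        return torch.cat([z_a, z_b], dim=1), logdet
```

At inference time, the embedding of an unseen speaker, extracted by the external speaker encoder, is fed to the conditioned coupling layers; this is what enables zero-shot synthesis without retraining.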

Audio samples

Visit our website for audio samples.

Implementation

All of our experiments were implemented in Coqui TTS.
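
For orientation, here is a hedged sketch of a zero-shot synthesis call through Coqui TTS's high-level Python API. The model name is a placeholder, not a verified model-zoo entry (list real ones with `tts --list_models`); the experiments in this work were run from the Colab notebooks linked below rather than through this API.

```python
# Hedged sketch: zero-shot synthesis via Coqui TTS's Python API.
# MODEL_NAME is a placeholder, not a verified model-zoo entry.
from TTS.api import TTS

MODEL_NAME = "tts_models/en/vctk/sc-glow-tts"  # placeholder; check `tts --list_models`

tts = TTS(model_name=MODEL_NAME)
tts.tts_to_file(
    text="Zero-shot synthesis for a speaker unseen during training.",
    speaker_wav="reference_speaker.wav",  # a few seconds of the target voice
    file_path="output.wav",
)
```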

Checkpoints

| Model | URL |
| --- | --- |
| Speaker Encoder (by @mueller91) | link |
| Tacotron 2 | link |
| SC-GlowTTS-Trans | link |
| SC-GlowTTS-Res | link |
| SC-GlowTTS-Gated | link |
| SC-GlowTTS-Trans (11 speakers) | link |
| HiFi-GAN | link |
| All checkpoints | link |

Colab demos

- SC-GlowTTS-Trans
- SC-GlowTTS-Res
- SC-GlowTTS-Gated
- SC-GlowTTS-Trans trained with 11 speakers

Preprocessed datasets

VCTK with silences removed
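
The exact trimming procedure is not documented here; a minimal sketch of removing leading and trailing silences from a VCTK-style wav with librosa (the top_db threshold is an assumption, not the authors' documented setting) could look like:

```python
# Hedged sketch of silence trimming for a VCTK-style wav file;
# top_db=30 is an assumed threshold, not the authors' setting.
import librosa
import soundfile as sf

y, sr = librosa.load("p225_001.wav", sr=None)      # keep the native sample rate
y_trimmed, _ = librosa.effects.trim(y, top_db=30)  # drop leading/trailing silence
sf.write("p225_001_trimmed.wav", y_trimmed, sr)
```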

MOS details

- MOS sentences
- MOS samples


Comments
  • Training the model

    Hello! I am trying to train SC-GlowTTS models and cannot figure out the proper way to do this. I tried launching the train_glow_tts.py script from coqui-ai/TTS with the config and commit hash from the Colab notebooks, but this generates errors, and I am not sure this is the correct way to do it. I also tried using the configs from Coqui with the latest version, but this does not work either. Could you please explain the intended steps to reproduce your experiments? :)

    opened by The0nix 3
  • Wav inference doesn't work on Colab

    Wav inference doesn't work on Colab

    Hi, I've been studying this paper and most of your work; I find it very insightful and promising. The problem I have is that when I try to run the Google Colab example SC-GlowTTS-Trans + HiFi-GAN-FT, I get an error in the row:

    _, wav = tts(model, TEXT, C, USE_CUDA, ap, use_griffin_lim, None, speaker_embedding=speaker_embedding)

    The error is: Error in dlopen for library libnvrtc.so.10.2 and libnvrtc-08c4863f.so.10.2. I've been trying to solve it for a while but haven't found an answer. It may be related to the CUDA drivers on Colab. Do you know how to solve it?

    Thanks for everything.

    opened by jlmarrugom 1
  • Speaker embeddings and params.

    Hi,

    Thank you for providing the demo code! It works really well on the given test samples. However, when I try to use a sample from a custom dataset, the speaker embeddings are not able to capture the features of the speaker. If I try to generate speech from a female voice reference, the generated voice is male.

    I calculated the cosine similarities between the reference and custom dataset speaker embeddings (see the sketch below); here is what I found:

    - Male vs. Female: 0.216
    - Male vs. Female (custom): 0.028
    - Male vs. Male (custom): -0.015
    - Female vs. Female (custom): -0.02
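
    For reference, a minimal sketch of this similarity computation (extracting the embeddings themselves, e.g. with the speaker encoder checkpoint above, is assumed):

    ```python
    # Minimal sketch: cosine similarity between two speaker embeddings.
    # emb_a and emb_b are assumed to be 1-D numpy arrays already
    # extracted with a speaker encoder.
    import numpy as np

    def cosine_similarity(emb_a: np.ndarray, emb_b: np.ndarray) -> float:
        return float(np.dot(emb_a, emb_b) /
                     (np.linalg.norm(emb_a) * np.linalg.norm(emb_b)))
    ```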

    Can you please let me know if there are any specific preprocessing steps that I need to follow? The reference audio clips are mono channel, 16 kHz, and about 10 seconds long.

    Thanks again for your work! Really appreciate it!

    opened by DwaraknathT 1