FG-transformer-TTS: Fine-grained style control in transformer-based text-to-speech synthesis

Overview

LST-TTS

Official implementation of the paper "Fine-grained style control in transformer-based text-to-speech synthesis", submitted to ICASSP 2022. Audio samples and a demo for our system can be accessed here.

Setting up submodules

git submodule update --init --recursive

Get the WaveGlow vocoder checkpoint from here (this is from the official NVIDIA WaveGlow repo).
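If you want to sanity-check the vocoder on its own, here is a minimal sketch of vocoding a mel spectrogram with that checkpoint, following the usage pattern published in the NVIDIA WaveGlow repo. The filename is an assumption (use whichever checkpoint you downloaded); the training and synthesis scripts here consume the same checkpoint via --vocoder_ckpt_path, so this step is optional.

import torch

# Unpickling the checkpoint requires the WaveGlow code (glow.py) to be importable,
# e.g. from the waveglow submodule set up above.
waveglow = torch.load('waveglow_256channels_universal_v5.pt', map_location='cpu')['model']
waveglow = waveglow.remove_weightnorm(waveglow)
waveglow = waveglow.cuda().eval()

mel = torch.randn(1, 80, 200).cuda()  # placeholder 80-bin mel spectrogram, shape (batch, n_mels, frames)
with torch.no_grad():
    audio = waveglow.infer(mel, sigma=0.666)  # waveform tensor of shape (batch, samples)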

Setup environment

See docker/Dockerfile for the packages that need to be installed.
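A typical way to build and enter that environment (a hedged example; the image tag is arbitrary, and you will likely want to mount your dataset and checkpoint directories):

docker build -t fg-tts -f docker/Dockerfile .
docker run --gpus all -it -v /path/to/data:/data fg-tts bash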

Dataset preprocessing

LJSpeech

python preprocess_LJSpeech.py --datadir LJSpeechDir --outputdir OutputDir

VCTK

Get the leading and trailing silence markings from this repo, and put vctk-silences.0.92.txt in your VCTK dataset directory.

python preprocess_VCTK.py --datadir VCTKDir --outputdir Output_Train_Dir
python preprocess_VCTK.py --datadir VCTKDir --outputdir Output_Test_Dir --make_test_set
  • --make_test_set: specify this flag to process the speakers in the test set; otherwise only the training speakers are processed.

Training

LJSpeech

python train_TTS.py --precision 16 \
                    --datadir FeatureDir \
                    --vocoder_ckpt_path WaveGlowCKPT_PATH \
                    --sampledir SampleDir \
                    --batch_size 128 \
                    --check_val_every_n_epoch 50 \
                    --use_guided_attn \
                    --training_step 250000 \
                    --n_guided_steps 250000 \
                    --saving_path Output_CKPT_DIR \
                    --datatype LJSpeech \
                    [--distributed]
  • --distributed: enable DDP multi-GPU training
  • --batch_size: batch size per GPU; scale it down when training on multiple GPUs if you want to keep the same effective (global) batch size (see the example after this list)
  • --check_val_every_n_epoch: sample and validate every n epochs
  • --datadir: output directory of the preprocess scripts
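
For example, a hedged sketch of the multi-GPU case: with 4 GPUs under --distributed, keeping the effective batch size at 128 means giving each GPU a quarter of it (4 × 32 = 128):

python train_TTS.py --distributed --batch_size 32 [remaining flags as in the command above]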

VCTK

python train_TTS.py --precision 16 \
                    --datadir FeatureDir \
                    --vocoder_ckpt_path WaveGlowCKPT_PATH \
                    --sampledir SampleDir \
                    --batch_size 64 \
                    --check_val_every_n_epoch 50 \
                    --use_guided_attn \
                    --training_step 150000 \
                    --n_guided_steps 150000 \
                    --etts_checkpoint LJSpeech_Model_CKPT \
                    --saving_path Output_CKPT_DIR \
                    --datatype VCTK \
                    [--distributed]
  • --etts_checkpoint: path to the checkpoint of the model pretrained on LJSpeech (see the example below)
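
A hedged example of the two-stage workflow (the exact checkpoint filename saved under Output_CKPT_DIR by the LJSpeech run depends on your configuration; substitute your own):

python train_TTS.py --datatype VCTK \
                    --etts_checkpoint Output_CKPT_DIR/your_ljspeech_run.ckpt \
                    [remaining flags as in the VCTK command above]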

Synthesis

We provide synthesis examples for the system in synthesis.py; you can adapt this script to your own use case. Example invocation of synthesis.py:

python synthesis.py --etts_checkpoint VCTK_Model_CKPT \
                    --sampledir SampleDir \
                    --datatype VCTK \
                    --vocoder_ckpt_path WaveGlowCKPT_PATH
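
For the single-speaker LJSpeech model, an analogous invocation would be (a hedged example; LJSpeech_Model_CKPT is the checkpoint from the LJSpeech training run):

python synthesis.py --etts_checkpoint LJSpeech_Model_CKPT \
                    --sampledir SampleDir \
                    --datatype LJSpeech \
                    --vocoder_ckpt_path WaveGlowCKPT_PATH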
Comments
  • Some questions

    Hi author, nice implementation! I wonder, do we need a pre-trained model for self.emo_model = MinimalClassifier()? Also, I get audio samples whose speaker identity is inconsistent with the reference audio.

    opened by Rongjiehuang 4
  • RuntimeError occurs when training

    Hi, really nice work.

    I tried using "wav2vec2-large-xlsr-53" as the encoder and then ran preprocessing.

    But during training I got RuntimeError: mat1 and mat2 shapes cannot be multiplied.

    I would like to know whether I can change the wav2vec 2.0 encoder. Thank you.

    opened by xuanhan863 3
  • About Style embedding

    Hi @b04901014, thanks for your great implementation!

    I have a question about the code and the paper. In the Style embedding section of the paper:

    Similarly, the output features from wav2vec 2.0 are also processed by a LSTM for fine-grained style embeddings. However, instead of taking the mean across the time dimension, we adopt average pooling with stride 4 and kernel size 8 to smooth out the representation. Based on this representation, each time steps will be fed as a query to a multi-head attention with another trainable codebook as key and value to produce a sequence of style embeddings

    I think the output of the LSTM should be average-pooled before going into the MHA, but in the code the order is reversed: https://github.com/b04901014/FG-transformer-TTS/blob/d0362cc8530ebe0744ad3104bf1da8145f5b1aec/ETTS/ettstransformer.py#L68-L73 (a sketch of the order described in the paper is shown after this comment).

    opened by guoaoo 2
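
    For reference, a minimal PyTorch sketch of the ordering described in the paper excerpt quoted above: LSTM output, then average pooling with stride 4 and kernel size 8, then multi-head attention with a trainable codebook as key and value. Dimensions and the codebook size are hypothetical, and this is not the repo's actual module.

    import torch
    import torch.nn as nn

    class LocalStyleSketch(nn.Module):
        # Hypothetical sketch of the order described in the paper, not the repo code.
        def __init__(self, in_dim=768, hidden=256, n_codes=32, n_heads=4):
            super().__init__()
            self.lstm = nn.LSTM(in_dim, hidden, batch_first=True)
            self.pool = nn.AvgPool1d(kernel_size=8, stride=4)            # smooth over time
            self.codebook = nn.Parameter(torch.randn(n_codes, hidden))   # trainable key/value codebook
            self.mha = nn.MultiheadAttention(hidden, n_heads, batch_first=True)

        def forward(self, wav2vec_feats):                                # (B, T, in_dim)
            h, _ = self.lstm(wav2vec_feats)                              # (B, T, hidden)
            h = self.pool(h.transpose(1, 2)).transpose(1, 2)             # pooled before attention: (B, T', hidden)
            kv = self.codebook.unsqueeze(0).expand(h.size(0), -1, -1)    # (B, n_codes, hidden)
            style, _ = self.mha(h, kv, kv)                               # (B, T', hidden): sequence of style embeddings
            return style

    feats = torch.randn(2, 100, 768)        # e.g. wav2vec 2.0 BASE features
    print(LocalStyleSketch()(feats).shape)  # torch.Size([2, 24, 256])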
  • What is the resource to run this project

    Thanks for the good job. I am trying to reproduce the results on the LJSpeech dataset. I have two GPU cards with at most 10 GB of GPU memory free on each. The training process ran to epoch 7 and crashed with "Out of Memory". I tried cutting the batch size in half to 64, but that did not help, and I did not change the model hyperparameters, to avoid degrading the model. So what should I do to get it running?
    Or is there a list of resource requirements for running the training process?

    opened by JohnHerry 0
  • What should we do to adjust the model on other language

    Hi, we are trying the single-speaker setup. We trained the model on LJSpeech, and the local style reference audio indeed effectively affects the prosody of the synthesized speech. But when we use BZNSYP, a Mandarin dataset, the resulting model is unable to transfer speaking style from the reference audio to the synthesized speech. Also, model.synthesize_with_sample(), which uses random data as the LST, just produces chaotic sounds from the speaker; I am not sure whether this is because the model's LST has speech content leakage in it. How should we adjust the model parameters to use it with another language? By the way, we are using wav2vec2-LARGE instead of wav2vec2-BASE, with emo-dim = 1024.

    opened by JohnHerry 0
  • About the frame rate of preprocessing

    I found that the Transformer TTS is trained and validated on 22050 Hz audio, while, as wav2vec 2.0 requires, the GST and LST take input audio at 16000 Hz. Why? Would it be better to use the same sampling rate everywhere? I see no explanation in the paper.

    opened by JohnHerry 0