# tts-dataset-prompts
This repository aims to be a decent set of sentences for people looking to clone their own voices (e.g. using Tacotron 2).
Each set of 50 lines aims to fulfill the following criteria:
- each phoneme is represented at least once, according to CMUdict (differently-stressed versions of vowels count as separate phonemes; consonants need to be present twice)
- each phoneme is roughly as frequent as in regular speech (between 50% and 150% of its frequency in Moby Dick, unless the phoneme appears 4 or fewer times in the batch)
- every line is of roughly equal length when spoken (14-18 syllables + non-final punctuation)
- words with context-dependent pronunciations (except very common ones, such as "the") are avoided for ease of processing
- at least 10 lines contain commas
- at least 10 lines are made up of multiple shorter sentences (so that the AI learns to pause naturally)
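The criteria above can be checked mechanically once each line has been converted to ARPAbet phonemes. A minimal sketch, using hardcoded phoneme sequences as stand-ins for g2p-en output (the two sample lines are illustrative, not real prompts from this set):

```python
from collections import Counter

# ARPAbet sequences as g2p-en would emit them; stressed vowels carry a
# digit suffix, so AH0 and AH1 count as separate phonemes per the criteria.
# These two sequences are illustrative stand-ins, not real prompts.
lines = [
    ["DH", "AH0", " ", "K", "AE1", "T", " ", "S", "AE1", "T"],  # "the cat sat"
    ["DH", "AH0", " ", "D", "AA1", "G", " ", "R", "AE1", "N"],  # "the dog ran"
]

def syllable_count(phones):
    """In ARPAbet, every vowel (and only a vowel) ends in a stress digit,
    so counting stress digits counts syllables."""
    return sum(p[-1].isdigit() for p in phones)

def phoneme_frequencies(lines):
    """Absolute and relative frequency of each phoneme across the batch,
    skipping the space tokens that separate words."""
    counts = Counter(p for phones in lines for p in phones if p.strip())
    total = sum(counts.values())
    return counts, {p: n / total for p, n in counts.items()}

counts, freqs = phoneme_frequencies(lines)
for phones in lines:
    sylls = syllable_count(phones)
    ok = 14 <= sylls <= 18  # the length window from the criteria above
    print(f"{sylls} syllables ({'ok' if ok else 'outside window'})")
```

The 50%–150% band check would then compare `freqs` against the same statistic computed over a reference corpus such as Moby Dick.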
Additional text files will be provided for question and exclamation prompts, following the same rules. They have been separated because some text-to-speech architectures deal poorly with ending punctuation that affects the intonation of the whole sentence. It may be beneficial to use these to train a separate model, as recommended by TALQu and as done for some voices in the Mekatron service (defunct).
This repo uses the g2p-en library to determine phoneme counts, in order to match Uberduck's phonetization.
## Other good prompt sets
- Microsoft CustomVoice example scripts (multilingual) (not all of the prompt lists are well designed, e.g. the en-US chat prompts only include /ʒ/ as part of the word "Indonesia")
- CMU Arctic prompt list (phonetically balanced, but only one sentence per line)
- MOCHA-TIMIT ("designed to include the main connected speech processes in English (eg. assimilations, weak forms ..)")
- LJSpeech transcript (sentence fragments abound, which I consider useful)
- Harvard sentences (phonetically balanced, but only one sentence per line, and they're all of equal length)