Text To Speech Toolkit. A handy Chinese speech synthesis toolbox, including a voice encoder, a synthesizer, a vocoder, and visualization modules.

Overview

ttskit

Text To Speech Toolkit: a toolbox for speech synthesis.

Installation

pip install -U ttskit
  • Notes
    • Dependencies you may need to install separately: torch, with the version constraint torch>=1.6.0,<=1.7.1; install the CUDA or CPU build that matches your environment.
    • The default audio sample rate of ttskit is 22.5k.
    • Set the environment variable CUDA_VISIBLE_DEVICES yourself to choose the GPU; if it is unset, GPU 0 is used by default, and the CPU is used when no GPU is available (see the snippet after this list).
    • By default, the multi-speaker (mspk mode) synthesis model and the Griffin-Lim vocoder are used.
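
For example, to pin ttskit to a specific GPU, set the variable before the first import (a minimal sketch; the index '1' is only an illustration):

import os

os.environ['CUDA_VISIBLE_DEVICES'] = '1'  # select GPU 1; unset defaults to GPU 0

import ttskit  # torch reads the variable when it initializes CUDA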

Resources

Models and audio resources are downloaded automatically while you use ttskit.

If the download is too slow or fails, you can download the resources from Baidu Netdisk yourself and merge them into the ttskit directory (updating the resource directory).

Link: https://pan.baidu.com/s/13RPGNEKrCX3fgiGl7P5bpw

Access code: b7hw

Quick start

import ttskit

ttskit.tts('这是个示例', audio='14')

# Parameters
'''Function-style SDK interface for speech synthesis; all arguments are strings.
text is the text to synthesize.
speaker is the speaker name; the available names are those in _reference_audio_dict, and the default speaker list is in resource/reference_audio/__init__.py.
audio is the speaker reference audio: a number selects one of the built-in audio clips as the reference, and an audio path uses the audio at that path as the reference.
Note: when selecting the speaker via speaker, set audio to an underscore [_].
output is the output target: if it ends with .wav, it is the path where the audio is saved; if it starts with play, the synthesized audio is played automatically.
'''
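
Putting these parameters together, a few call patterns (a sketch that follows the notes above and assumes the resource download has completed):

import ttskit

# Pick the reference audio by built-in index and save the result to a file.
ttskit.tts('这是个示例', audio='14', output='sample.wav')

# Pick a speaker by name instead; as noted above, audio must then be '_'.
ttskit.tts('这是个示例', speaker='biaobei', audio='_', output='play')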

Version

v0.2.1

sdk_api

Speech synthesis SDK interface. Call speech synthesis locally as a function.

  • Basic usage
from ttskit import sdk_api

wav = sdk_api.tts_sdk('文本', audio='24')
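
tts_sdk returns the synthesized audio; a minimal sketch for writing it to disk (assuming the return value is wav-format bytes):

with open('output.wav', 'wb') as f:
    f.write(wav)  # wav comes from the sdk_api.tts_sdk call above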

cli_api

Speech synthesis command-line interface. Invoke speech synthesis from the command line.

  • Basic usage
from ttskit import cli_api

args = cli_api.parse_args()
cli_api.tts_cli(args)
# Use speech synthesis in command-line interactive mode.
  • Command line
tkcli

usage: tkcli [-h] [-i INTERACTION] [-t TEXT] [-s SPEAKER] [-a AUDIO]
             [-o OUTPUT] [-m MELLOTRON_PATH] [-w WAVEGLOW_PATH] [-g GE2E_PATH]
             [--mellotron_hparams_path MELLOTRON_HPARAMS_PATH]
             [--waveglow_kwargs_json WAVEGLOW_KWARGS_JSON]

Speech synthesis command line.

optional arguments:
  -h, --help            show this help message and exit
  -i INTERACTION, --interaction INTERACTION
                        Whether to run interactively: 1 for interactive, 0 for
                        non-interactive. In interactive mode, leaving the text or
                        speaker empty picks one at random; entering exit quits.
  -t TEXT, --text TEXT  Input text content
  -s SPEAKER, --speaker SPEAKER
                        Input speaker name
  -a AUDIO, --audio AUDIO
                        Input audio path or audio index
  -o OUTPUT, --output OUTPUT
                        Output audio path. If it starts with play, the synthesized
                        audio is played; if it ends with .wav, the audio is saved.
  -m MELLOTRON_PATH, --mellotron_path MELLOTRON_PATH
                        Mellotron model file path
  -w WAVEGLOW_PATH, --waveglow_path WAVEGLOW_PATH
                        WaveGlow model file path
  -g GE2E_PATH, --ge2e_path GE2E_PATH
                        Ge2e model file path
  --mellotron_hparams_path MELLOTRON_HPARAMS_PATH
                        Mellotron hparams json file path
  --waveglow_kwargs_json WAVEGLOW_KWARGS_JSON
                        Waveglow kwargs json

web_api

Speech synthesis web interface. Build a simple speech synthesis service.

  • Basic usage
from ttskit import web_api

web_api.app.run(host='0.0.0.0', port=2718, debug=False)
# Request http://localhost:2718/tts via POST or GET, passing the parameters text, audio and speaker.
# Example GET request: http://localhost:2718/tts?text=这是个例子&audio=2
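
The same GET request from a Python client (a sketch with requests; it assumes the endpoint responds with the wav bytes):

import requests

res = requests.get('http://localhost:2718/tts',
                   params={'text': '这是个例子', 'audio': '2'})
with open('result.wav', 'wb') as f:
    f.write(res.content)  # assumed: the response body is the synthesized wav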

http_server

Simple speech synthesis web UI. Build a simple web page service for speech synthesis.

  • Basic usage
from ttskit import http_server

http_server.start_sever()
# Open the page http://localhost:9000/ttskit in a browser.
  • Command line
tkhttp

usage: tkhttp [-h] [--device DEVICE] [--host HOST] [--port PORT]

optional arguments:
  -h, --help       show this help message and exit
  --device DEVICE  GPU to use for inference; set to -1 to use the CPU
  --host HOST      IP address
  --port PORT      Port number
  • Web interface: index page (screenshot omitted)

resource

Models, data, and other resources.

Subdirectories: audio, model, reference_audio

  • Built-in speaker map
_speaker_dict = {
    1: 'Aibao', 2: 'Aicheng', 3: 'Aida', 4: 'Aijia', 5: 'Aijing',
    6: 'Aimei', 7: 'Aina', 8: 'Aiqi', 9: 'Aitong', 10: 'Aiwei',
    11: 'Aixia', 12: 'Aiya', 13: 'Aiyu', 14: 'Aiyue', 15: 'Siyue',
    16: 'Xiaobei', 17: 'Xiaogang', 18: 'Xiaomei', 19: 'Xiaomeng', 20: 'Xiaowei',
    21: 'Xiaoxue', 22: 'Xiaoyun', 23: 'Yina', 24: 'biaobei', 25: 'cctvfa',
    26: 'cctvfb', 27: 'cctvma', 28: 'cctvmb', 29: 'cctvmc', 30: 'cctvmd'
}
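
The numeric audio index accepted by the interfaces above maps into this table; a quick check (assuming the numeric index refers to _speaker_dict as defined above):

# audio='24' in the sdk_api example selects the built-in 'biaobei' voice.
print(_speaker_dict[24])  # -> 'biaobei'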

encoder

Voice encoder (encoder)

  • Encodes an audio clip into a vector of fixed dimension.
  • Vector similarity reflects timbre similarity: the more similar two clips' encoding vectors, the closer their timbres (see the sketch after this list).
  • The encoding vector is mainly used to control the timbre of the synthesized voice.
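
A minimal numpy sketch of that comparison (the 256-dimensional vectors are hypothetical stand-ins for the encoder's output):

import numpy as np

def timbre_similarity(a, b):
    """Cosine similarity of two speaker embeddings; closer to 1 means closer timbre."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

emb_a = np.random.randn(256)  # hypothetical embedding of utterance A
emb_b = np.random.randn(256)  # hypothetical embedding of utterance B
print(timbre_similarity(emb_a, emb_b))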

GE2E voice encoder

  • Google published the GE2E (generalized end-to-end loss) paper, which details the core of its speaker-verification technology.
  • It is a batch-based training method: within each batch, every speaker's embedding is pushed apart from that of the most similar other speaker.
  • The paper argues, theoretically and experimentally, that always optimizing against these hardest cases greatly improves both training speed and quality (a loss sketch follows).
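
A simplified PyTorch sketch of the GE2E softmax loss, not ttskit's actual encoder code (the paper additionally excludes each utterance from its own centroid, which is omitted here for brevity; the scale w and offset b follow the paper's initialization):

import torch
import torch.nn.functional as F

def ge2e_softmax_loss(embeddings, n_speakers, n_utts, w=10.0, b=-5.0):
    # embeddings: (n_speakers * n_utts, dim), grouped by speaker.
    e = F.normalize(embeddings.view(n_speakers, n_utts, -1), dim=-1)
    centroids = F.normalize(e.mean(dim=1), dim=-1)           # (n_speakers, dim)
    sim = w * torch.einsum('sud,cd->suc', e, centroids) + b  # cosine similarities
    # Each utterance should be most similar to its own speaker's centroid.
    labels = torch.arange(n_speakers).repeat_interleave(n_utts)
    return F.cross_entropy(sim.reshape(-1, n_speakers), labels)

# 4 speakers x 5 utterances of hypothetical 256-dim embeddings.
loss = ge2e_softmax_loss(torch.randn(20, 256), n_speakers=4, n_utts=5)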

mellotron

Speech synthesizer (synthesizer)

  • Converts text into speech-spectrogram data.
  • The synthesizer takes the voice-encoding vector together with the text and combines them to turn the text into a spectrogram.
  • Converting text into a spectrogram is, at heart, a sequence-to-sequence task.
  • The text can be viewed as a sequence of characters and the spectrogram as a sequence of acoustic-feature frames; the synthesizer is the bridge that turns the character sequence into the feature sequence (a shape-level sketch follows this list).
  • The crux is building a model that renders each character as the correct sound in the correct position, with natural transitions between sounds and natural-sounding speech overall.
  • Reaching that goal calls for a model designed specifically for the task.
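
A shape-level sketch of that contract (illustrative only; the tensors stand in for what an attention-based encoder-decoder such as Mellotron consumes and produces):

import torch

char_ids = torch.randint(0, 100, (1, 32))  # (batch, text_len): character/phoneme IDs
speaker_embedding = torch.randn(1, 256)    # voice-encoder output (assumed 256-dim)

# A real synthesizer runs an attention-based encoder-decoder here; its output
# is a mel spectrogram of shape (batch, mel_channels, frames).
mel_outputs = torch.randn(1, 80, 200)      # stand-in for the predicted spectrogram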

Mellotron synthesizer

  • Mellotron is a speech synthesis model proposed by the NVIDIA team, aimed mainly at prosody/style transfer and music generation.
  • Mellotron can adjust prosody and pitch at a finer granularity: it feeds fundamental-frequency (F0) information into the model to capture tone, F0 being the main element that distinguishes pitch.
  • Mellotron is trained fully end to end and can generate music even when the training set contains no music data.
  • Mellotron learns the alignment between pitch and text without requiring them to be aligned manually.
  • Mellotron can take an externally supplied attention map, which enables prosody transfer.

waveglow

Vocoder (vocoder)

  • Converts speech-spectrogram data into a speech waveform.
  • A waveform and its spectrogram are not trivially interconvertible: converting speech to a spectrogram loses information. The spectrogram does, however, record the most important information about the speech, and other techniques can invert it back into a waveform as faithfully as possible; the vocoder is such a technique.
  • A vocoder is the technique that converts acoustic features into a speech signal.
  • In speech synthesis, the vocoder is responsible for turning the spectrogram into the waveform.
  • A spectrogram usually does not record the speech completely: a mel spectrogram, for example, only records magnitudes in certain frequency bands; the phase is missing, and information at many frequencies is lost as well.
  • The vocoder model has to recover the speech signal from such a spectrogram as accurately and completely as possible (see the Griffin-Lim sketch after this list).
  • The usual approach nowadays is deep learning: model the relationship between the acoustic features and the waveform.
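
ttskit's default vocoder is Griffin-Lim; a minimal round trip with librosa (a sketch, not ttskit's internal code) shows the idea of recovering phase iteratively from a magnitude-only spectrogram:

import numpy as np
import librosa

sr = 22050
y = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)  # 1 s of a 440 Hz tone as a stand-in
S = np.abs(librosa.stft(y))                       # magnitude spectrogram: phase discarded
y_rec = librosa.griffinlim(S)                     # iterative phase reconstruction -> waveform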

WaveGlow vocoder

  • WaveGlow is a flow-based model proposed by the NVIDIA team for synthesizing high-quality speech from mel spectrograms.
  • WaveGlow's contribution is a flow-based network combining ideas from Glow and WaveNet, hence the name WaveGlow.
  • WaveGlow is a generative model that produces audio by sampling from a distribution.
  • WaveGlow is easy to implement: it is a single network, trained with only a likelihood loss.
  • WaveGlow combines fast generation, high quality, and strong stability (see the torch.hub sketch below).
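
For reference, NVIDIA publishes a pretrained WaveGlow on torch.hub; a sketch following their published example (this loads NVIDIA's checkpoint, not the one bundled with ttskit, and assumes a CUDA device):

import torch

waveglow = torch.hub.load('NVIDIA/DeepLearningExamples:torchhub', 'nvidia_waveglow')
waveglow = waveglow.remove_weightnorm(waveglow).to('cuda').eval()

mel = torch.randn(1, 80, 200, device='cuda')  # stand-in mel spectrogram (batch, 80, frames)
with torch.no_grad():
    audio = waveglow.infer(mel)               # (batch, samples) waveform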
Comments
  • Synthesizing audio from long text always yields only the last sentence.

    #!usr/bin/env python
    # -- coding: utf-8 --
    from ttskit import sdk_api

    var = '工业和信息化部总工程师田玉龙在国新办新闻发布会上介绍'
    wav = sdk_api.tts_sdk_for(var, speaker='cctvfa', output=r'E:\TTS\ttskits\my9.wav')

    opened by MoYuFly 1
  • Hello, there is an error here: ImportError: cannot import name '_speaker_dict'

    Is the resource here the resource inside the ttskit package? I don't see this name there.

    (pytorch1.6) C:\Users\Administrator>python E:\TTS\ttskit\myTest.py
    Traceback (most recent call last):
      File "E:\TTS\ttskit\myTest.py", line 3, in <module>
        from ttskit import sdk_api
      File "E:\TTS\ttskit\ttskit\sdk_api.py", line 49, in <module>
        from .resource import _speaker_dict
    ImportError: cannot import name '_speaker_dict'

    opened by MoYuFly 1
  • ImportError

    Exception raised: ImportError: cannot import name 'replace_tone2_style_dict_to_default' from 'pypinyin.utils' (D:\Program Files\Python\lib\site-packages\pypinyin\utils.py)

      File "E:\Work\Github\ttskit\TTS_kit\ttskit\mellotron\text\__init__.py", line 7, in <module>
        from phkit.chinese import text_to_sequence as text_to_sequence_phkit, sequence_to_text, text2pinyin
      File "E:\Work\Github\ttskit\TTS_kit\ttskit\mellotron\data_utils.py", line 24, in <module>
        from mellotron.text import text_to_sequence, cmudict
      File "E:\Work\Github\ttskit\TTS_kit\ttskit\mellotron\inference.py", line 26, in <module>
        from .data_utils import transform_mel, transform_text, transform_f0, transform_embed, transform_speaker
      File "E:\Work\Github\ttskit\TTS_kit\ttskit\sdk_api.py", line 37, in <module>
        from ttskit.mellotron import inference as mellotron
      File "E:\Work\Github\ttskit\TTS_kit\ttskit\__init__.py", line 50, in <module>
        import sdk_api
      File "E:\Work\Github\ttskit\TTS_kit\test.py", line 31, in test_http_server
        from ttskit import http_server
      File "E:\Work\Github\ttskit\TTS_kit\test.py", line 42, in <module>
        test_http_server()

    opened by wslsj888 1
  • Quick-start walkthrough for the web version (personally verified)

    1. Download the code from GitHub and extract it; use the ttskit-main folder as your project folder.
    2. Download resource from Baidu Netdisk (link above) and place it in the ttskit-main\ttskit folder, overwriting the existing resource folder.
    3. The above is roughly the procedure the author provides, but there is one small catch.
    4. In fact, during the replacement, do not replace ttskit-main\ttskit\resource\__init__.py.
    5. Alternatively, after replacing everything, extract a fresh copy of the GitHub download and restore resource\__init__.py from it.
    6. Then open a command line in the ttskit-main directory and run pip install -U ttskit.
    7. After pip finishes, create a demo.py file in the ttskit-main folder with the following code:
      from ttskit import http_server
      
      http_server.start_sever()
      
    8. Then run py demo.py from the command line.
    9. After a while, a URL appears at the bottom of the terminal; copy it into your browser.
    10. Example screenshots (images omitted).
    opened by Nekobaex 2
  • ydub/audio_segment.py

    ydub/audio_segment.py", line 374, in __radd__
        raise TypeError("Gains must be the second addend after the "
    TypeError: Gains must be the second addend after the AudioSegment
    2022-10-27T02:42:03Z {'REMOTE_ADDR': '127.0.0.1', 'REMOTE_PORT': '50648', 'HTTP_HOST': 'localhost:9000', (hidden keys: 26)} failed with TypeError

    opened by jinfagang 0