A 10000+ hour dataset for Chinese speech recognition

Overview

WenetSpeech

A 10000+ Hours Multi-domain Chinese Corpus for Speech Recognition


Download

Please visit the official website, read the license, and follow the instructions to download the data.

Benchmark

| Toolkit | Model               | test_net | test_meeting |
|---------|---------------------|----------|--------------|
| Kaldi   | Chain Model         |          |              |
| ESPnet  | Joint CTC/Conformer |          |              |
| WeNet   | Joint CTC/Conformer |          |              |
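
The metric reported for these systems is not named in this table; assuming it is character error rate (CER), the usual metric for Mandarin ASR, a minimal scorer would look like the sketch below. The `cer` helper is illustrative only and is not part of Kaldi, ESPnet, or WeNet.

```python
# Minimal character error rate (CER): Levenshtein distance over characters,
# normalized by reference length. Illustrative helper, not a toolkit API.
def cer(ref: str, hyp: str) -> float:
    r, h = list(ref), list(hyp)
    # dp[i][j] = edit distance between r[:i] and h[:j]
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i
    for j in range(len(h) + 1):
        dp[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            dp[i][j] = min(
                dp[i - 1][j - 1] + (r[i - 1] != h[j - 1]),  # substitution or match
                dp[i - 1][j] + 1,                           # deletion
                dp[i][j - 1] + 1,                           # insertion
            )
    return dp[len(r)][len(h)] / max(len(r), 1)

print(cer("今天天气很好", "今天天气真好"))  # 1 substitution over 6 chars ≈ 0.167
```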

Description

Creation

First, we collect the raw data from YouTube and podcasts. Then, OCR is used to label the YouTube data, and automatic transcription is used to label the podcast data. Finally, a novel end-to-end label error detection method is used to further validate and filter the data.

Categories

In summary, WenetSpeech groups all data into 3 categories, as the following table shows:

| Set        | Hours | Confidence  | Usage                                  |
|------------|-------|-------------|----------------------------------------|
| High Label | 10005 | >= 0.95     | Supervised training                    |
| Weak Label | 2478  | [0.6, 0.95) | Semi-supervised or noisy training      |
| Unlabel    | 9952  | /           | Unsupervised training or pre-training  |
| In Total   | 22435 | /           | All of the above                       |
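
As a concrete illustration of how these confidence bands could be applied, here is a hedged sketch that partitions segments from the released metadata into the three categories. The JSON field names ("audios", "segments", "confidence") are assumptions about the metadata schema and should be checked against the actual release.

```python
import json

# Hedged sketch: split WenetSpeech segments into the three categories above
# by per-segment confidence. Field names are assumed, not a documented API.
def partition_by_confidence(metadata_path: str):
    with open(metadata_path, encoding="utf-8") as f:
        meta = json.load(f)
    high, weak, unlabeled = [], [], []
    for audio in meta["audios"]:
        for seg in audio.get("segments", []):
            conf = seg.get("confidence")
            if conf is None:
                unlabeled.append(seg)   # no label: unsupervised / pre-training
            elif conf >= 0.95:
                high.append(seg)        # supervised training
            elif conf >= 0.6:
                weak.append(seg)        # semi-supervised or noisy training
            else:
                unlabeled.append(seg)   # below 0.6: treat as unlabeled
    return high, weak, unlabeled

high, weak, unlabeled = partition_by_confidence("WenetSpeech.json")
print(len(high), len(weak), len(unlabeled))
```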

High Label Data

All of the data comes from YouTube and podcasts, and we tag every utterance with its source and domain. We classify the data into 10 groups according to domain, speaking style, or scenario.

| Domain      | YouTube | Podcast | Total  |
|-------------|---------|---------|--------|
| audiobook   | 0       | 250.9   | 250.9  |
| commentary  | 112.6   | 135.7   | 248.3  |
| documentary | 386.7   | 90.5    | 477.2  |
| drama       | 4338.2  | 0       | 4338.2 |
| interview   | 324.2   | 614     | 938.2  |
| news        | 0       | 868     | 868    |
| reading     | 0       | 1110.2  | 1110.2 |
| talk        | 204     | 90.7    | 294.7  |
| variety     | 603.3   | 224.5   | 827.8  |
| others      | 144     | 507.5   | 651.5  |
| Total       | 6113    | 3892    | 10005  |

We provide 3 training subsets, namely S, M, and L. Subsets S and M are sampled from the high-label data whose oracle confidence is 1.0, while L contains all of the high-label data.

| Training Subsets | Confidence  | Hours |
|------------------|-------------|-------|
| L                | [0.95, 1.0] | 10005 |
| M                | 1.0         | 1000  |
| S                | 1.0         | 100   |
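
To materialize one of these subsets programmatically, one option is to filter segments by a subset tag, reusing the metadata dict loaded in the previous sketch. The sketch below assumes each segment carries a "subsets" list (e.g. containing "S", "M", or "L"); verify this against the release before use. Since L covers all of the high-label data, segments tagged S or M should also fall inside L.

```python
# Hedged sketch: collect the segments of one training subset, e.g. "M".
# The per-segment "subsets" field is an assumed metadata layout.
def select_subset(meta: dict, name: str = "M") -> list:
    picked = []
    for audio in meta["audios"]:
        for seg in audio.get("segments", []):
            if name in seg.get("subsets", []):
                picked.append(seg)
    return picked
```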

Evaluation Sets

| Evaluation Sets | Hours | Source       | Description |
|-----------------|-------|--------------|-------------|
| DEV             | 20    | Internet     | Specially designed for speech tools that require a cross-validation set during training |
| TEST_NET        | 23    | Internet     | Matched test set |
| TEST_MEETING    | 15    | Real meeting | Mismatched test set of far-field, conversational, and spontaneous meeting speech |

Contributors

ACKNOWLEDGEMENTS

  1. WenetSpeech borrowed a lot from GigaSpeech, including the metadata design, license design, data encryption, downloading pipeline, and so on. The authors would like to thank Jiayu Du and Guoguo Chen for their suggestions on this work.
  2. The authors would like to thank their colleagues Lianhui Zhang and Yu Mao for collecting some of the YouTube data.
Comments
  • How to access the weakly labeled and unlabeled data?

    Hi team! Thanks for providing this dataset.

    After running WenetSpeech/toolkits/kaldi/local/wenetspeech_data_prep.sh with the argument --train-subset L, it seems that the Kaldi dataset yielded by this script contains only the 10k hours of high-label data. What should I do if I want to use the remaining part of the WenetSpeech dataset?

    Thanks :)

    opened by jctian98 2
  • The mismatch between the marked duration and the actual audio duration.

    I am using k2 and Lhotse for WenetSpeech ASR experiments, but an error occurred. The error shows as follows: [screenshot]

    I then checked the actual audio duration for this sample (its marked duration is 786.44s): [screenshot]

    I find the actual duration is 988.89s. [screenshot]

    So can we change the marked duration in the original transcripts? Or should I filter such samples with a filtering function to avoid this error?
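
    One possible filtering function is sketched below; the metadata field names ("path", "duration") and the 0.1 s tolerance are assumptions, not part of k2 or Lhotse.

    ```python
    import json
    import soundfile as sf

    # Hedged sketch: keep only segments whose marked duration agrees with the
    # decoded audio length. Field names and the tolerance are assumptions.
    TOLERANCE = 0.1  # seconds

    def keep_segment(entry: dict) -> bool:
        actual = sf.info(entry["path"]).duration  # true length from the header
        return abs(actual - entry["duration"]) <= TOLERANCE

    with open("segments.json", encoding="utf-8") as f:
        segments = json.load(f)
    clean = [s for s in segments if keep_segment(s)]
    print(f"kept {len(clean)} of {len(segments)} segments")
    ```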

    opened by luomingshuang 2
  • pretrained weights?

    Dear author, thanks for publishing such a large-scale and useful dataset. I wonder whether you have released any of your pretrained weights? If so, it could save a lot of energy and human resources, since the training procedure is relatively large and expensive. Thank you.

    opened by dragen1860 1
  • [Question] about the results

    Hi WeNet team, thanks for this open dataset. I have some questions about the results in https://github.com/wenet-e2e/WenetSpeech/blob/main/README.md#benchmark

    1. The ESPnet model is trained for 50 epochs, while the WeNet model is trained for only about half as long (26 epochs); why aren't both trained for the same number of iterations?
    2. The ESPnet model uses an external Transformer LM during decoding; does WeNet have results when decoding with an external LM?
    opened by maxwellzh 1
  • Error when untarring the encrypted dataset

    After downloading the whole dataset, an error occurs in the function process_downloaded_object. This seems to happen when untarring the encrypted dataset. [screenshot]

    opened by ZihanLiao 0
  • utils not found

    When I train WenetSpeech using Kaldi, I get an error: ./run.sh: line 45: ./utils/parse_options.sh: No such file or directory.

    In path.sh, export PATH=$PWD/utils/ adds utils to the PATH, but in the toolkits/kaldi directory there is no utils. Is utils missing?

    opened by jiangno111 0
  • fix process_opus.py

    Modify the file according to PR https://github.com/wenet-e2e/WenetSpeech/pull/10 to fix the lint error; we are not going to merge that PR.

    opened by robin1001 0
  • CC-BY-NC vs CC-BY

    I think if you want your data to be non-commercial, the license should be CC-BY-NC (https://creativecommons.org/licenses/by-nc/4.0/) rather than CC-BY (https://creativecommons.org/licenses/by/4.0/).

    opened by tshmak 0