YACLC - Yet Another Chinese Learner Corpus

Overview

汉语学习者文本多维标注数据集YACLC V1.0

中文 | English

汉语学习者文本多维标注数据集(Yet Another Chinese Learner Corpus,YACLC)由北京语言大学、清华大学、北京师范大学、云南师范大学、东北大学、上海财经大学等高校组成的团队共同发布。主要项目负责人有杨麟儿、杨尔弘、孙茂松、张宝林、胡韧奋、何姗、岳岩、饶高琦、刘正皓、陈云等。

简介

汉语学习者文本多维标注数据集(Yet Another Chinese Learner Corpus,YACLC)是一个大规模的、提供偏误多维标注的汉语学习者文本数据集。我们招募了百余位汉语国际教育、语言学及应用语言学等专业背景的研究生组成标注团队,并采用众包策略分组标注。每个句子由10位标注员进行标注,每位标注员需要给出0或1的句子可接受度评分,以及纠偏标注(Grammatical Error Correction)和流利标注(Fluency based Correction)两个维度的标注结果。纠偏标注是从语法层面对偏误句进行修改,遵循忠实原意、最小改动的原则,将偏误句修改为符合汉语语法规范的句子;流利标注是将句子修改得更为流利和地道,符合母语者的表达习惯。在标注时,若句子可接受度评分为0,则标注员至少需要完成一条纠偏标注,同时可以进行流利标注。若句子可接受度评分为1,则标注员只需给出流利标注。本数据集可用于语法纠错、文本校对等自然语言处理任务,也可为汉语二语教学与习得、语料库语言学等研究领域提供数据支持。

数据规模

训练集规模为8,000条,每条数据包括原始句子及其多种纠偏标注与流利标注。验证集和测试集规模都为1,000条,每条数据包括原始句子及其全部纠偏标注与流利标注。

数据格式

每条数据中包含汉语学习者所写的待标注句子及其id、所属篇章id、所属篇章标题、标注员数量以及多维标注信息。其中,多维度标注信息包括:

  • 标注维度,"1"表示纠偏标注,"0"表示流利标注;
  • 标注后的正确文本;
  • 标注中的修改操作数量;
  • 提供该标注的标注员数量。

注意:测试集数据无标注者和多维标注信息。

数据样例如下:

{
  "sentence_id": 4308, // 句子id
  "sentence_text": "我只可以是坐飞机去的,因为巴西离英国到远极了。", // 学习者原句文本
  "article_id": 7267, // 该句所属的篇章id
  "article_name": "我放假的打算", // 篇章标题
  "total_annotators": 10, // 共多少个标注者参与了该句的标注
  "sentence_annos": [ // 多维标注信息
    {
      "is_grammatical": 1, // 标注维度:1表示纠偏标注,0表示流利标注
      "correction": "我只能坐飞机去,因为巴西离英国远极了。", // 修改后的正确文本
      "edits_count": 3, // 共有几处修改操作
      "annotator_count": 6 // 共有几个标注者修改为了这一结果
    },
    { // 下同
      "is_grammatical": 1,
      "correction": "我只能是坐飞机去的,因为巴西离英国远极了。",
      "edits_count": 2,
      "annotator_count": 1
    },
    {
      "is_grammatical": 1,
      "correction": "我只可以坐飞机去,因为巴西离英国远极了。",
      "edits_count": 3,
      "annotator_count": 2
    },
    {
      "is_grammatical": 0,
      "correction": "我只能坐飞机去,因为巴西离英国太远了。",
      "edits_count": 6,
      "annotator_count": 2
    }
  ]
}

评测代码使用

提交结果为文本文件,每行为一个修改后的句子,并与测试集中的数据逐条对应。

  • 每条测试集中的数据仅需给出一条修改结果;
  • 修改结果需使用THULAC工具包分词,请提交分词后的结果。

对于提交的结果文件output_file,后台将调用eval.py将其同标准答案文件test_gold_m2进行比较:

python eval.py output_file test_gold_m2

评测指标为F_0.5,输出结果示例:

{
  "Precision": 70.63,	
  "Recall": 37.04,
  "F_0.5": 59.79
}

引用

如果您使用了本数据集,请引用以下技术报告:

@article{wang-etal-2021-yaclc,
  title={YACLC: A Chinese Learner Corpus with Multidimensional Annotation},
  author={Yingying Wang and Cunliang Kong and Liner Yang and Yijun Wang and Xiaorong Lu and Renfen Hu and Shan He and Zhenghao Liu and Yun Chen and Erhong Yang and Maosong Sun},
  journal={arXiv preprint arXiv:2112.15043},
  year={2021}
}

相关资源

“文心·写作”演示系统:https://writer.wenmind.net/

语法改错论文列表: https://github.com/blcuicall/GEC-Reading-List


Introduction

YACLC (Yet Another Chinese Learner Corpus) is a large-scale Chinese learner text dataset with multi-dimensional annotations, jointly released by a team from Beijing Language and Culture University, Tsinghua University, Beijing Normal University, Yunnan Normal University, Northeastern University, and Shanghai University of Finance and Economics.

We recruited more than 100 graduate students majoring in Chinese International Education, Linguistics, and Applied Linguistics for crowdsourced annotation. Each sentence is annotated by 10 annotators. Each annotator gives a sentence acceptability score of 0 or 1, together with annotations along two dimensions: grammatical error correction and fluency-based correction. Grammatical error correction rewrites an erroneous sentence into one that conforms to Chinese grammatical norms, following the principles of preserving the original meaning and making minimal modifications. Fluency-based correction rewrites the sentence to be more fluent and idiomatic, in line with native speakers' expression habits. If an annotator scores a sentence 0, they must provide at least one grammatical error correction and may additionally provide a fluency-based correction; if they score it 1, they only provide a fluency-based correction. This dataset can be used for NLP tasks such as grammatical error correction and text proofreading, and can also provide data support for research on Chinese second-language teaching and acquisition, corpus linguistics, and related fields.
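As a minimal sketch, the annotation protocol described above can be expressed as a validity check. The function and argument names here are illustrative, not part of the released dataset or its tooling:

```python
def is_valid_annotation(acceptability, gec_corrections, fluency_corrections):
    """Check one annotator's output against the YACLC protocol.

    acceptability: the annotator's 0/1 sentence acceptability score.
    gec_corrections: grammatical error corrections the annotator gave.
    fluency_corrections: fluency-based corrections the annotator gave.
    """
    if acceptability == 0:
        # Score 0: at least one grammatical error correction is required;
        # fluency-based corrections are optional.
        return len(gec_corrections) >= 1
    if acceptability == 1:
        # Score 1: only fluency-based corrections are given.
        return len(gec_corrections) == 0
    return False  # scores other than 0 or 1 are not allowed

# A score of 0 with no grammatical correction violates the protocol:
assert not is_valid_annotation(0, [], ["更流利的句子"])
assert is_valid_annotation(0, ["修改后的句子"], [])
assert is_valid_annotation(1, [], ["更地道的句子"])
```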

Size

YACLC V1.0 contains a training set (8,000 instances), a validation set (1,000 instances), and a test set (1,000 instances). Each instance in the training set includes a sentence written by a Chinese learner together with its grammatical error corrections and fluency-based corrections, while each instance in the validation and test sets includes all corrections provided by the annotators.

Format

Each instance contains the learner-written sentence along with its sentence id, the id and title of the article it belongs to, the number of annotators, and the multi-dimensional annotations. Each multi-dimensional annotation includes:

  • the annotation dimension, where "1" indicates grammatical error correction and "0" indicates fluency-based correction;
  • the corrected sentence;
  • the number of edit operations in this annotation;
  • the number of annotators who gave this annotation.

Note: instances in the test set do not include annotator information or multi-dimensional annotations.

Here is an example:

{
  "sentence_id": 4308, // 
  "sentence_text": "我只可以是坐飞机去的,因为巴西离英国到远极了。",
  "article_id": 7267, // the article id that this sentence belongs to
  "article_name": "我放假的打算", // the title of this sentence
  "total_annotators": 10, // the number of annotators for this sentence
  "sentence_annos": [ // multi-dimensional annotations
    {
      "is_grammatical": 1, // 1: grammatical, 0: fluent
      "correction": "我只能坐飞机去,因为巴西离英国远极了。", // corrected sentence 
      "edits_count": 3, // the number of edits of this annotation
      "annotator_count": 6 // the number of annotators for this annotation
    },
    { 
      "is_grammatical": 1,
      "correction": "我只能是坐飞机去的,因为巴西离英国远极了。",
      "edits_count": 2,
      "annotator_count": 1
    },
    {
      "is_grammatical": 1,
      "correction": "我只可以坐飞机去,因为巴西离英国远极了。",
      "edits_count": 3,
      "annotator_count": 2
    },
    {
      "is_grammatical": 0,
      "correction": "我只能坐飞机去,因为巴西离英国太远了。",
      "edits_count": 6,
      "annotator_count": 2
    }
  ]
}
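The format above can be consumed with a few lines of standard-library Python. Picking, per dimension, the correction supported by the most annotators is one plausible way to derive a single reference per sentence; the dataset itself does not prescribe this. The instance below is the sample above, abridged and with comments removed so it is valid JSON:

```python
import json

# One abridged instance in the YACLC format.
instance = json.loads("""
{
  "sentence_id": 4308,
  "sentence_text": "我只可以是坐飞机去的,因为巴西离英国到远极了。",
  "sentence_annos": [
    {"is_grammatical": 1, "correction": "我只能坐飞机去,因为巴西离英国远极了。",
     "edits_count": 3, "annotator_count": 6},
    {"is_grammatical": 1, "correction": "我只能是坐飞机去的,因为巴西离英国远极了。",
     "edits_count": 2, "annotator_count": 1},
    {"is_grammatical": 0, "correction": "我只能坐飞机去,因为巴西离英国太远了。",
     "edits_count": 6, "annotator_count": 2}
  ]
}
""")

def majority_correction(instance, dimension):
    """Return the correction with the highest annotator_count for one
    dimension (1 = grammatical error correction, 0 = fluency-based)."""
    annos = [a for a in instance["sentence_annos"]
             if a["is_grammatical"] == dimension]
    if not annos:
        return None
    return max(annos, key=lambda a: a["annotator_count"])["correction"]

print(majority_correction(instance, 1))  # 我只能坐飞机去,因为巴西离英国远极了。
```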

Usage of the Evaluation Code

The submission is a text file in which each line is one corrected sentence, corresponding line by line to the instances in the test set.

  • Only one correction should be given for each instance in the test set.
  • Please segment the corrected sentences with the THULAC toolkit and submit the segmented results.
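A submission file can be produced along these lines. The segmenter is passed in as a parameter: in practice it would wrap the THULAC Python package (whose `thulac.thulac(seg_only=True)` initializer and `cut(sentence, text=True)` call return a space-separated segmentation, but require the THULAC model files), while the toy character-level lambda below only illustrates the expected one-segmented-sentence-per-line layout:

```python
def write_submission(corrections, segment, path):
    """Write one segmented correction per line, aligned with test set order.

    corrections: corrected sentences, one per test instance, in order.
    segment: callable returning the space-separated segmentation of a
             sentence; in practice a wrapper around THULAC.
    """
    with open(path, "w", encoding="utf-8") as f:
        for sentence in corrections:
            f.write(segment(sentence) + "\n")

# Illustration with a toy character-level "segmenter" (use THULAC for a
# real submission):
write_submission(["我只能坐飞机去。"], lambda s: " ".join(s), "output_file.txt")
```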

For a submitted result file output_file, the script eval.py will be called to compare it with the gold-standard file test_gold_m2:

python eval.py output_file test_gold_m2

The evaluation metric is F_0.5. Example output:

{
  "Precision": 70.63,
  "Recall": 37.04,
  "F_0.5": 59.79
}
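The three numbers above are related by the standard F_beta formula with beta = 0.5, which weights precision more heavily than recall:

```python
def f_beta(precision, recall, beta=0.5):
    """F_beta score; beta < 1 favors precision over recall."""
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

print(round(f_beta(70.63, 37.04), 2))  # 59.79, matching the example above
```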

Citation

Please cite our technical report if you use this dataset:

@article{wang-etal-2021-yaclc,
  title={YACLC: A Chinese Learner Corpus with Multidimensional Annotation},
  author={Yingying Wang and Cunliang Kong and Liner Yang and Yijun Wang and Xiaorong Lu and Renfen Hu and Shan He and Zhenghao Liu and Yun Chen and Erhong Yang and Maosong Sun},
  journal={arXiv preprint arXiv:2112.15043},
  year={2021}
}
Owner

BLCU-ICALL: ICALL Research Group at Beijing Language and Culture University