ÚFAL at MultiLexNorm 2021: Improving Multilingual Lexical Normalization by Fine-tuning ByT5
David Samuel & Milan Straka
Charles University
Faculty of Mathematics and Physics
Institute of Formal and Applied Linguistics
Paper (TODO)
Interactive demo on Google Colab
HuggingFace models (TODO)
This is the official repository for the winning entry to the W-NUT 2021: Multilingual Lexical Normalization (MultiLexNorm) shared task, which evaluates lexical-normalization systems on 12 social media datasets in 11 languages.
Our system is based on ByT5, which we first pre-train on synthetic data and then fine-tune on authentic normalization data. It achieves the best performance by a wide margin in intrinsic evaluation, and also the best performance in extrinsic evaluation through dependency parsing. In addition to these source files, we also release the fine-tuned models on HuggingFace (TODO) and an interactive demo on Google Colab.
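Once the fine-tuned models are published on HuggingFace, they should be loadable through the standard transformers seq2seq interface. The sketch below is illustrative only: the model identifier "ufal/byt5-small-multilexnorm2021-en" is a placeholder until the official checkpoints are released, and the exact input format expected by the fine-tuned models (whole sentences vs. single words with context) is described in the paper rather than prescribed by this snippet.

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder identifier; replace with the released checkpoint once available.
model_name = "ufal/byt5-small-multilexnorm2021-en"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# ByT5 operates directly on UTF-8 bytes, so no language-specific vocabulary is needed.
inputs = tokenizer("u r gr8", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))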
How to run
🐾
Clone the repository and install the Python requirements
git clone https://github.com/ufal/multilexnorm2021.git
cd multilexnorm2021
pip3 install -r requirements.txt
🐾
Initialize
Run the initialization script to download the official MultiLexNorm data together with a dump of English Wikipedia. We recommend downloading Wikipedia dumps to get clean multilingual data, but other data sources should also work.
./initialize.sh
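For reference, the MultiLexNorm datasets use the standard two-column lexical-normalization format: one token per line with the raw form and its normalization separated by a tab, and sentences separated by blank lines. The fragment below is only an illustration of the layout, not an excerpt from the downloaded data.

new	new
pix	pictures
comming	coming
tomoroe	tomorrow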
🐾
Train
To train a model for English lexical normalization, simply run the following script. Other configurations are located in the config folder.
python3 train.py --config config/en.yaml
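To train models for the other languages, point the script at the corresponding configuration file. For example, a hypothetical run for the German dataset (assuming the config folder contains config/de.yaml):

python3 train.py --config config/de.yaml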
Please cite the following publication:
@inproceedings{wnut-ufal,
    title = "{ÚFAL} at {MultiLexNorm} 2021: Improving Multilingual Lexical Normalization by Fine-tuning {ByT5}",
    author = "Samuel, David and Straka, Milan",
    booktitle = "Proceedings of the 7th Workshop on Noisy User-generated Text (W-NUT 2021)",
    year = "2021",
    publisher = "Association for Computational Linguistics",
    address = "Punta Cana, Dominican Republic"
}