Tracking Progress in Natural Language Processing

Table of contents

  • English
  • Vietnamese
  • Hindi
  • Chinese
    • For more tasks, datasets and results in Chinese, check out the Chinese NLP website.
  • French
  • Russian
  • Spanish
  • Portuguese
  • Korean
  • Nepali
  • Bengali
  • Persian
  • Turkish
  • German

This document aims to track the progress in Natural Language Processing (NLP) and give an overview of the state-of-the-art (SOTA) across the most common NLP tasks and their corresponding datasets.

It aims to cover both traditional and core NLP tasks such as dependency parsing and part-of-speech tagging as well as more recent ones such as reading comprehension and natural language inference. The main objective is to provide the reader with a quick overview of benchmark datasets and the state-of-the-art for their task of interest, which serves as a stepping stone for further research. To this end, if there is a place where results for a task are already published and regularly maintained, such as a public leaderboard, the reader will be pointed there.

If you want to find this document again in the future, just go to nlpprogress.com or nlpsota.com in your browser.

Contributing

Guidelines

Results   Results reported in published papers are preferred; an exception may be made for influential preprints.

Datasets   Datasets should have been used for evaluation in at least one published paper besides the one that introduced the dataset.

Code   We recommend adding a link to an implementation if one is available. You can add a Code column (see below) to the table if it does not exist yet. In the Code column, indicate an official implementation with Official. If an unofficial implementation is available, use Link (see below). If no implementation is available, you can leave the cell empty.

Adding a new result

If you would like to add a new result, you can just click on the small edit button in the top-right corner of the file for the respective task (see below).

(Screenshot: click on the edit button to edit the file)

This allows you to edit the file in Markdown. Simply add a row to the corresponding table in the same format. Make sure that the table stays sorted, with the best result on top. After you've made your change, check that the table still looks OK by clicking on the "Preview changes" tab at the top of the page. If everything looks good, go to the bottom of the page, where you will see the form shown below.

(Screenshot: fill out the file change information)

Add a name for your proposed change and an optional description, select "Create a new branch for this commit and start a pull request", and click on "Propose file change".

Adding a new dataset or task

To add a new dataset or task, you can follow the steps above. Alternatively, you can fork the repository. In either case, follow the steps below:

  1. If your task is completely new, create a new file and link to it in the table of contents above.
  2. If not, add your task or dataset to the respective section of the corresponding file (in alphabetical order).
  3. Briefly describe the dataset/task and include relevant references.
  4. Describe the evaluation setting and evaluation metric.
  5. Show what an annotated example of the dataset/task looks like.
  6. Add a download link if available.
  7. Copy the table below and fill in at least two results (including the state-of-the-art) for your dataset/task (change Score to the metric of your dataset). If your dataset/task has multiple metrics, add them to the right of Score.
  8. Submit your change as a pull request.
| Model | Score | Paper / Source | Code |
| ------------- | :-----: | --- | --- |
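
For illustration, a filled-in table might look like the following; the entries here are hypothetical placeholders, not real results:

| Model | Score | Paper / Source | Code |
| ------------- | :-----: | --- | --- |
| ExampleNet (Doe et al., 2020) | 92.5 | An Example Paper | Official |
| BaselineNet (Roe et al., 2019) | 90.1 | Another Example Paper | Link |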

Wish list

These are tasks and datasets that are still missing:

  • Bilingual dictionary induction
  • Discourse parsing
  • Keyphrase extraction
  • Knowledge base population (KBP)
  • More dialogue tasks
  • Semi-supervised learning
  • Frame-semantic parsing (FrameNet full-sentence analysis)

Exporting into a structured format

You can extract all the data into a structured, machine-readable JSON format with parsed tasks, descriptions and SOTA tables.

The instructions are in structured/README.md.
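
As a rough illustration, consuming the export from Python might look like the sketch below; the file name and field names used here are assumptions, so consult structured/README.md for the actual schema.

```python
# Minimal sketch: load the structured JSON export and print the top
# result for each dataset. The file name ("structured.json") and field
# names ("datasets", "sota", "rows", ...) are assumptions, not the
# documented schema -- see structured/README.md for the real one.
import json

with open("structured.json", encoding="utf-8") as f:
    tasks = json.load(f)

for task in tasks:
    for dataset in task.get("datasets", []):
        rows = dataset.get("sota", {}).get("rows", [])
        if rows:
            best = rows[0]  # tables are kept sorted, best result on top
            print(f"{task.get('task')} / {dataset.get('dataset')}: "
                  f"{best.get('model_name')}")
```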

Instructions for building the site locally

Instructions for building the website locally using Jekyll can be found here.

Comments
  • CoNLL-2003 incomparable results

    Because of the small size of the CoNLL-2003 training set, some authors incorporate the development set into the training data after tuning the hyperparameters. Consequently, not all results are directly comparable.

    Train+dev:

    • Flair embeddings (Akbik et al., 2018)
    • Peters et al. (2017)
    • Yang et al. (2017)

    Maybe those results should be marked with an asterisk.

    opened by ghaddarAbs 28
  • NLP Progress Graph

    Hi Sebastian, I love the idea behind this repo. I was wondering if we could have a graph showing the progress of different NLP tasks based on the updates to their markdown files. I have created a shell script that clones your repo locally, counts the number of commits for different files, preprocesses the result with Python/pandas, creates a bar chart from it, and uploads it to a free image-hosting service.
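
    A minimal Python sketch of the pipeline described above (the repo path, file filter, and plotting details are assumptions for illustration, not the author's actual script):

```python
# Sketch: count commits per task file in a local clone of NLP-progress
# and plot the most frequently updated files as a bar chart.
# Assumes `git` is on PATH and the repo is cloned to REPO_DIR.
import subprocess
from collections import Counter

import matplotlib.pyplot as plt

REPO_DIR = "NLP-progress"  # path to a local clone (assumption)

# One line per file touched by each commit, separated by blank lines.
log = subprocess.run(
    ["git", "-C", REPO_DIR, "log", "--name-only", "--pretty=format:"],
    capture_output=True, text=True, check=True,
).stdout

# Count how often each English task file was modified.
counts = Counter(
    line for line in log.splitlines()
    if line.startswith("english/") and line.endswith(".md")
)

files, freqs = zip(*counts.most_common(15))
plt.barh(files, freqs)
plt.xlabel("Number of commits")
plt.title("Most frequently updated task files")
plt.tight_layout()
plt.show()
```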

    Currently, it counts all commits for a specific file, but if we had commit-message guidelines for adding new results, fixing errors, and so on, perhaps with different identifiers,

    then we could count the number of times a new result has been added for an NLP task. This would help visualize the most active and fastest-improving areas of NLP research.

    Currently, the graph doesn't make much sense, but over time it will improve as we add more results.

    Also, if you think something like this could benefit the community, I can create a cron job on my PC (I don't have a server) that updates the image URL with the latest graph, which you could show on the main page.

    opened by nirmalsinghania2008 16
  • YAML - pros and cons

    I'd like to discuss here the pros and cons of using YAML going forward or whether we should stick with Markdown tables. Here are some pros and cons, mainly from @NirantK (in https://github.com/sebastianruder/NLP-progress/pull/116), @stared (in https://github.com/sebastianruder/NLP-progress/issues/43, https://github.com/sebastianruder/NLP-progress/pull/64) and myself.

    Pros:

    • Easier trend spotting in performance improvements
    • Easy to create plots and visualizations going forward
    • Data is separated from presentation

    Cons:

    • Hard for contributors, e.g. HTML omissions can't be spotted without setting up Jekyll locally
    • GitHub repo becomes useless for readers, who would have to rely exclusively on nlpprogress.com
    • Many visualizations (e.g. bar charts) based on performance numbers are not more useful than the raw tables

    Other opinions are welcome.

    opened by sebastianruder 10
  • What about other languages?

    Thanks for this work!

    These pages seem to cover the progress only for English (well, except MT). Do you have plans to include other languages?

    One extreme example is POS tagging and dependency parsing: UD has 60+ languages :) For other tasks, there is probably very limited data.

    opened by Hrant-Khachatrian 10
  • Incorrect BLEU score for English-Hindi MT System

    The BLEU score in the document is 89.35, which looks wrong to me. The referenced paper reports a BLEU score of 12.83, which itself is not state-of-the-art for this language pair.

    opened by kartikeypant 7
  • add G2P conversion task of schwa deletion to Hindi

    There's a good body of previous work on schwa deletion in NLP/CL; you can see some of it in our paper. It would be good to track the SOTA on it, since it's an important task for grapheme-to-phoneme (G2P) conversion in North Indian languages.

    opened by aryamanarora 6
  • Added new task: data-to-text generation

    I have added a new task: Data-to-Text Natural Language Generation (D2T NLG). D2T NLG differs from other NLG tasks such as MT or QA in that the input to the text generation system is a structured representation (a table, knowledge graph, or JSON) instead of unstructured text. The document provides an overview of the three most recent and popular publicly available datasets for D2T NLG. With the advancements in deep learning, several novel neural methods have been proposed that are capable of generating accurate, fluent, and diverse texts.

    opened by ashishu007 6
  • Explain relation to paperswithcode.com

    Since the inception of this great repository of state-of-the-art results, alternatives such as paperswithcode.com have gained traction. This raises the question of how useful it is to keep both resources up to date with the latest results. Could users and maintainers of this repository perhaps elaborate a bit, here and/or in the README, on how they see this resource relating to paperswithcode.com, and particularly what nlpprogress.com does well that the former does not?

    opened by cwenner 6
  • add TCAN results to LM

    To be honest, I'm a bit skeptical about their results and have asked them some questions via email. So let's put this pull request on hold for now (unless the maintainers think it's fine), and I will update it once they have answered my questions.

    opened by Separius 6
  • Add missing LM SOTA result + # params + prev SOTA

    Add the missing LM ensemble that is SOTA for PTB. Add the second-in-line LM SOTA under a strict interpretation. Add the number of parameters for LM results.

    (unsure why it lists commits that have already been merged)

    opened by cwenner 6
  • Data in YAML for structure and plots

    Related to #43.

    Right now I did a demo for CCG. I didn't work on the plot form; I just wanted to show that it is possible and easy. Also, I think the data format can be standardized, so it would be simpler to add more complicated things (e.g. further comments, links to multiple implementations, etc.).

    See files in:

    • _data - data in YAML format
    • _includes - for ways of converting data into its presentations (tables, charts, etc)
    • ccg_supertagging.md - to see how to include these

    IMHO YAML is cleaner to write and read than Markdown tables, so that is an advantage on its own. In my experience, contributors (ones who use GitHub) have not the slightest problem using YAML (see https://p.migdal.pl/interactive-machine-learning-list/).

    Right now I generate the tables through a Liquid template.
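
    For concreteness, here is a minimal Python sketch of the idea; the YAML layout and field names below are assumptions for illustration, not the format used in this PR:

```python
# Sketch: render a YAML results file as a Markdown table, keeping the
# data separate from its presentation. The YAML layout is hypothetical.
import yaml  # pip install pyyaml

RESULTS_YAML = """
- model: Example supertagger (Doe et al., 2018)
  accuracy: 94.7
  paper: https://example.org/paper
- model: Baseline tagger (Roe et al., 2016)
  accuracy: 93.1
  paper: https://example.org/baseline
"""

rows = yaml.safe_load(RESULTS_YAML)
rows.sort(key=lambda r: r["accuracy"], reverse=True)  # best result on top

print("| Model | Accuracy | Paper / Source |")
print("| --- | :---: | --- |")
for r in rows:
    print(f"| {r['model']} | {r['accuracy']} | {r['paper']} |")
```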

    opened by stared 6
  • Pull request with new emotion detection dataset

    There seem to be some conflicts; I am not resolving them myself, as that might remove some code. Could you kindly resolve them and merge my request?

    opened by KhondokerIslam 0
  • Update paraphrase-generation.md

    MULTIPIT, MULTIPITCROWD and MULTIPITEXPERT

    Past efforts on creating paraphrase corpora consider only one paraphrase criterion, without taking into account the fact that the desired "strictness" of semantic equivalence in paraphrases varies from task to task (Bhagat and Hovy, 2013; Liu and Soh, 2022). For example, for the purpose of tracking unfolding events, "A tsunami hit Haiti." and "303 people died because of the tsunami in Haiti" are sufficiently close to be considered paraphrases; whereas for paraphrase generation, the extra information "303 people dead" in the latter sentence may lead models to learn to hallucinate and generate unfaithful content. In this paper, the authors present an effective data collection and annotation method to address these issues.

    MULTIPIT is a Multi-Topic Paraphrase in Twitter corpus that consists of a total of 130k sentence pairs with crowdsourced (MULTIPITCROWD) and expert (MULTIPITEXPERT) annotations. MULTIPITCROWD is a large crowdsourced set of 125K sentence pairs that is useful for tracking information on Twitter.

    | Model | F1 | Paper / Source | Code |
    | ------------- | :-----: | --- | --- |
    | DeBERTaV3large | 92.00 | Improving Large-scale Paraphrase Acquisition and Generation | Unavailable |

    MULTIPITEXPERT is an expert-annotated set of 5.5K sentence pairs using a stricter definition that is more suitable for acquiring paraphrases for generation purposes.

    | Model | F1 | Paper / Source | Code |
    | ------------- | :-----: | --- | --- |
    | DeBERTaV3large | 83.20 | Improving Large-scale Paraphrase Acquisition and Generation | Unavailable |

    opened by adrienpayong 0
  • add this to machine translation. Is it okay?

    opened by adrienpayong 0