Datasets of Automatic Keyphrase Extraction

Overview

This repository contains 20 annotated datasets for Automatic Keyphrase Extraction made available by the research community. Below are the datasets and the original papers that proposed them. If you know of more datasets and want to contribute, please notify me. You might also want to have a look at Florian Boudin's keyphrase extraction repository.

| Dataset | Language | Type of Doc | Domain | #Docs | #Gold Keys (per doc) | #Tokens per doc | Absent GoldKey |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 110-PT-BN-KP | PT | News | Misc. | 110 | 2610 (23.73) | 304.00 | 2.5% |
| 500N-KPCrowd-v1.1 | EN | News | Misc. | 500 | 24459 (48.92) | 408.33 | 13.5% |
| Inspec | EN | Abstract | Comp. Science | 2000 | 29230 (14.62) | 128.20 | 37.7% |
| Krapivin2009 | EN | Paper | Comp. Science | 2304 | 14599 (6.34) | 8040.74 | 15.3% |
| Nguyen2007 | EN | Paper | Comp. Science | 209 | 2369 (11.33) | 5201.09 | 17.8% |
| PubMed | EN | Paper | Comp. Science | 500 | 7620 (15.24) | 3992.78 | 60.2% |
| Schutz2008 | EN | Paper | Comp. Science | 1231 | 55013 (44.69) | 3901.31 | 13.6% |
| SemEval2010 | EN | Paper | Comp. Science | 243 | 4002 (16.47) | 8332.34 | 11.3% |
| SemEval2017 | EN | Paragraph | Misc. | 493 | 8969 (18.19) | 178.22 | 0.0% |
| WikiNews | FR | News | Misc. | 100 | 1177 (11.77) | 293.52 | 5.0% |
| cacic | ES | Paper | Comp. Science | 888 | 4282 (4.82) | 3985.84 | 2.2% |
| citeulike180 | EN | Paper | Misc. | 183 | 3370 (18.42) | 4796.08 | 32.2% |
| fao30 | EN | Paper | Agriculture | 30 | 997 (33.23) | 4777.70 | 41.7% |
| fao780 | EN | Paper | Agriculture | 779 | 6990 (8.97) | 4971.79 | 36.1% |
| kdd | EN | Paper | Comp. Science | 755 | 3831 (5.07) | 75.97 | 53.2% |
| pak2018 | PL | Abstract | Misc. | 50 | 232 (4.64) | 97.36 | 64.7% |
| theses100 | EN | Msc/Phd Thesis | Misc. | 100 | 767 (7.67) | 4728.86 | 47.6% |
| wicc | ES | Paper | Comp. Science | 1640 | 7498 (4.57) | 1955.56 | 2.7% |
| wiki20 | EN | Research Report | Comp. Science | 20 | 730 (36.50) | 6177.65 | 51.8% |
| www | EN | Paper | Comp. Science | 1330 | 7711 (5.80) | 84.08 | 55.0% |
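
The statistics above can be recomputed from the distributed files. Below is a minimal sketch, assuming each dataset unzips into a docsutf8/ folder of plain-text documents and a keys/ folder of newline-separated gold keyphrases with matching file names (the path "Inspec" is only illustrative); exact figures may differ slightly from the table depending on tokenization and on whether stemming is applied when checking for absent keyphrases.

```python
# Minimal sketch: recompute the summary statistics shown in the table above.
# Assumed layout: <dataset>/docsutf8/<id>.txt and <dataset>/keys/<id>.key
from pathlib import Path

def dataset_stats(root):
    root = Path(root)
    n_docs = n_keys = n_tokens = n_absent = 0
    for doc_path in sorted((root / "docsutf8").glob("*.txt")):
        text = doc_path.read_text(encoding="utf8", errors="ignore")
        key_path = root / "keys" / (doc_path.stem + ".key")
        keys = [k.strip() for k in key_path.read_text(encoding="utf8", errors="ignore").splitlines() if k.strip()]
        n_docs += 1
        n_keys += len(keys)
        n_tokens += len(text.split())
        # "Absent GoldKey": gold keyphrases that do not appear verbatim in the document text
        lowered = text.lower()
        n_absent += sum(1 for k in keys if k.lower() not in lowered)
    return {
        "docs": n_docs,
        "keys_per_doc": n_keys / n_docs,
        "tokens_per_doc": n_tokens / n_docs,
        "absent_goldkey_pct": 100.0 * n_absent / n_keys,
    }

print(dataset_stats("Inspec"))  # e.g. after unzipping Inspec.zip
```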



110-PT-BN-KP

Dataset: 110-PT-BN-KP

Cite: Supervised Topical Key Phrase Extraction of News Stories using Crowdsourcing, Light Filtering and Co-reference Normalization

Description: The 110-PT-BN-KP is a TV Broadcast News (BN) dataset that contains 110 transcription text documents from 8 broadcast news programs of the European Portuguese ALERT BN database, covering politics, sports, finance, and other broadcast news topics. After the speech-to-text transcription, each news story was manually reviewed to fix any segmentation errors, and the gold keywords were created by asking a single tagger to extract all keywords that summarize the document content.


500N-KPCrowd-v1.1

Dataset: 500N-KPCrowd-v1.1

Cite: Keyphrase cloud generation of broadcast news

Description: 500N-KPCrowd-v1.1 is a broadcast news transcription dataset. It consists of 500 English broadcast news stories from 10 different categories (art and culture; business; crime; fashion; health; politics us; politics world; science; sports; technology), with 50 documents per category. The ground truth was built using Amazon's Mechanical Turk service to recruit and manage taggers. Multiple annotators were required to look at the same news story and assign a set of keywords from the text itself. The final ground truth consists of the keywords selected by at least 90% of the taggers.
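
As an illustration of this aggregation scheme (not the authors' original code), the sketch below keeps only the keyphrases proposed by at least 90% of the taggers; the per-annotator keyword lists are hypothetical.

```python
from collections import Counter

def aggregate_gold(annotations, threshold=0.9):
    """Keep keyphrases proposed by at least `threshold` of the annotators."""
    votes = Counter(phrase.lower() for keys in annotations for phrase in set(keys))
    min_votes = threshold * len(annotations)
    return sorted(p for p, c in votes.items() if c >= min_votes)

# Hypothetical keyword lists from three taggers for one news story
annotations = [
    ["World Cup", "football", "qualifiers"],
    ["world cup", "football"],
    ["World Cup", "football", "injury"],
]
print(aggregate_gold(annotations))  # ['football', 'world cup']
```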


cacic

Dataset: cacic

Cite: Keyword Identification in Spanish Documents using Neural Networks

Description: The cacic collection is a Spanish dataset formed by scientific articles published between 2005 and 2013. It consists of 888 scientific papers published in the Argentine Congress of Computer Science (CACIC).


citeulike180

Dataset: citeulike180

Cite: Human-competitive Tagging Using Automatic Keyphrase Extraction

Description: The citeulike180 dataset is based on the CiteULike.org platform, which organizes academic citations. The CiteULike.org corpus is a freely available full-text paper collection that contains information about the users who tagged the documents. The dataset is based on a subset of CiteULike.org containing documents that have been indexed with at least three keywords on which at least two users have agreed. As well as filtering the document set, the dataset only considers annotators who have at least two additional co-annotators tagging the same common document. The result is a set of 180 documents indexed by 332 taggers, where most documents are related to the area of bioinformatics.


fao30

Dataset: fao30

Cite: Domain‐independent automatic keyphrase indexing with small training sets

Description: The fao30 dataset is based on agricultural documents obtained from the Food and Agriculture Organization (FAO) of the United Nations, with 30 documents. It consists of full-text documents randomly selected from the FAO's repository, where the keywords were manually assigned by six professional annotators at FAO.


fao780

Dataset: fao780

Cite: Domain‐independent automatic keyphrase indexing with small training sets

Description: The fao780 dataset is based on agricultural documents obtained from the Food and Agriculture Organization (FAO) of the United Nations, with 780 documents. It consists of full-text documents randomly selected from the FAO's repository, where the keywords were manually tagged by professional FAO staff with terms from the Agrovoc thesaurus.


Inspec

Dataset: Inspec

Cite: Improved automatic keyword extraction given more linguistic knowledge

Description: Inspec consists of 2,000 abstracts of scientific journal papers from Computer Science collected between 1998 and 2002. Each document has two sets of keywords assigned: the controlled keywords, which are manually assigned keywords restricted to the Inspec thesaurus and may not appear in the document, and the uncontrolled keywords, which are freely assigned by the editors, i.e., they are not restricted to the thesaurus or to the document. In this repository, the union of both sets is used as the ground truth.
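
A minimal sketch of how the two annotation sets can be merged into a single gold standard, assuming the controlled and uncontrolled keywords have already been read into Python lists (the example phrases are hypothetical):

```python
def merge_gold(controlled, uncontrolled):
    """Union of controlled (thesaurus) and uncontrolled (free) keywords, case-insensitive."""
    seen, merged = set(), []
    for phrase in controlled + uncontrolled:
        norm = " ".join(phrase.lower().split())
        if norm not in seen:
            seen.add(norm)
            merged.append(phrase)
    return merged

controlled = ["neural networks", "Information retrieval"]
uncontrolled = ["keyword extraction", "information retrieval"]
print(merge_gold(controlled, uncontrolled))
# ['neural networks', 'Information retrieval', 'keyword extraction']
```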


kdd

Dataset: kdd

Cite: Extracting Keyphrases from Research Papers using Citation Networks

Description: The kdd collection is based on the abstracts of papers from the ACM Conference on Knowledge Discovery and Data Mining (KDD) published during the period 2004-2014, with 755 documents. The gold keywords of these papers are the author-labeled terms.


Krapivin2009

Dataset: Krapivin2009

Cite: Large dataset for keyphrases extraction

Description: Krapivin2009 is the biggest dataset in terms of documents, with 2,304 full papers from the Computer Science domain published by ACM in the period from 2003 to 2005. The papers were downloaded from the CiteSeerX Autonomous Digital Library, and each one has its keywords assigned by the authors and verified by the reviewers.


Nguyen2007

Dataset: Nguyen2007

Cite: Keyphrase Extraction in Scientific Publications

Description: Nguyen2007 is a dataset composed of 211 scientific conference papers. The gold keywords were manually assigned by student volunteers, each of whom was given three papers to read. The keywords assigned by the authors of each paper were hidden from the volunteers to avoid bias.


pak2018

Dataset: pak2018

Cite: YAKE! Keyword Extraction from Single Documents using Multiple Local Features

Description: pak2018 is a Polish dataset formed by 50 abstracts of journal articles on technical topics collected from Measurement Automation and Monitoring (in Polish: "Pomiary, Automatyka, Kontrola"). The gold keywords are the author-assigned ones, resulting in 2-6 keywords per document.


PubMed

Dataset: PubMed

Cite: The NLM Indexing Initiative

Description: The PubMed dataset is based on full-text papers collected from PubMed Central, which comprises over 26 million citations of biomedical literature from MEDLINE, life science journals, and online books. It consists of 500 papers selected from that source. The gold keywords are the Medical Subject Headings (MeSH) terms, the controlled-vocabulary thesaurus used for indexing PubMed articles.


Schutz2008

Dataset: Schutz2008

Cite: Keyphrase Extraction from Single Documents in the Open Domain Exploiting Linguistic and Statistical Methods

Description: The Schutz2008 dataset is based on full-text papers collected from PubMed Central, which comprises over 26 million citations of biomedical literature from MEDLINE, life science journals, and online books. It consists of 1,231 papers selected from PubMed Central, distributed across 254 different journals ranging from Abdominal Imaging to the World Journal of Urology. The keywords assigned by the authors are hidden in the article and used as gold keywords.


SemEval2010

Dataset: SemEval2010

Cite: Semeval-2010 task 5: Automatic keyphrase extraction from scientific articles

Description: SemEval2010 consists of 244 full scientific papers extracted from the ACM Digital Library (one of the most popular datasets previously used for keyphrase extraction evaluation), each ranging from 6 to 8 pages and belonging to one of four computer science research areas (distributed systems; information search and retrieval; distributed artificial intelligence – multiagent systems; social and behavioral sciences – economics). Each paper has an author-assigned set of keywords (which are part of the original PDF file) and a set of keywords assigned by professional editors, both of which may or may not appear explicitly in the text.


SemEval2017

Dataset: SemEval2017

Cite: Semeval 2017 task 10: Scienceie-extracting keyphrases and relations from scientific publications

Description: SemEval2017 consists of 500 paragraphs selected from 500 ScienceDirect journal articles, evenly distributed among the domains of Computer Science, Material Sciences, and Physics. Each text has a number of keywords selected by one undergraduate student and one expert annotator; the expert's annotation is prioritized whenever the two annotators disagree. The original task was to extract keyphrases and relations from scientific publications.


theses100

Dataset: theses100

Cite: Originally downloaded from the zelandiya GitHub account

Description: The theses100 dataset consists of 100 full master's and Ph.D. theses from the University of Waikato, New Zealand. The theses cover quite different domains, ranging from chemistry, computer science, and economics to psychology, philosophy, history, and others.


wicc

Dataset: wicc

Cite: Keyword Identification in Spanish Documents using Neural Networks

Description: The wicc dataset is composed of 1,640 scientific articles published between 1999 and 2012 in the Workshop of Researchers in Computer Science (WICC).


wiki20

Dataset: wiki20

Cite: Topic indexing with Wikipedia

Description: wiki20 consists of 20 English technical research reports covering different aspects of computer science. Fifteen teams, each consisting of two senior computer science undergraduates, assigned keywords to each report using Wikipedia article titles as the candidate vocabulary. The teams were instructed to assign around 5 keywords to each document; each team assigned 5.7 keywords on average.


WikiNews

Dataset: WikiNews

Cite: TopicRank: Graph-Based Topic Ranking for Keyphrase Extraction

Description: WikiNews is a French corpus created from the French version of WikiNews. It contains 100 news articles published between May 2012 and December 2012, manually annotated by at least three students.


www

Dataset: www

Cite: Extracting Keyphrases from Research Papers using Citation Networks

Description: The www collection is based on the abstracts of papers from the World Wide Web Conference (WWW) published during the period 2004-2014, with 1,330 documents. The gold keywords of these papers are the author-labeled terms.

