A toolkit for document-level event extraction, containing some SOTA model implementations

Overview

Document-level Event Extraction via Heterogeneous Graph-based Interaction Model with a Tracker

Source code for the ACL-IJCNLP 2021 long paper: Document-level Event Extraction via Heterogeneous Graph-based Interaction Model with a Tracker.

Our code is based on Doc2EDAG.

0. Introduction

Document-level event extraction aims to extract events from an entire document. Unlike sentence-level event extraction, the arguments of an event record may be scattered across sentences, which requires a comprehensive understanding of the cross-sentence context. Moreover, a document may express several correlated events simultaneously, and recognizing the interdependency among them is fundamental to successful extraction. To tackle these two challenges, we propose a novel heterogeneous Graph-based Interaction Model with a Tracker (GIT). A graph-based interaction network with different heterogeneous edges captures the global context for event arguments scattered across sentences. We also decode event records with a Tracker module, which tracks the extracted event records so that the interdependency among events is taken into consideration. Our approach outperforms state-of-the-art methods, especially on cross-sentence events and multi-event scenarios.
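
To make the graph-interaction idea concrete, below is a minimal, self-contained sketch of how a heterogeneous document graph over sentence and mention nodes could be assembled and passed through one relation-aware message-passing layer, using the DGL dependency listed in Section 3. The node layout, edge-type names, and dimensions are illustrative assumptions, not the released implementation (the full model lives in dee/dee_model.py):

import torch
import dgl
import dgl.nn as dglnn

# Toy two-sentence document: node ids 0-1 stand for sentences, 2-3 for
# entity mentions (an assumed layout; GIT's actual graph construction differs).
g = dgl.heterograph({
    ('node', 'sent-sent', 'node'): (torch.tensor([0, 1]), torch.tensor([1, 0])),
    ('node', 'sent-mention', 'node'): (torch.tensor([0, 1]), torch.tensor([2, 3])),
    ('node', 'mention-mention', 'node'): (torch.tensor([2, 3]), torch.tensor([3, 2])),
})
feats = torch.randn(g.num_nodes('node'), 768)  # stand-in for encoder outputs

# One message-passing layer: a separate GraphConv per heterogeneous edge type,
# with the per-type results summed into a single node representation.
conv = dglnn.HeteroGraphConv(
    {etype: dglnn.GraphConv(768, 768, allow_zero_in_degree=True)
     for etype in g.etypes},
    aggregate='sum')
out = conv(g, {'node': feats})['node']  # updated, context-aware node states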

  • Model architecture overview (figure)

  • Overall results (figure)

1. Package Description

GIT/
├── dee/
│   ├── __init__.py
│   ├── base_task.py
│   ├── dee_task.py
│   ├── ner_task.py
│   ├── dee_helper.py: data feature construction and evaluation utils
│   ├── dee_metric.py: data evaluation utils
│   ├── config.py: process command-line arguments
│   ├── dee_model.py: GIT model
│   ├── ner_model.py
│   ├── transformer.py: transformer module
│   └── utils.py: utils
├── run_dee_task.py: the main entry
├── train_multi.sh
├── run_train.sh: script for training (including evaluation)
├── run_eval.sh: script for evaluation
├── Exps/: experiment outputs
├── Data.zip
├── Data/: obtained by unzipping Data.zip
├── LICENSE
└── README.md

2. Environments

  • python (3.6.9)
  • cuda (11.1)
  • Ubuntu 18.04 (kernel 5.4.0-73-generic)

3. Dependencies

  • numpy (1.19.5)
  • torch (1.8.1+cu111)
  • pytorch-pretrained-bert (0.4.0)
  • dgl-cu111 (0.6.1)
  • tensorboardX (2.2)

PS: The environments and dependencies listed here differ from those used in our paper, so the results may differ slightly.
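
A minimal installation sketch for these pinned versions, assuming a CUDA 11.1 machine (the +cu111 torch wheel is served from the official PyTorch find-links index):

>> pip install numpy==1.19.5 pytorch-pretrained-bert==0.4.0 dgl-cu111==0.6.1 tensorboardX==2.2
>> pip install torch==1.8.1+cu111 -f https://download.pytorch.org/whl/torch_stable.html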

4. Preparation

  • Unzip Data.zip to obtain the Data folder, which contains the training/dev/test data (see the command below).
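
For example, from the repository root:

>> unzip Data.zip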

5. Training

>> bash run_train.sh
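
Under the hood, run_train.sh launches run_dee_task.py (the main entry). A rough sketch of a direct invocation follows; the argument names are taken from run_dee_task.py, while the specific values here are illustrative rather than the exact contents of the script:

>> python -u run_dee_task.py \
     --data_dir ./Data \
     --exp_dir ./Exps \
     --task_name GIT \
     --num_train_epochs 50 \
     --train_batch_size 64 \
     --gradient_accumulation_steps 8 \
     --eval_batch_size 2 \
     --save_cpt_flag True \
     --cpt_file_name GIT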

6. Evaluation

>> bash run_eval.sh

(Evaluation is also performed automatically after training; experiment outputs are written under Exps/.)
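
The prediction files produced during evaluation (e.g. ./Exps/try/Output/dee_eval.test.pred_span.GIT.5.pkl, a path taken from the comments below) are Python pickle files. A minimal inspection sketch, assuming it is run from the repository root so that the dee classes stored in the pickle can be resolved:

import pickle

# Illustrative path; the actual file name depends on --task_name, the data
# split, and the epoch number.
path = './Exps/try/Output/dee_eval.test.pred_span.GIT.5.pkl'
with open(path, 'rb') as f:
    records = pickle.load(f)
print(len(records))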

7. License

This project is licensed under the MIT License - see the LICENSE file for details.

8. Citation

If you use this work or code, please kindly cite the following paper:

@inproceedings{xu-etal-2021-git,
    title = "Document-level Event Extraction via Heterogeneous Graph-based Interaction Model with a Tracker",
    author = "Runxin Xu  and
      Tianyu Liu  and
      Lei Li and
      Baobao Chang",
    booktitle = "The Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2021)",
    year = "2021",
    publisher = "Association for Computational Linguistics",
}
Comments
  • A query about role embedding

    Hi, when I read the paper I noticed the term "role embedding". I have read your code carefully, but I still cannot understand how it is obtained. Is it embedded via role_idx mapping? Moreover, I would appreciate it if you could tell me where it is implemented in the code. Thanks!

    opened by chensming 7
  • Why does the NER module in GIT use the encoder-decoder transformer rather than the encoder part only (like BERT in Doc2EDAG)?

    Thanks for the excellent paper, but I have a question about the experimental setting. As a sequence labeling task, NER is usually solved with a transformer encoder (like BERT); for example, the authors of Doc2EDAG use BERT as the first-stage backbone. However, the paper states that the vanilla transformer (encoder-decoder structure) is used in the NER module, which confuses me. I am wondering what the decoder part of the transformer is used for. Thanks.

    opened by YuanEric88 2
  • A problem occurred at runtime

    subprocess.CalledProcessError: Command '['/opt/anaconda3/envs/Doc2edag/bin/python', '-u', 'run_dee_task.py', '--local_rank=1', '--resume_latest_cpt', 'True', '--save_cpt_flag', 'True', '--data_dir', './Data', '--exp_dir', './Exps', '--task_name', 'try', '--num_train_epochs', '50', '--train_batch_size', '64', '--gradient_accumulation_steps', '8', '--eval_batch_size', '2', '--cpt_file_name', 'GIT']' returned non-zero exit status 1.

    May I ask what causes this problem? In addition, I have successfully reproduced Doc2EDAG, but the extracted event arguments are all numbers and cannot be visualized. How do you solve this problem?

    opened by jl109 2
  • Released code fails to achieve the performance in the paper

    We find the released code fails to achieve the performance reported in the paper. We observe that max_sent_num in your code is 32, while it is 64 in the previous Doc2EDAG. We wonder if this is the reason?

    opened by GangZhao98 1
  • How to parse the result files into the dataset format

    Hello, after running your model I obtained result files such as ./Exps/try/Output/dee_eval.test.pred_span.GIT.5.pkl, where one record looks like:

    '''
    (0, [0, 0, 1, 1, 0], [None, None, [[None, None, None, None, None, None]], [[(3330, 1290), None, None, None, None, None]], None], DocSpanInfo(span_token_tup_list=[(121, 121, 121, 127, 121, 126), (3943, 3862, 5500, 819), (4507,), (3330, 1290)], span_dranges_list=[[(0, 5, 11)], [(0, 16, 20)], [(9, 0, 1)], [(12, 10, 12), (13, 8, 10)]], span_mention_range_list=[(0, 1), (1, 2), (2, 3), (3, 5)], mention_drange_list=[(0, 5, 11), (0, 16, 20), (9, 0, 1), (12, 10, 12), (13, 8, 10)], mention_type_list=[1, 3, 5, 7, 7], event_dag_info=[None, None, None, None, [{(): {None}}, {(None,): {None}}, {(None, None): {None}}, {(None, None, None): {None}}, {(None, None, None, None): {None}}, {(None, None, None, None, None): {None}}, {(None, None, None, None, None, None): {None}}, {(None, None, None, None, None, None, None): {None}}, {(None, None, None, None, None, None, None, None): {None}}]], missed_sent_idx_list=[1, 4, 7, 8, 9, 10, 12, 13, 14, 16, 19]), [None, None, [(None, None, None, None, None, None)], [(3, None, None, None, None, None)], None])
    '''

    How can I parse the data above into dataset-style records like the following?

    '''
    "recguid_eventname_eventdict_list": [
        [
            0,
            "EquityPledge",
            {
                "Pledger": "李华青",
                "PledgedShares": "1188600股",
                "Pledgee": "海通证券股份有限公司",
                "TotalHoldingShares": "22619999股",
                "TotalHoldingRatio": "6.41%",
                "TotalPledgedShares": "18200000股",
                "StartDate": "2018年9月6日",
                "EndDate": null,
                "ReleasedDate": null
            }
        ],
        [
            1,
            "EquityPledge",
            {
                "Pledger": "李华青",
                "PledgedShares": "12151000股",
                "Pledgee": "海通证券股份有限公司",
                "TotalHoldingShares": null,
                "TotalHoldingRatio": "6.41%",
                "TotalPledgedShares": "12151000股",
                "StartDate": "2017年12月7日",
                "EndDate": null,
                "ReleasedDate": null
            }
        ]
    ]
    '''

    Is there any code for parsing the results that you could provide? Thanks!

    opened by jiayuchennlp 2