Code for the paper "Document-Level Argument Extraction by Conditional Generation" (NAACL 2021).

Overview
Comments
  • data process

    Is the data you provide correct? When I use the preprocessed_KAIROS file, I can't find the following items, which are used in the dataloader: 'input_token_ids', 'input_attn_mask', 'tgt_token_ids', 'tgt_attn_mask', 'doc_key'.
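
    A quick way to check what a preprocessed file actually contains (a debugging sketch, assuming the file is JSON-lines as the .jsonl suffix suggests):

    import json

    # Print the keys of the first record so they can be compared against
    # what my_collate in src/genie/data.py expects: 'doc_key',
    # 'input_token_ids', 'input_attn_mask', 'tgt_token_ids', 'tgt_attn_mask'.
    with open("preprocessed_KAIROS/test.jsonl") as f:
        first = json.loads(f.readline())
    print(sorted(first.keys()))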

    opened by Hou-jing 6
  • TapKey Model Missing

    Hello, I really like this project! I think the TapKey model for event trigger detection is not included in the repository, and I would like to use it. Of course, I could just be missing it. Let me know. Thanks so much!

    opened by helblazer811 3
  • RAMS dataset missing test files

    In scripts/test_rams.sh, when running in head eval mode, the input file "data/RAMS_1.0/data/test_head.jsonlines" is missing from the downloaded RAMS dataset. The file "data/RAMS_1.0/data/test_head_coref.jsonlines" is also missing.

    Could you upload the above two missing files? Thanks.

    opened by Sethcat 3
  • The Length of Document

    Hi, thanks for your nice work. When analyzing the data, I found that many documents are very long. In your code, MAX_LENGTH is 424. However, in Figure 4 of the paper, the distance of many informative arguments is greater than MAX_LENGTH, so many arguments are not even fed into the encoder?
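
    A quick way to measure how many documents exceed the encoder budget (a sketch; that each line of the WikiEvents jsonl carries a 'tokens' field is an assumption):

    import json
    from transformers import BartTokenizer

    MAX_LENGTH = 424  # value taken from the repo's code
    tok = BartTokenizer.from_pretrained("facebook/bart-large")

    over = total = 0
    with open("data/wikievents/test.jsonl") as f:
        for line in f:
            doc = json.loads(line)
            # 'tokens' as the whitespace-token field is an assumption
            n = len(tok(" ".join(doc["tokens"]))["input_ids"])
            total += 1
            over += n > MAX_LENGTH
    print(f"{over}/{total} documents exceed {MAX_LENGTH} subwords")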

    opened by Wangpeiyi9979 3
  • 'BartConstrainedGen' object has no attribute 'postprocess_next_token_scores'

    Thanks for sharing the code!

    I could successfully train the model on the WikiEvents dataset, but I got an error saying "'BartConstrainedGen' object has no attribute 'postprocess_next_token_scores'" when running scripts/test_kairos.sh.

    It seems the postprocess_next_token_scores method is not included in the BartConstrainedGen class (src/genie/constrained_gen.py). Is something missing from the code, or is there another cause? Looking forward to your reply!
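
    For context: postprocess_next_token_scores was a method of GenerationMixin in older transformers releases and was removed in the 4.x line, so this error usually means the installed transformers version is newer than the one the repo targets. Downgrading transformers is the cleanest fix; alternatively, a minimal stand-in can be pasted into BartConstrainedGen (a hedged sketch under that version assumption, mirroring the old 3.x signature, not the repo's own code):

    # Paste into BartConstrainedGen in src/genie/constrained_gen.py.
    def postprocess_next_token_scores(self, scores, input_ids, no_repeat_ngram_size,
                                      bad_words_ids, cur_len, min_length, max_length,
                                      eos_token_id, repetition_penalty, batch_size,
                                      num_beams):
        # Minimal stand-in: only enforce min_length by blocking EOS early;
        # repetition penalty and n-gram blocking are intentionally skipped.
        if eos_token_id is not None and cur_len < min_length:
            scores[:, eos_token_id] = -float("inf")
        return scores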

    opened by Sethcat 3
  • Only 10 F1 score on WikiEvents dataset

    Hi, I tried to follow scripts/train_kairos.sh and scripts/test_kairos.sh but only got low performance, as follows:

    Role identification: P: 16.88, R: 4.456, F: 7.18
    Role: P: 15.58, R: 4.21, F: 6.63
    Coref Role identification: P: 19.48, R: 5.26, F: 8.29
    Coref Role: P: 15.58, R: 4.21, F: 6.63

    Even when I train for more epochs, I can only get an F1 score around 10. Is anything going wrong?

    By the way, I failed to download the checkpoints you shared on S3 due to a network error. Is there any other way to acquire these files?

    Thanks.

    opened by Changhy1996 2
  • Testing the checkpoint on WikiEvents dataset

    Hi, could you quickly summarize the steps required to test the downloaded checkpoint on the WikiEvents dataset?

    As I have observed, the WikiEvents dataset is actually referred to as KAIROS in some parts of the code; it also uses the KAIROS data module, which requires, for example, the test file to be located at preprocessed_KAIROS/test.jsonl.

    I did the following steps:

    1. I have downloaded the WikiEvents dataset from S3 and stored it at data/wikievents.
    2. I have downloaded the checkpoints from S3, which are stored at checkpoints/Wikievents/ (note that the directory contains both epoch=1-v0.ckpt and epoch=2-v0.ckpt).
    3. I had to add the "--coref_dir" argument to scripts/test_KAIROS.sh, as it refers to a non-existing directory by default.
    4. The command for "train.py" is the following:
    python train.py --model=constrained-gen --ckpt_name=WikiEvents-pred \
        --load_ckpt=checkpoints/WikiEvents/epoch=2-v0.ckpt \
        --dataset=KAIROS \
        --eval_only \
        --train_file=data/wikievents/train.jsonl \
        --val_file=data/wikievents/dev.jsonl \
        --test_file=data/wikievents/test.jsonl \
        --coref_dir=data/wikievents/coref \
        --train_batch_size=4 \
        --eval_batch_size=4 \
        --learning_rate=3e-5 \
        --accumulate_grad_batches=4 \
        --num_train_epochs=3
    

    Note that this throws an error, as it still tries to load the test file from "preprocessed_KAIROS/test.jsonl".

    5. Hoping to fix the issue, I copied data/wikievents/ to ./preprocessed_KAIROS/. Unfortunately, I get the following error:

      File "/home/patrik/gen-arg/src/genie/data.py", line 15, in my_collate                                                
        doc_keys = [ex['doc_key'] for ex in batch]                                                                         
      File "/home/patrik/gen-arg/src/genie/data.py", line 15, in <listcomp>                                                
        doc_keys = [ex['doc_key'] for ex in batch]                                                                         
    KeyError: 'doc_key'
    

    Do you maybe have an idea about what I am doing wrong?

    Best, Patrik

    opened by pzajec 2
  • Access denied when downloading checkpoints from S3

    Hi!

    When executing $ aws s3 ls s3://gen-arg-data/checkpoints/ or $ aws s3 cp s3://gen-arg-data/checkpoints/ ./ --recursive, I get the following error:

    "An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied".

    I am not sure whether the issue is on my side?
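
    If the bucket is meant to be publicly readable (an assumption about its policy, not something confirmed here), anonymous access is worth a try, since it bypasses local credentials that may be causing the rejection:

    import boto3
    from botocore import UNSIGNED
    from botocore.client import Config

    # Unsigned (anonymous) client: this works only if the bucket policy
    # allows public reads; otherwise the same AccessDenied is returned.
    s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))
    resp = s3.list_objects_v2(Bucket="gen-arg-data", Prefix="checkpoints/")
    for obj in resp.get("Contents", []):
        print(obj["Key"])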

    opened by pzajec 2
  • The event arguments annotation.

    Since there are coreferential entity mentions in a document, what is the principle for deciding which entity mention should be annotated as the argument of an event? Besides, I found entity coreference clusters stored in split.jsonlines, but I cannot find the event coreference clusters. Where are they stored?

    opened by taolusi 1
  • Where is the "restrict the vocabulary of words to Vc" mentioned in the paper reflected in the code?

    Hi, thanks for sharing the code!

    I would like to ask where the "restrict the vocabulary of words to Vc" mentioned in the paper is reflected in the code.

    In the code, it seems that the token with the highest probability is taken as the result by default.

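    A generic illustration of such a restriction (a sketch of the usual masking technique, not the repo's exact code): before taking the argmax, logits outside the allowed set Vc are set to -inf so only permitted tokens can be generated.

    import torch

    def restrict_to_vocab(logits: torch.Tensor, allowed_ids: list) -> torch.Tensor:
        # Mask every token outside Vc with -inf; softmax then assigns them
        # zero probability, so greedy/beam search can only pick allowed ids.
        mask = torch.full_like(logits, float("-inf"))
        mask[..., allowed_ids] = 0.0
        return logits + mask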

    opened by chennuo666 1
  • tgr-pred-file

    Hi, thank you for this amazing work. When we try to test the model with pipeline_scorer, it requires a tgr-pred-file. How do we obtain this file for the KAIROS dataset?

    opened by fatimashiri 1
  • Clarification needed on the implementation of Equation 4 of the paper

    Hi,

    Thank you for sharing your amazing work! I need some clarification regarding the implementation of Equation 4 of the paper. From an initial reading of the code, it appears to me that the type clarification statements weren't used in the implementation.

    Am I missing anything? Any help would be much appreciated!

    opened by abhik1505040 1
  • Share pretrained class vectors and tagger checkpoints

    Thank you for the great work. Could you please share the pretrained class vectors and checkpoints for the tagger, for example all_class_vec_KAIROS.pt? Also, I cannot quite figure out how to reproduce zero-shot event extraction (Section 4.4 of your paper). What should the process (command line with arguments) be to extract a new event? Alternatively, if I add more events to the KAIROS ontology, how could I fine-tune for them? As I understand it, the tagger checkpoint is created by train_tagger.py; then I'd need to use it to classify documents and add event types to the training/dev/test data. Are there scripts for that?

    opened by AlecS12 0
  • Multiple arguments of the same argument role

    Hi. I'm dealing with a dataset that has several argument roles, and each role might have multiple arguments. For example, in '<arg1>, <arg2>, <arg3>, <arg4>, <arg5> has participate in an military activity on <arg6> ...', arg1 to arg5 are the same role but different arguments: they all have the role 'countries'. The problem is that each example in my dataset may have a different number of 'countries'. In my example there are five countries (arg1 to arg5), while another example might have just three (arg1 to arg3) or only one. When I tried the five-placeholder template above, I got pretty bad results, because the model seems to predict a lot of '<arg> <arg> <arg>' and the output sentence looks absurd. If I instead use a template like '<arg1> has participate in an military activity on <arg2> ...', the result is normal and acceptable, but then I can only predict a single country (arg1). Is there a way to deal with this? Thanks.
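
    One workaround worth trying (a sketch of a common pattern, not something the repo is known to ship): keep a single placeholder per role, join multiple gold fillers with "and" when building the target sequence, and split on "and" at decode time. The helper names here are hypothetical:

    def fill_template(template, role_to_args):
        # Hypothetical helper: one placeholder per role; multiple fillers
        # for the same role are joined with "and" in the target string.
        out = template
        for role, args in role_to_args.items():
            out = out.replace(f"<{role}>", " and ".join(args) if args else "<arg>")
        return out

    def split_fillers(predicted_span):
        # Inverse step at decode time: recover the individual arguments.
        return [a.strip() for a in predicted_span.split(" and ") if a.strip()]

    For example, fill_template("<countries> has participated ...", {"countries": ["A", "B", "C"]}) yields one target string regardless of how many countries an example has.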

    opened by PureLoveForyou 1
  • Tuning the model to handle imbalanced data

    Love the paper.

    I've tried it on my own closed-domain dataset and achieved poor recall.

    Role identification: P: 49.30, R: 28.43, F: 36.06
    Role: P: 44.41, R: 25.60, F: 32.48
    Coref Role identification: P: 69.93, R: 40.32, F: 51.15
    Coref Role: P: 48.60, R: 28.02, F: 35.55
    

    I believe the low recall is due to imbalanced labels, but I value recall over precision. Is there some way to tune the model to increase recall at the cost of precision?
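
    One knob worth trying (a hedged sketch of a generic technique, not an option the repo exposes): since unfilled slots in the target are emitted as the bare '<arg>' placeholder, down-weighting those positions in the training loss pushes the model toward producing real fillers, trading precision for recall. The placeholder id and weight value are assumptions:

    import torch
    import torch.nn.functional as F

    def recall_weighted_loss(logits, labels, placeholder_id, placeholder_weight=0.2):
        # Token-level cross-entropy where steps whose gold token is the empty
        # "<arg>" placeholder count less, biasing training toward recall.
        loss = F.cross_entropy(
            logits.view(-1, logits.size(-1)), labels.view(-1), reduction="none"
        )
        weights = torch.where(
            labels.view(-1) == placeholder_id,
            torch.full_like(loss, placeholder_weight),
            torch.ones_like(loss),
        )
        return (loss * weights).mean()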

    opened by jeremytanjianle 1
Owner
Zoey Li