Overview

TSAR

Source code for NAACL 2022 paper: A Two-Stream AMR-enhanced Model for Document-level Event Argument Extraction.

🔥 Introduction

We focus on extracting event arguments from an entire document, which mainly faces two critical problems: (a) long-distance dependencies between the trigger and arguments across sentences, and (b) distracting context towards an event in the document. To address these issues, we propose a Two-Stream Abstract Meaning Representation enhanced extraction model (TSAR). TSAR encodes the document from different perspectives with a two-stream encoding module, so as to utilize both local and global information and lower the impact of distracting context. Besides, TSAR introduces an AMR-guided interaction module to capture both intra-sentential and inter-sentential features, based on locally and globally constructed AMR semantic graphs. An auxiliary boundary loss is introduced to explicitly enhance the boundary information of text spans. You can refer to our paper for more details.
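
To make the boundary objective concrete, here is a minimal, illustrative sketch (our own PyTorch code, not the released implementation; the name BoundaryHead is hypothetical): token-level classifiers predict whether each token starts or ends an argument span, injecting explicit boundary signal into the encoder.

    import torch.nn as nn
    import torch.nn.functional as F

    class BoundaryHead(nn.Module):
        """Hypothetical auxiliary boundary objective (illustrative sketch only)."""
        def __init__(self, hidden_size):
            super().__init__()
            self.start_clf = nn.Linear(hidden_size, 2)  # does this token start a span?
            self.end_clf = nn.Linear(hidden_size, 2)    # does this token end a span?

        def forward(self, token_states, start_labels, end_labels):
            # token_states: (seq_len, hidden); *_labels: (seq_len,) with 1 at boundaries
            loss = F.cross_entropy(self.start_clf(token_states), start_labels)
            loss = loss + F.cross_entropy(self.end_clf(token_states), end_labels)
            return loss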

🚀 How to use our code?

1. Dependencies

  • pytorch==1.9.0
  • transformers==4.8.1
  • datasets==1.8.0
  • dgl-cu111==0.6.1
  • tqdm==4.49.0
  • spacy==3.2.4

For spacy, the following command may be helpful.

>> pip install https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.2.0/en_core_web_sm-3.2.0.tar.gz
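
After installation, a quick sanity check (our own snippet, not part of the repo) confirms the model loads:

    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("TSAR extracts event arguments from documents.")
    print([(token.text, token.pos_) for token in doc])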

2. Data Preprocessing

You can first download the datasets and some scripts here. You only need to unzip data.zip.

Then go to the data/wikievents folder and run the following command, which converts the data into the required format.

>> python transfer.py

Next, we parse the AMR graphs for the data. Please refer to here; we use exactly the same AMR parser. After you have successfully installed the parser, you can simply run the following command in the transition-amr-parser folder.

>> python amrparse.py
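
For reference, the sketch below shows roughly how the parser is driven from Python. The class and method names appear in the tracebacks in the Comments section, but the exact import path, the from_checkpoint signature, and the checkpoint path are assumptions to verify against the parser version you installed.

    # rough sketch; verify the API against your installed transition-amr-parser version
    from transition_amr_parser.stack_transformer_amr_parser import AMRParser

    parser = AMRParser.from_checkpoint("path/to/checkpoint_best.pt")  # hypothetical path
    amr_list = parser.parse_sentences(["The president met the delegation in Paris."])
    print(amr_list[0])  # Penman-style AMR annotation for the first sentence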

Then we convert the AMR parsing results into DGL graphs with the following script.

>> python amr2dglgraph.py
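
As a mental model, the snippet below (illustrative only; amr2dglgraph.py builds the real node and edge features) shows how an AMR edge list becomes a DGL graph:

    import dgl
    import torch

    src = torch.tensor([0, 1, 1])              # head node ids from the AMR graph
    dst = torch.tensor([1, 2, 3])              # tail node ids
    g = dgl.graph((src, dst), num_nodes=4)
    g.edata["rel"] = torch.tensor([0, 1, 2])   # e.g., indices for :ARG0, :ARG1, :mod
    print(g)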

We also directly provide the preprocessed data here (coming soon), so that you can skip the AMR parsing and DGL graph construction steps.

3. Training and Evaluation

The training scripts are provided.

>> bash run_rams_base.sh
>> bash run_rams_large.sh
>> bash run_wikievents_base.sh
>> bash run_wikievents_large.sh

You can change the settings in the corresponding scripts.

You can evaluate the trained models with the following scripts.

>> bash evaluate_rams.sh
>> bash evaluate_wikievent.sh

๐ŸŒ Citation

If you use this work or code, please kindly cite the following paper:

@inproceedings{xu-etal-2022-tsar,
    title = "A Two-Stream AMR-enhanced Model for Document-level Event Argument Extraction",
    author = "Xu, Runxin and Wang, Peiyi and Liu, Tianyu and Zeng, Shuang and Chang, Baobao and Sui, Zhifang",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)",
    year = "2022"
}
Comments
  • size mismatched when loading best saved model

    I was trying to load the best model from the trainer, but I encountered a size mismatch and was not able to load the model.

    model = MyBertmodel.from_pretrained('./wikievents-base/checkpoint-22680')
    

    The error I got was:

    RuntimeError: Error(s) in loading state_dict for MyBertmodel:
    	size mismatch for middle_layer.0.weight: copying a param with shape torch.Size([768, 3840]) from checkpoint, the shape in current model is torch.Size([768, 3072]).
    

    Am I loading it incorrectly, or what is the correct way to load the best model?
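
    One way to debug this (our own sketch, assuming the standard Hugging Face checkpoint layout) is to inspect the saved tensor shapes directly:

    import torch

    # compare the checkpoint's shapes against the freshly built model
    state = torch.load("./wikievents-base/checkpoint-22680/pytorch_model.bin", map_location="cpu")
    print(state["middle_layer.0.weight"].shape)  # torch.Size([768, 3840]) per the error
    # 3840 = 5 x 768 while 3072 = 4 x 768, hinting that the checkpoint was trained
    # with different model settings than the ones used when rebuilding the model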

    opened by jefflink 4
  • Missing 'text.py' in load_dataset in run.py

    I tried running the given examples, but bash run_xx.sh, which uses run.py, fails on the following line:

    datasets = load_dataset("text.py", data_files={'train': data_args.train_file,
                                                    'validation': data_args.validation_file,
                                                    'test': data_args.test_file})
    

    Is there a missing text.py somewhere?
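
    A possible workaround (our assumption, not a confirmed fix) is to pass the built-in "text" loader that ships with datasets instead of a script path:

    from datasets import load_dataset

    # substitute the data_args.* paths used in run.py
    datasets = load_dataset("text", data_files={"train": "train.jsonl",
                                                "validation": "dev.jsonl",
                                                "test": "test.jsonl"})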

    opened by jefflink 3
  • Why construct negative examples?

    Hi there! I want to ask why you construct negative examples that are subsequently masked out in the loss calculation; I don't understand this operation. Thanks a lot!

    opened by LazyFyh 1
  • Excuse me, what does the "text.py" in run.py line 209 refer to?

    When I ran the code, I got the error "ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.8.0/datasets/text.py/text.py", so I wonder whether "text.py" is a local file or something else?

    opened by cathyry 1
  • Cannot find the AMR parser

    Hi, I cannot find the AMRParser module (attached picture 1) and I don't know how to use this code block (attached picture 2).

    I'd appreciate your reply.

    opened by exoskeletonzj 1
  • AMR parser error

    Sorry to bother you, I tried to run amr2dglgraph.py after running amrparse.py, but it failed. What is the problem?

    Using backend: pytorch
     45%|█████████████▏                | 1464/3241 [02:52<04:40,  6.34it/s]=======>  No AMR graph!!!!!
    =======>  No AMR graph!!!!!
    ("=======>  No AMR graph!!!!!" repeats at roughly 45%, 61%, and 84% progress)
    100%|██████████████████████████████| 3241/3241 [06:20<00:00,  8.52it/s]
    {'ARG0': 89026, 'snt1': 5888, 'ARG1': 129892, 'ARG2': 36997, 'op1': 82021, 'mod': 45236, 'location': 20666, 'part-of': 2002, 'time': 24170, 'op2': 35852, 'snt2': 6923, 'mode': 348, 'name': 56088, 'rel': 17801, 'polarity': 4515, 'medium': 837, 'weekday': 2617, 'op3': 8579, 'op4': 3160, 'purpose': 2569, 'source': 1156, 'concession-of': 119, 'degree': 1208, 'month': 4103, 'year': 4535, 'poss': 5452, 'topic': 2187, 'quant': 16130, 'dayperiod': 790, 'location-of': 921, 'day': 3259, 'concession': 525, 'ARG4': 1324, 'snt3': 3105, 'direction': 623, 'manner': 3145, 'instrument': 585, 'quant-of': 131, 'age': 991, 'unit': 5047, 'op5': 1110, 'op6': 477, 'consist-of': 523, 'example': 403, 'ARG3': 3075, 'condition': 618, 'prep-with': 264, 'accompanier': 396, 'op7': 316, 'op8': 211, 'op9': 144, 'domain': 2417, 'duration': 1171, 'prep-against': 64, 'poss-of': 106, 'time-of': 608, 'duration-of': 7, 'ord': 975, 'beneficiary': 343, 'prep-in': 6, 'op10': 91, 'op11': 66, 'op12': 50, 'ARG6': 104, 'timezone': 128, 'value': 1073, 'op13': 46, 'op14': 39, 'op15': 39, 'op16': 36, 'op17': 34, 'op18': 22, 'op19': 19, 'op20': 19, 'op21': 19, 'op22': 18, 'op23': 17, 'op24': 17, 'op25': 17, 'op26': 17, 'op27': 17, 'op28': 17, 'op29': 17, 'op30': 13, 'op31': 13, 'op32': 13, 'op33': 13, 'op34': 13, 'op35': 13, 'op36': 13, 'op37': 13, 'op38': 13, 'op39': 13, 'op40': 13, 'op41': 13, 'op42': 13, 'op43': 13, 'op44': 13, 'op45': 12, 'op46': 12, 'op47': 12, 'op48': 11, 'op49': 11, 'op50': 11, 'op51': 11, 'op52': 11, 'op53': 11, 'op54': 11, 'op55': 11, 'ARG5': 234, 'destination': 339, 'snt4': 732, 'snt5': 801, 'snt6': 87, 'subevent-of': 167, 'prep-as': 111, 'polarity-of': 172, 'part': 52, 'li': 239, 'path': 85, 'degree-of': 8, 'frequency': 328, 'manner-of': 83, 'extent': 10, 'prep-on-behalf-of': 29, 'topic-of': 1, 'season': 27, 'decade': 63, 'root': 36, 'prep-under': 57, 'prep-to': 11, 'calendar': 6, 'polite': 11, 'medium-of': 3, 'op56': 7, 'op57': 7, 'op58': 7, 'op59': 7, 'op60': 7, 'op61': 7, 'op62': 7, 'op63': 7, 'op64': 7, 'op65': 7, 'op66': 7, 'op67': 7, 'op68': 7, 'op69': 7, 'op70': 7, 'op71': 6, 'op72': 6, 'op73': 6, 'op74': 6, 'op75': 6, 'op76': 6, 'op77': 5, 'scale': 12, 'op78': 1, 'op79': 1, 'op80': 1, 'op81': 1, 'op82': 1, 'op83': 1, 'op84': 1, 'op85': 1, 'op86': 1, 'op87': 1, 'op88': 1, 'op89': 1, 'op90': 1, 'op91': 1, 'op92': 1, 'op93': 1, 'op94': 1, 'op95': 1, 'op96': 1, 'op97': 1, 'op98': 1, 'op99': 1, 'op100': 1, 'op101': 1, 'op102': 1, 'op103': 1, 'op104': 1, 'op105': 1, 'op106': 1, 'op107': 1, 'op108': 1, 'op109': 1, 'op110': 1, 'op111': 1, 'op112': 1, 'op113': 1, 'op114': 1, 'op115': 1, 'op116': 1, 'op117': 1, 'op118': 1, 'op119': 1, 'op120': 1, 'op121': 1, 'prep-on': 4}
    Traceback (most recent call last):
      File "amr2dglgraph.py", line 154, in <module>
        amr2dglgraph("../data/wikievents/transfer-train.jsonl", "amr-wikievent-train.pkl", "../data/wikievent/dglgraph-wikievent-train.pkl")
      File "amr2dglgraph.py", line 146, in amr2dglgraph
        torch.save(graphs_list, graph_path)
      File "/home/bb/.local/lib/python3.7/site-packages/torch/serialization.py", line 376, in save
        with _open_file_like(f, 'wb') as opened_file:
      File "/home/bb/.local/lib/python3.7/site-packages/torch/serialization.py", line 230, in _open_file_like
        return _open_file(name_or_buffer, mode)
      File "/home/bb/.local/lib/python3.7/site-packages/torch/serialization.py", line 211, in __init__
        super(_open_file, self).__init__(open(name, mode))
    FileNotFoundError: [Errno 2] No such file or directory: '../data/wikievent/dglgraph-wikievent-train.pkl'
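
    The FileNotFoundError suggests the output directory does not exist; note also that the failing output path says "wikievent" while the input path says "wikievents". A minimal guard (our sketch, reusing the graph_path and graphs_list names from amr2dglgraph.py):

    import os
    import torch

    graph_path = "../data/wikievents/dglgraph-wikievent-train.pkl"  # check the folder spelling
    os.makedirs(os.path.dirname(graph_path), exist_ok=True)         # create the directory first
    # torch.save(graphs_list, graph_path)                           # then saving succeeds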
    
    opened by OPilgrim 0
  • Confusion about RAMS datasets

    Hello, thanks for your code! I see that #Doc is much smaller than #Event in Table 1, indicating that a document can contain multiple events. So is there a clear boundary between these events, i.e., will different events in the same document share arguments? In addition, I found that the doc_key of each instance in the jsonlines files is unique. How do you count the number of documents (3194, 399, and 400)? Any help would be great.

    opened by bellytina 2
  • How to get the AMR result of the RAMS dataset

    Hi @RunxinXu,

    It seems that amrparse.py takes the transferred RAMS dataset as input, similar to the WikiEvents dataset. However, there is no script for transferring the RAMS dataset. I tried using the original RAMS dataset as the input of amrparse.py, but it seems that some sentences are too long, which causes an IndexError. Here are the details of the error:

    Traceback (most recent call last):
      File "/home/kxie/dh39/kyxie/transition-amr-parser-master/parser/amrparse.py", line 42, in <module>
        parse_rams(parser, split)
      File "/home/kxie/dh39/kyxie/transition-amr-parser-master/parser/amrparse.py", line 17, in parse_rams
        amr_list = parser.parse_sentences(all_sentences)
      File "/home/kxie/dh39/kyxie/transition-amr-parser-master/transition_amr_parser/stack_transformer_amr_parser.py", line 422, in parse_sentences
        roberta_batch_size)
      File "/home/kxie/dh39/kyxie/transition-amr-parser-master/transition_amr_parser/stack_transformer_amr_parser.py", line 297, in convert_sentences_to_data
        self.get_bert_features_batched(sentences, roberta_batch_size)
      File "/home/kxie/dh39/kyxie/transition-amr-parser-master/transition_amr_parser/stack_transformer_amr_parser.py", line 270, in get_bert_features_batched
        batch_data = self.embeddings.extract_batch(batch)
      File "/home/kxie/dh39/kyxie/transition-amr-parser-master/transition_amr_parser/stack_transformer/pretrained_embeddings.py", line 310, in extract_batch
        word_features = get_average_embeddings(roberta_features.unsqueeze(0), word2piece)
      File "/home/kxie/dh39/kyxie/transition-amr-parser-master/transition_amr_parser/stack_transformer/pretrained_embeddings.py", line 21, in get_average_embeddings
        column = final_layer[0:1, wordpiece_idx, :]
    IndexError: index 510 is out of bounds for dimension 0 with size 510

    Do you have any suggestions for getting the AMR results of the original RAMS dataset?
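
    One pragmatic, untested workaround is to cap sentence length before calling parse_sentences, since the IndexError points at a 510-position RoBERTa feature window; MAX_WORDS below is a hypothetical, conservative word-level cap:

    MAX_WORDS = 300  # the 510 limit is counted in RoBERTa wordpieces, not words
    all_sentences = ["..."]  # sentences gathered from RAMS, as in amrparse.py
    all_sentences = [" ".join(sent.split()[:MAX_WORDS]) for sent in all_sentences]
    # amr_list = parser.parse_sentences(all_sentences)  # parser as in amrparse.py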

    opened by XxscXie 6
  • Question: Is "gold_event_links" necessary for inference?

    Hi @RunxinXu, I would like to check whether "gold_event_links" is necessary for inference. For example, for test data, the event trigger might be known but not the gold event links. I tried hiding the gold event links in the test data by setting "gold_event_links" to an empty array to see what the model predicts, but it always returns 0s for the labels and an empty array for the span.

    opened by jefflink 4