Implementation of our AAAI 2021 paper (Entity Structure Within and Throughout: Modeling Mention Dependencies for Document-Level Relation Extraction).

SSAN

Introduction

This is the PyTorch implementation of the SSAN model (see our AAAI 2021 paper: Entity Structure Within and Throughout: Modeling Mention Dependencies for Document-Level Relation Extraction).
SSAN (Structured Self-Attention Network) is a novel extension of the Transformer that effectively incorporates structural dependencies between input elements. In the scenario of document-level relation extraction, we consider the structure of entities. Specifically, we propose a transformation module that produces attentive biases based on the structure prior, so as to adaptively regularize the attention flow within and throughout the encoding stage. We achieve SOTA results on several document-level relation extraction tasks.
This implementation is adapted from huggingface transformers; the key revision is how we extend the vanilla self-attention of Transformers. You can find the SSAN model details in ./model/modeling_bert.py#L267-L280. You can also find our paddlepaddle implementation here.
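
To make the key idea concrete, here is a minimal, simplified sketch of how a structure-conditioned bias can be added to the attention logits. It is not the code in ./model/modeling_bert.py: the function name and the per-pair structure ids / bias table below are illustrative stand-ins for the biaffine and decomposed transformation modules described in the paper.

    import torch

    def structured_attention(q, k, struct_ids, bias_table):
        """Vanilla scaled dot-product attention plus a structural bias.

        q, k:       (batch, heads, seq_len, head_dim) query / key tensors
        struct_ids: (batch, seq_len, seq_len) integer id of the entity-structure
                    dependency between tokens i and j (0 = no dependency)
        bias_table: (num_structures, heads) learnable scalar bias per structure
                    and head, standing in for the transformation module
        """
        d = q.size(-1)
        # vanilla scaled dot-product logits: (batch, heads, seq, seq)
        logits = torch.matmul(q, k.transpose(-1, -2)) / d ** 0.5
        # look up a per-pair bias from the structure prior and add it
        struct_bias = bias_table[struct_ids]               # (batch, seq, seq, heads)
        logits = logits + struct_bias.permute(0, 3, 1, 2)  # (batch, heads, seq, seq)
        return torch.softmax(logits, dim=-1)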

Tagging Strategy

Requirements

  • Python 3.6, transformers==2.7.0
  • This implementation is tested on a single 32GB V100 GPU with CUDA 10.2 and driver version 440.33.01.

Prepare Model and Dataset

  • Download pretrained models into ./pretrained_lm. For example, if you want to reproduce the results based on RoBERTa Base, you can download and keep the model files as:
    pretrained_lm
    └─── roberta_base
         ├── pytorch_model.bin
         ├── vocab.json
         ├── config.json
         └── merges.txt

Note that these files should correspond to huggingface transformers version 2.7.0. Otherwise, the code will automatically download the model from S3 into your --cache_dir.
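
As a quick sanity check (a minimal sketch, assuming the roberta_base layout above), you can verify that the local files load with transformers==2.7.0 before training:

    from transformers import RobertaConfig, RobertaTokenizer

    # Both calls read from the local directory; nothing is downloaded if the
    # files are present and compatible with transformers==2.7.0.
    config = RobertaConfig.from_pretrained("./pretrained_lm/roberta_base")
    tokenizer = RobertaTokenizer.from_pretrained("./pretrained_lm/roberta_base")
    print(config.num_hidden_layers, tokenizer.vocab_size)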

  • Download DocRED dataset into ./data, including train_annotated.json, dev.json and test.json.

Train

  • Choose your model and configure the script:
    Choose --model_type from [roberta, bert] and --entity_structure from [none, decomp, biaffine]. For SciBERT, set --model_type to bert and add the --do_lower_case flag.
  • Then run the training script:
sh train.sh

Checkpoints will be saved into ./checkpoints, and the best threshold for relation prediction will be searched on the dev set and printed during evaluation.
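
For illustration only (not necessarily the exact procedure in this repo), such a threshold search typically sweeps candidate values over the dev-set probabilities and keeps the one that maximizes micro F1:

    import numpy as np

    def search_best_threshold(probs, labels, num_steps=100):
        """probs, labels: (num_pairs, num_relations) arrays of predicted
        probabilities and binary gold labels on the dev set."""
        best_f1, best_thresh = 0.0, 0.5
        for thresh in np.linspace(0.0, 1.0, num_steps):
            preds = (probs >= thresh).astype(int)
            tp = float((preds * labels).sum())
            precision = tp / max(preds.sum(), 1)
            recall = tp / max(labels.sum(), 1)
            f1 = 2 * precision * recall / max(precision + recall, 1e-8)
            if f1 > best_f1:
                best_f1, best_thresh = f1, thresh
        return best_thresh, best_f1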

Predict

Set --checkpoint and --predict_thresh, then run the script:

sh predict.sh

The result will be saved as ${checkpoint}/result.json.
You can compress and upload it to the official competition leaderboard at CodaLab.

zip result.zip result.json
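
Each entry of result.json identifies a document by its title, a head/tail entity pair by their indices in that document's entity list (h_idx, t_idx), and the predicted relation as a Wikidata property id (such as "P569"); the evidence field may be an empty list. A minimal way to inspect the file (assuming the DocRED submission format, i.e. a single JSON array; the path depends on your --checkpoint):

    import json

    # Load the predictions produced by predict.sh and print a few entries.
    with open("checkpoints/result.json") as f:
        predictions = json.load(f)

    for p in predictions[:5]:
        print(p["title"], p["h_idx"], p["t_idx"], p["r"], p["evidence"])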

Citation (arXiv version; the official proceedings version is pending.)

If you use any source code included in this project in your work, please cite the following paper:

@misc{xu2021entity,
      title={Entity Structure Within and Throughout: Modeling Mention Dependencies for Document-Level Relation Extraction}, 
      author={Benfeng Xu and Quan Wang and Yajuan Lyu and Yong Zhu and Zhendong Mao},
      year={2021},
      eprint={2102.10249},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
Comments
  • How should the result.json file be interpreted?

    Hello, after obtaining the result.json file, how should it be interpreted?

    {"title": "Miguel Riofr\u00edo", "h_idx": 0, "t_idx": 1, "r": "P569", "evidence": []}
    I get multiple entries whose title is "Miguel Riofr\u00edo"; which sentence does each of them correspond to, and where is it reflected? How exactly should this file be understood?

    opened by eve1104 4
  • reproduce results

    Hi there,

    Thanks for your great work. I'm trying to reproduce the SSAN results based on roberta_large. My command is as follows:

    python ./run_docred.py --model_type roberta --entity_structure biaffine --model_name_or_path ./pretrained_lm/roberta_large/ --do_train --do_eval --data_dir ./data/DocRED/ --max_seq_length 512 --max_ent_cnt 42 --per_gpu_train_batch_size 4 --learning_rate 3e-5 --num_train_epochs 40 --warmup_ratio 0.1 --output_dir checkpoints --seed 42 --logging_steps 10

    But I got a dev F1 of 0.029, and I was wondering if anything went wrong with my arguments. Could you help me check this? Thanks.

    opened by jiag19 4
  • How are the values of distance_buckets in dataset.py determined?

    We know that distance_buckets is used to compute the relative distance between entities, but how are the values in distance_buckets obtained? Specifically, this part of the code:

    distance_buckets[1] = 1
    distance_buckets[2:] = 2
    distance_buckets[4:] = 3
    distance_buckets[8:] = 4
    distance_buckets[16:] = 5
    distance_buckets[32:] = 6
    distance_buckets[64:] = 7
    distance_buckets[128:] = 8
    distance_buckets[256:] = 9

    opened by LawsonAbs 2
  • Empty Evidence List

    Hi,

    Congratulations on your great work!

    I ran predict.sh to get results on test.json. The result.json file is created, but the evidence list for each identified relation is empty. Could you explain why that happens?

    opened by snehasinghania 2
  • Weights of RobertaForDocRED not initialized from pretrained model warning

    Hello, I'm trying to reproduce the paper results with RoBERTa. For both roberta-base and roberta-large, after cloning the repositories into pretrained_lm, I receive a warning suggesting that none of the weights are being loaded.

    The md5sum for the roberta-base pytorch_model.bin is 73db58b6c51b028e0ee031f12261b51d. The md5sum for the roberta-large pytorch_model.bin is 234a3b27e09d3486d7719c66ba1aaa31.

    I am using package versions:

    torch==1.9.0
    torchcontrib==0.0.2
    transformers==2.7.0
    

    Can you advise on what to do or provide a link to where you are downloading the pretrained models?

    The warning for roberta-large is as follows (all 24 layers present in this warning):

    07/28/2021 19:48:35 - INFO - transformers.modeling_utils -   Weights of RobertaForDocRED not initialized from pretrained model: ['roberta.embeddings.ner_emb.weight', 'roberta.embeddings.ent_emb.weight', 'roberta.encoder.layer.0.attention.self.bili.0', 'roberta.encoder.layer.0.attention.self.bili.1', 'roberta.encoder.layer.0.attention.self.bili.2', 'roberta.encoder.layer.0.attention.self.bili.3', 'roberta.encoder.layer.0.attention.self.bili.4', 'roberta.encoder.layer.0.attention.self.abs_bias.0', 'roberta.encoder.layer.0.attention.self.abs_bias.1', 'roberta.encoder.layer.0.attention.self.abs_bias.2', 'roberta.encoder.layer.0.attention.self.abs_bias.3', 'roberta.encoder.layer.0.attention.self.abs_bias.4', 'roberta.encoder.layer.1.attention.self.bili.0', 'roberta.encoder.layer.1.attention.self.bili.1', 'roberta.encoder.layer.1.attention.self.bili.2', 'roberta.encoder.layer.1.attention.self.bili.3', 'roberta.encoder.layer.1.attention.self.bili.4', 'roberta.encoder.layer.1.attention.self.abs_bias.0', 'roberta.encoder.layer.1.attention.self.abs_bias.1', 'roberta.encoder.layer.1.attention.self.abs_bias.2', 'roberta.encoder.layer.1.attention.self.abs_bias.3', 'roberta.encoder.layer.1.attention.self.abs_bias.4', 'roberta.encoder.layer.2.attention.self.bili.0', 'roberta.encoder.layer.2.attention.self.bili.1', 'roberta.encoder.layer.2.attention.self.bili.2', 'roberta.encoder.layer.2.attention.self.bili.3', 'roberta.encoder.layer.2.attention.self.bili.4', 'roberta.encoder.layer.2.attention.self.abs_bias.0', 'roberta.encoder.layer.2.attention.self.abs_bias.1', 'roberta.encoder.layer.2.attention.self.abs_bias.2', 'roberta.encoder.layer.2.attention.self.abs_bias.3', 'roberta.encoder.layer.2.attention.self.abs_bias.4', 'roberta.encoder.layer.3.attention.self.bili.0', 'roberta.encoder.layer.3.attention.self.bili.1', 'roberta.encoder.layer.3.attention.self.bili.2', 'roberta.encoder.layer.3.attention.self.bili.3', 'roberta.encoder.layer.3.attention.self.bili.4', 'roberta.encoder.layer.3.attention.self.abs_bias.0', 'roberta.encoder.layer.3.attention.self.abs_bias.1', 'roberta.encoder.layer.3.attention.self.abs_bias.2', 'roberta.encoder.layer.3.attention.self.abs_bias.3', 'roberta.encoder.layer.3.attention.self.abs_bias.4', 'roberta.encoder.layer.4.attention.self.bili.0', 'roberta.encoder.layer.4.attention.self.bili.1', 'roberta.encoder.layer.4.attention.self.bili.2', 'roberta.encoder.layer.4.attention.self.bili.3', 'roberta.encoder.layer.4.attention.self.bili.4', 'roberta.encoder.layer.4.attention.self.abs_bias.0', 'roberta.encoder.layer.4.attention.self.abs_bias.1', 'roberta.encoder.layer.4.attention.self.abs_bias.2', 'roberta.encoder.layer.4.attention.self.abs_bias.3', 'roberta.encoder.layer.4.attention.self.abs_bias.4', 'roberta.encoder.layer.5.attention.self.bili.0', 'roberta.encoder.layer.5.attention.self.bili.1', 'roberta.encoder.layer.5.attention.self.bili.2', 'roberta.encoder.layer.5.attention.self.bili.3', 'roberta.encoder.layer.5.attention.self.bili.4', 'roberta.encoder.layer.5.attention.self.abs_bias.0', 'roberta.encoder.layer.5.attention.self.abs_bias.1', 'roberta.encoder.layer.5.attention.self.abs_bias.2', 'roberta.encoder.layer.5.attention.self.abs_bias.3', 'roberta.encoder.layer.5.attention.self.abs_bias.4', 'roberta.encoder.layer.6.attention.self.bili.0', 'roberta.encoder.layer.6.attention.self.bili.1', 'roberta.encoder.layer.6.attention.self.bili.2', 'roberta.encoder.layer.6.attention.self.bili.3', 'roberta.encoder.layer.6.attention.self.bili.4', 
'roberta.encoder.layer.6.attention.self.abs_bias.0', 'roberta.encoder.layer.6.attention.self.abs_bias.1', 'roberta.encoder.layer.6.attention.self.abs_bias.2', 'roberta.encoder.layer.6.attention.self.abs_bias.3', 'roberta.encoder.layer.6.attention.self.abs_bias.4', 'roberta.encoder.layer.7.attention.self.bili.0', 'roberta.encoder.layer.7.attention.self.bili.1', 'roberta.encoder.layer.7.attention.self.bili.2', 'roberta.encoder.layer.7.attention.self.bili.3', 'roberta.encoder.layer.7.attention.self.bili.4', 'roberta.encoder.layer.7.attention.self.abs_bias.0', 'roberta.encoder.layer.7.attention.self.abs_bias.1', 'roberta.encoder.layer.7.attention.self.abs_bias.2', 'roberta.encoder.layer.7.attention.self.abs_bias.3', 'roberta.encoder.layer.7.attention.self.abs_bias.4', 'roberta.encoder.layer.8.attention.self.bili.0', 'roberta.encoder.layer.8.attention.self.bili.1', 'roberta.encoder.layer.8.attention.self.bili.2', 'roberta.encoder.layer.8.attention.self.bili.3', 'roberta.encoder.layer.8.attention.self.bili.4', 'roberta.encoder.layer.8.attention.self.abs_bias.0', 'roberta.encoder.layer.8.attention.self.abs_bias.1', 'roberta.encoder.layer.8.attention.self.abs_bias.2', 'roberta.encoder.layer.8.attention.self.abs_bias.3', 'roberta.encoder.layer.8.attention.self.abs_bias.4', 'roberta.encoder.layer.9.attention.self.bili.0', 'roberta.encoder.layer.9.attention.self.bili.1', 'roberta.encoder.layer.9.attention.self.bili.2', 'roberta.encoder.layer.9.attention.self.bili.3', 'roberta.encoder.layer.9.attention.self.bili.4', 'roberta.encoder.layer.9.attention.self.abs_bias.0', 'roberta.encoder.layer.9.attention.self.abs_bias.1', 'roberta.encoder.layer.9.attention.self.abs_bias.2', 'roberta.encoder.layer.9.attention.self.abs_bias.3', 'roberta.encoder.layer.9.attention.self.abs_bias.4', 'roberta.encoder.layer.10.attention.self.bili.0', 'roberta.encoder.layer.10.attention.self.bili.1', 'roberta.encoder.layer.10.attention.self.bili.2', 'roberta.encoder.layer.10.attention.self.bili.3', 'roberta.encoder.layer.10.attention.self.bili.4', 'roberta.encoder.layer.10.attention.self.abs_bias.0', 'roberta.encoder.layer.10.attention.self.abs_bias.1', 'roberta.encoder.layer.10.attention.self.abs_bias.2', 'roberta.encoder.layer.10.attention.self.abs_bias.3', 'roberta.encoder.layer.10.attention.self.abs_bias.4', 'roberta.encoder.layer.11.attention.self.bili.0', 'roberta.encoder.layer.11.attention.self.bili.1', 'roberta.encoder.layer.11.attention.self.bili.2', 'roberta.encoder.layer.11.attention.self.bili.3', 'roberta.encoder.layer.11.attention.self.bili.4', 'roberta.encoder.layer.11.attention.self.abs_bias.0', 'roberta.encoder.layer.11.attention.self.abs_bias.1', 'roberta.encoder.layer.11.attention.self.abs_bias.2', 'roberta.encoder.layer.11.attention.self.abs_bias.3', 'roberta.encoder.layer.11.attention.self.abs_bias.4', 'roberta.encoder.layer.12.attention.self.bili.0', 'roberta.encoder.layer.12.attention.self.bili.1', 'roberta.encoder.layer.12.attention.self.bili.2', 'roberta.encoder.layer.12.attention.self.bili.3', 'roberta.encoder.layer.12.attention.self.bili.4', 'roberta.encoder.layer.12.attention.self.abs_bias.0', 'roberta.encoder.layer.12.attention.self.abs_bias.1', 'roberta.encoder.layer.12.attention.self.abs_bias.2', 'roberta.encoder.layer.12.attention.self.abs_bias.3', 'roberta.encoder.layer.12.attention.self.abs_bias.4', 'roberta.encoder.layer.13.attention.self.bili.0', 'roberta.encoder.layer.13.attention.self.bili.1', 'roberta.encoder.layer.13.attention.self.bili.2', 'roberta.encoder.layer.13.attention.self.bili.3', 
'roberta.encoder.layer.13.attention.self.bili.4', 'roberta.encoder.layer.13.attention.self.abs_bias.0', 'roberta.encoder.layer.13.attention.self.abs_bias.1', 'roberta.encoder.layer.13.attention.self.abs_bias.2', 'roberta.encoder.layer.13.attention.self.abs_bias.3', 'roberta.encoder.layer.13.attention.self.abs_bias.4', 'roberta.encoder.layer.14.attention.self.bili.0', 'roberta.encoder.layer.14.attention.self.bili.1', 'roberta.encoder.layer.14.attention.self.bili.2', 'roberta.encoder.layer.14.attention.self.bili.3', 'roberta.encoder.layer.14.attention.self.bili.4', 'roberta.encoder.layer.14.attention.self.abs_bias.0', 'roberta.encoder.layer.14.attention.self.abs_bias.1', 'roberta.encoder.layer.14.attention.self.abs_bias.2', 'roberta.encoder.layer.14.attention.self.abs_bias.3', 'roberta.encoder.layer.14.attention.self.abs_bias.4', 'roberta.encoder.layer.15.attention.self.bili.0', 'roberta.encoder.layer.15.attention.self.bili.1', 'roberta.encoder.layer.15.attention.self.bili.2', 'roberta.encoder.layer.15.attention.self.bili.3', 'roberta.encoder.layer.15.attention.self.bili.4', 'roberta.encoder.layer.15.attention.self.abs_bias.0', 'roberta.encoder.layer.15.attention.self.abs_bias.1', 'roberta.encoder.layer.15.attention.self.abs_bias.2', 'roberta.encoder.layer.15.attention.self.abs_bias.3', 'roberta.encoder.layer.15.attention.self.abs_bias.4', 'roberta.encoder.layer.16.attention.self.bili.0', 'roberta.encoder.layer.16.attention.self.bili.1', 'roberta.encoder.layer.16.attention.self.bili.2', 'roberta.encoder.layer.16.attention.self.bili.3', 'roberta.encoder.layer.16.attention.self.bili.4', 'roberta.encoder.layer.16.attention.self.abs_bias.0', 'roberta.encoder.layer.16.attention.self.abs_bias.1', 'roberta.encoder.layer.16.attention.self.abs_bias.2', 'roberta.encoder.layer.16.attention.self.abs_bias.3', 'roberta.encoder.layer.16.attention.self.abs_bias.4', 'roberta.encoder.layer.17.attention.self.bili.0', 'roberta.encoder.layer.17.attention.self.bili.1', 'roberta.encoder.layer.17.attention.self.bili.2', 'roberta.encoder.layer.17.attention.self.bili.3', 'roberta.encoder.layer.17.attention.self.bili.4', 'roberta.encoder.layer.17.attention.self.abs_bias.0', 'roberta.encoder.layer.17.attention.self.abs_bias.1', 'roberta.encoder.layer.17.attention.self.abs_bias.2', 'roberta.encoder.layer.17.attention.self.abs_bias.3', 'roberta.encoder.layer.17.attention.self.abs_bias.4', 'roberta.encoder.layer.18.attention.self.bili.0', 'roberta.encoder.layer.18.attention.self.bili.1', 'roberta.encoder.layer.18.attention.self.bili.2', 'roberta.encoder.layer.18.attention.self.bili.3', 'roberta.encoder.layer.18.attention.self.bili.4', 'roberta.encoder.layer.18.attention.self.abs_bias.0', 'roberta.encoder.layer.18.attention.self.abs_bias.1', 'roberta.encoder.layer.18.attention.self.abs_bias.2', 'roberta.encoder.layer.18.attention.self.abs_bias.3', 'roberta.encoder.layer.18.attention.self.abs_bias.4', 'roberta.encoder.layer.19.attention.self.bili.0', 'roberta.encoder.layer.19.attention.self.bili.1', 'roberta.encoder.layer.19.attention.self.bili.2', 'roberta.encoder.layer.19.attention.self.bili.3', 'roberta.encoder.layer.19.attention.self.bili.4', 'roberta.encoder.layer.19.attention.self.abs_bias.0', 'roberta.encoder.layer.19.attention.self.abs_bias.1', 'roberta.encoder.layer.19.attention.self.abs_bias.2', 'roberta.encoder.layer.19.attention.self.abs_bias.3', 'roberta.encoder.layer.19.attention.self.abs_bias.4', 'roberta.encoder.layer.20.attention.self.bili.0', 'roberta.encoder.layer.20.attention.self.bili.1', 
'roberta.encoder.layer.20.attention.self.bili.2', 'roberta.encoder.layer.20.attention.self.bili.3', 'roberta.encoder.layer.20.attention.self.bili.4', 'roberta.encoder.layer.20.attention.self.abs_bias.0', 'roberta.encoder.layer.20.attention.self.abs_bias.1', 'roberta.encoder.layer.20.attention.self.abs_bias.2', 'roberta.encoder.layer.20.attention.self.abs_bias.3', 'roberta.encoder.layer.20.attention.self.abs_bias.4', 'roberta.encoder.layer.21.attention.self.bili.0', 'roberta.encoder.layer.21.attention.self.bili.1', 'roberta.encoder.layer.21.attention.self.bili.2', 'roberta.encoder.layer.21.attention.self.bili.3', 'roberta.encoder.layer.21.attention.self.bili.4', 'roberta.encoder.layer.21.attention.self.abs_bias.0', 'roberta.encoder.layer.21.attention.self.abs_bias.1', 'roberta.encoder.layer.21.attention.self.abs_bias.2', 'roberta.encoder.layer.21.attention.self.abs_bias.3', 'roberta.encoder.layer.21.attention.self.abs_bias.4', 'roberta.encoder.layer.22.attention.self.bili.0', 'roberta.encoder.layer.22.attention.self.bili.1', 'roberta.encoder.layer.22.attention.self.bili.2', 'roberta.encoder.layer.22.attention.self.bili.3', 'roberta.encoder.layer.22.attention.self.bili.4', 'roberta.encoder.layer.22.attention.self.abs_bias.0', 'roberta.encoder.layer.22.attention.self.abs_bias.1', 'roberta.encoder.layer.22.attention.self.abs_bias.2', 'roberta.encoder.layer.22.attention.self.abs_bias.3', 'roberta.encoder.layer.22.attention.self.abs_bias.4', 'roberta.encoder.layer.23.attention.self.bili.0', 'roberta.encoder.layer.23.attention.self.bili.1', 'roberta.encoder.layer.23.attention.self.bili.2', 'roberta.encoder.layer.23.attention.self.bili.3', 'roberta.encoder.layer.23.attention.self.bili.4', 'roberta.encoder.layer.23.attention.self.abs_bias.0', 'roberta.encoder.layer.23.attention.self.abs_bias.1', 'roberta.encoder.layer.23.attention.self.abs_bias.2', 'roberta.encoder.layer.23.attention.self.abs_bias.3', 'roberta.encoder.layer.23.attention.self.abs_bias.4', 'dim_reduction.weight', 'dim_reduction.bias', 'distance_emb.weight', 'bili.weight', 'bili.bias']
    
    opened by nickmarton 2
  • Code doesn't run - RAM fills up too quickly

    The code did not run on my machine with 16GB RAM and two 12GB GeForce GTX TITAN X GPUs; it would get stuck at 0% while training. So I tried to replicate the same setup in Google Colab, but the process kept getting killed even with 35GB of RAM.

    I added a line to the logger in dataset.py to show RAM usage via psutil: every 500 examples use up about 4GB of RAM.

    Am I missing something?

    06/30/2021 11:39:38 - INFO - dataset -   Writing example 0/3053
    06/30/2021 11:39:38 - INFO - dataset -   RAM used: 23%
    06/30/2021 11:39:39 - INFO - dataset -   *** Example ***
    06/30/2021 11:39:39 - INFO - dataset -   guid: train-42
    06/30/2021 11:39:39 - INFO - dataset -   doc: BENS’ early work focused extensively on initiatives aimed at U.S .- Soviet threat reduction and inefficiencies within support functions of the Department of Defense , e.g. , the maintenance and construction of military housing . The organization was also active in BRAC , championing the process and helping develop transition plans for locations affected by base closure . Over the last decade , the organization expanded their focus , addressing issues such as cybersecurity , domestic counterterrorism , and talent management . They have also broadened their partnerships to include other government agencies such as the Departments of State , Treasury , and Homeland Security ; the Office of the Director of National Intelligence ; and the unified combatant commands . Work provided by BENS members is pro bono . The organization 's current president and CEO is retired Air Force General Norton A. Schwartz , the 19th Chief of Staff of the U.S. Air Force , and their current chairman is Norman C. Chambers , former Chairman of NCI Building Systems . Prominent members include Jeff Bezos , John P. Morgridge , and Charles E. Phillips .
    06/30/2021 11:39:39 - INFO - dataset -   input_ids: 0 163 12743 17 27 419 173 2061 18808 15 5287 3448 23 121 4 104 479 12 8297 1856 4878 8 11 38641 18828 624 323 8047 9 5 641 9 4545 2156 364 4 571 4 2156 5 4861 8 1663 9 831 2004 479 20 1651 21 67 2171 11 6823 2562 2156 2234 154 5 609 8 1903 2179 3868 708 13 3237 2132 30 1542 6803 479 2306 5 94 2202 2156 5 1651 4939 49 1056 2156 6477 743 215 25 13468 2156 1897 29756 2156 8 2959 1052 479 252 33 67 4007 4490 49 8670 7 680 97 168 2244 215 25 5 6748 27101 9 331 2156 4732 2156 8 9777 2010 25606 5 1387 9 5 1678 9 496 6558 25606 8 5 16681 5217 927 16388 479 6011 1286 30 163 12743 453 16 1759 13295 139 479 20 1651 128 29 595 394 8 1324 16 3562 1754 3177 1292 18378 83 4 16070 2156 5 753 212 1231 9 5721 9 5 121 4 104 4 1754 3177 2156 8 49 595 2243 16 11704 230 4 13962 2156 320 3356 9 9537 100 6919 5778 479 10772 19336 453 680 2321 17767 2156 610 221 4 4266 6504 15379 2156 8 3163 381 4 7431 479 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    06/30/2021 11:39:39 - INFO - dataset -   attention_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    06/30/2021 11:39:39 - INFO - dataset -   token_type_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    06/30/2021 11:39:39 - INFO - dataset -   ent_mask for first ent: 0.0 0.25 0.25 0.25 0.25 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
    06/30/2021 11:39:39 - INFO - dataset -   label for ent pair 0-1: True False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False
    06/30/2021 11:39:39 - INFO - dataset -   label_mask for first ent: False True True True True True True True True True True True True True True True True True False False False False False False False False False False False False False False False False False False False False False False False False
    06/30/2021 11:39:39 - INFO - dataset -   ent_ner: 0 1 1 1 1 0 0 0 0 0 0 0 0 2 2 2 0 0 2 0 0 0 0 0 0 0 0 0 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 0 1 0 0 1 1 0 0 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 6 6 6 6 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 0 0 0 0 0 0 6 6 6 6 0 0 0 0 1 1 1 1 0 0 0 0 0 6 6 0 6 6 6 6 6 6 0 0 6 6 6 6 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    06/30/2021 11:39:39 - INFO - dataset -   ent_pos: 0 1 1 1 1 0 0 0 0 0 0 0 0 2 2 2 0 0 3 0 0 0 0 0 0 0 0 0 0 0 4 4 4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 5 5 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 6 6 6 6 0 7 0 0 8 8 0 0 9 9 9 9 9 9 9 0 0 0 0 0 0 0 0 0 0 0 10 10 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 11 11 0 12 12 12 12 0 0 0 0 0 0 0 0 0 13 13 13 13 13 13 0 0 0 0 0 0 14 14 14 14 0 0 0 0 15 15 15 15 0 0 0 0 0 16 16 0 17 17 17 17 17 17 0 0 18 18 18 18 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    06/30/2021 11:39:39 - INFO - dataset -   ent_distance for first ent: 10 6 5 5 4 3 3 3 3 2 2 2 2 2 2 2 2 2 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10
    06/30/2021 11:39:50 - INFO - dataset -   Writing example 500/3053
    06/30/2021 11:39:50 - INFO - dataset -   RAM used: 56%
    Killed
    
    opened by aakashb95 2
  • Model Parameters Size?

    Hi,

    Congratulations on your great work!

    May I know the model size of SSAN? I feel that a substantial number of additional parameters are added to the backbone (BERT/RoBERTa) by the transformation module. Is this true?

    Thank you!

    opened by jzhang38 1
  • What is the computational complexity of the ATLOP model?

    I tried to analyze the computational complexity of the ATLOP model but failed, because I cannot work out the complexity of the BERT fine-tuning procedure. Could you please tell me the result? Thank you.

    opened by ThinkNaive 1
  • Some questions about the paper

    Hello, I have a few questions about this paper that I would like to ask. In "Entity Structure Within and Throughout: Modeling Mention Dependencies for Document-Level Relation Extraction", the main contribution is formulating an entity-centric adjacency matrix that reflects the document structure. Section 2.3 also says "we instantiate the elements of this matrix with specific parameters", after which two transformation modules are introduced. How exactly are the elements of the adjacency matrix instantiated as neural-layer parameters with specific meanings? Is the information of this instantiated matrix contained in A in Equation (6), or in K and Q in Equation (7)? Is any structural information lost in this process?

    Looking forward to your reply!

    opened by chenhaishun 1
  • A question about reproducing the results

    Hello, thank you very much for your excellent work. When reproducing the results, I used the parameter settings from the original code. With BERT base, a batch size of 4, and 40 epochs, I got an F1 of 0.496, which differs greatly from the numbers in the paper. However, with the Paddle implementation, the results using ERNIE are close to those in the paper. So I would like to ask: were the experiments in your paper also run with ERNIE? Or was the BERT-based ERNIE also counted as a BERT model in the experiments? Looking forward to your reply, thank you.

    opened by JJJiangYH 1
  • Would you share the trained models or result on test data?

    Hi, have you shared your trained models anywhere so that we can use them to reproduce the results on the test data? If not, would you share the generated results on the DocRED test data?

    opened by zhn1010 4
Owner
benfeng
PhD Candidate @ USTC, Intern @ Baidu Inc. (NLP, KG)