
LEAR

The implementation of our EMNLP 2021 paper "Enhanced Language Representation with Label Knowledge for Span Extraction".

**The code is in the "master" branch. I will re-organize the code when I am free.**

Comments
  • Reproducing zh_Ontonote4

    Hello,

    Following the paper, I ran the following command: CUDA_VISIBLE_DEVICES=0 python run_ner.py --task_type sequence_classification --task_save_name SERS --data_dir ./data/ner --data_name zh_onto4 --model_name SERS --model_name_or_path ../bert_models/chinese-roberta-wwm-ext-large --output_dir ./zh_onto4_models/bert_large --do_lower_case False --result_dir ./zh_onto4_models/results --first_label_file ./data/ner/zh_onto4/processed/label_map.json --overwrite_output_dir True --train_set ./data/ner/zh_onto4/processed/train.json --dev_set ./data/ner/zh_onto4/processed/dev.json --test_set ./data/ner/zh_onto4/processed/test.json --is_chinese True --max_seq_length 128 --per_gpu_train_batch_size 32 --gradient_accumulation_steps 1 --num_train_epochs 5 --learning_rate 8e-6 --task_layer_lr 10 --label_str_file ./data/ner/zh_onto4/processed/label_annotation.txt --span_decode_strategy v5

    On Chinese OntoNotes 4 this gives f1: 82.32, which is somewhat below the 82.95 reported in the paper. Is this value reasonable, or is one of my run arguments set incorrectly? (The two learning-rate arguments in this command are sketched after these comments.)

    Looking forward to your reply! 叶德铭

    opened by YeDeming 11
  • Reproducing the event detection task

    Hi~

    1. Have you run the event detection code with multiple GPUs? Running it on 4 cards, I got the error below (a general sketch of this kind of view() failure is given after these comments):
    Training:   0%|                                                                                                                                                               | 0/1159 [00:22<?, ?it/s]
    Traceback (most recent call last):
      File "run_trigger_extraction.py", line 405, in <module>
        main()
      File "run_trigger_extraction.py", line 378, in main
        train(args, model, processor)
      File "run_trigger_extraction.py", line 243, in train
        pred_sub_heads, pred_sub_tails = model(data, add_label_info=add_label_info)
      File "/home/yc21/software/anaconda3/envs/lear/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
        return forward_call(*input, **kwargs)
      File "/home/yc21/software/anaconda3/envs/lear/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 168, in forward
        outputs = self.parallel_apply(replicas, inputs, kwargs)
      File "/home/yc21/software/anaconda3/envs/lear/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 178, in parallel_apply
        return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
      File "/home/yc21/software/anaconda3/envs/lear/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply
        output.reraise()
      File "/home/yc21/software/anaconda3/envs/lear/lib/python3.8/site-packages/torch/_utils.py", line 434, in reraise
        raise exception
    RuntimeError: Caught RuntimeError in replica 0 on device 0.
    Original Traceback (most recent call last):
      File "/home/yc21/software/anaconda3/envs/lear/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
        output = module(*input, **kwargs)
      File "/home/yc21/software/anaconda3/envs/lear/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
        return forward_call(*input, **kwargs)
      File "/home/yc21/project/LEAR/models/model_event.py", line 635, in forward
        fused_results = self.label_fusing_layer(
      File "/home/yc21/software/anaconda3/envs/lear/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
        return forward_call(*input, **kwargs)
      File "/home/yc21/project/LEAR/utils/model_utils.py", line 320, in forward
        return self.get_fused_feature_with_attn(token_embs, label_embs, input_mask, label_input_mask, return_scores=return_scores)
      File "/home/yc21/project/LEAR/utils/model_utils.py", line 504, in get_fused_feature_with_attn
        scores = torch.matmul(token_feature_fc, label_feature_t).view(
    RuntimeError: shape '[4, 48, 33, -1]' is invalid for input of size 160512
    
    2. Running on a single GPU with batchsize=2 and gradient_accumulation_step=16, the code runs, but the training loss converges to around 6 and the dev f1 stays at 0. I don't know whether this is just a problem on my end or whether others have hit it too. Has anyone obtained the expected results?

    Thanks, everyone~

    opened by yc1999 6
  • Could you share the arguments you used for NER?

    Could you share the arguments you passed when running NER? python run_ner.py --task_type sequence_classification --task_save_name FLAT_NER --data_dir ./data/data/ner --data_name zh_msra --model_name bert_ner --model_name_or_path bert-base-cased --output_dir ./model --do_lower_case False --result_dir ./model/result --first_label_file ./data/data/ner/zh_msra/processed/label_map.json --overwrite_output_dir TRUE --train_set ./data/data/ner/zh_msra/processed/train.json --dev_set ./data/data/ner/zh_msra/processed/dev.json --test_set ./data/data/ner/zh_msra/processed/test.json

    opened by zlkzzz 5
  • Visualization of the Proposed Model?

    Hi~ In your paper, you show a figure illustrating the ineffective attention mechanism of previous models:

    image

    I am wondering whether you could show a similar figure demonstrating the effectiveness of LEAR's attention mechanism? Thank you~

    opened by yc1999 1
  • questions about loss

    Hi, LEAR is excellent work. I have run into a problem with the loss when running the code: it oscillates and does not converge. Do you have any ideas about what might cause this? Thank you.

    Regards, Chuck

    opened by chuckhope 0
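A note on the command in the first comment: it passes both --learning_rate 8e-6 and --task_layer_lr 10. Below is a minimal sketch, assuming task_layer_lr acts as a multiplier on the base learning rate for the non-encoder (task-specific) layers, of how such a split is commonly wired up with optimizer parameter groups. The dummy model and attribute names are placeholders, not LEAR's actual code.

```python
# Minimal sketch (NOT LEAR's optimizer code): two learning rates via param groups.
# Assumption: --task_layer_lr is a multiplier on --learning_rate for the task layers.
import torch
from torch import nn

class DummySpanModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.bert = nn.Linear(768, 768)             # stand-in for the pretrained encoder
        self.start_classifier = nn.Linear(768, 18)  # stand-in for a task-specific layer

model = DummySpanModel()
base_lr, task_layer_lr = 8e-6, 10  # values from the command above

encoder_params = [p for n, p in model.named_parameters() if n.startswith("bert.")]
task_params    = [p for n, p in model.named_parameters() if not n.startswith("bert.")]

optimizer = torch.optim.AdamW([
    {"params": encoder_params, "lr": base_lr},                  # 8e-6 for the encoder
    {"params": task_params,    "lr": base_lr * task_layer_lr},  # 8e-5 for task layers
])
```

If task_layer_lr were instead an absolute learning rate, only the second group's "lr" entry would change; the grouping itself stays the same.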
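On the multi-GPU failure in the second comment: the traceback ends in a view() whose target shape no longer matches the tensor it receives. One common way this happens under nn.DataParallel is that each replica only sees a per-device slice of the batch, so any reshape computed from the global batch size no longer fits. The snippet below is a self-contained illustration of that failure mode with made-up sizes; it is not LEAR's fusion code and may not be the exact cause here.

```python
# Illustration only (not get_fused_feature_with_attn from LEAR): a view() that
# bakes in the global batch size breaks when nn.DataParallel scatters a smaller
# per-device batch to each replica. All sizes below are made up.
import torch

global_batch, num_gpus = 4, 4
seq_len, num_labels, hidden = 48, 33, 768

per_device_batch = global_batch // num_gpus                 # each replica gets 1 example
token_feature = torch.randn(per_device_batch, seq_len, hidden)
label_feature = torch.randn(num_labels, hidden)

# Token-label attention scores on one replica: shape [1, 48, 33]
scores = torch.matmul(token_feature, label_feature.transpose(0, 1))

try:
    # Reshaping with the global batch size instead of the per-device size fails:
    scores.view(global_batch, seq_len, num_labels, -1)
except RuntimeError as err:
    print(err)  # shape '[4, 48, 33, -1]' is invalid for input of size 1584
```

The usual remedies are to derive the batch dimension inside forward() from the tensor itself (e.g. token_feature.size(0)) or to train with DistributedDataParallel, which runs one full process per GPU.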
Owner
杨攀