Datasets and code for the ACL 2021 paper "Aspect-Category-Opinion-Sentiment Quadruple Extraction with Implicit Aspects and Opinions".

Overview

Aspect-Category-Opinion-Sentiment (ACOS) Quadruple Extraction

This repo contains the datasets and source code of our paper:

Aspect-Category-Opinion-Sentiment Quadruple Extraction with Implicit Aspects and Opinions [ACL 2021].

  • We introduce a new ABSA task, named Aspect-Category-Opinion-Sentiment Quadruple (ACOS) Extraction, to extract fine-grained ABSA Quadruples from product reviews;
  • We construct two new datasets for the task, with ACOS quadruple annotations, and benchmark the task with four baseline systems;
  • Our task and datasets provide good support for discovering implicit opinion targets and implicit opinion expressions in product reviews.

Task

The Aspect-Category-Opinion-Sentiment (ACOS) Quadruple Extraction task aims to extract all aspect-category-opinion-sentiment quadruples in a review sentence, providing full support for aspect-based sentiment analysis with implicit aspects and opinions.
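As a concrete illustration (the sentence and category labels below are invented for this example, not taken from the datasets), a quadruple pairs an aspect term, a category, an opinion term, and a sentiment, with implicit elements conventionally marked NULL:

```python
# One review sentence can yield several quadruples; an implicit aspect or
# opinion (one not literally present in the text) is marked with "NULL".
sentence = "Looks nice, but the screen dies within a week."

quadruples = [
    # (aspect, category, opinion, sentiment)
    ("NULL", "LAPTOP#DESIGN_FEATURES", "nice", "positive"),       # implicit aspect
    ("screen", "DISPLAY#OPERATION_PERFORMANCE", "dies", "negative"),
]
```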

Datasets

Two new datasets, Restaurant-ACOS and Laptop-ACOS, are constructed for the ACOS Quadruple Extraction task:

  • Restaurant-ACOS is an extension of the existing SemEval Restaurant dataset, based on which we add the annotation of implicit aspects, implicit opinions, and the quadruples;
  • Laptop-ACOS is newly collected from the Amazon Laptop domain. It is twice the size of the SemEval Laptop dataset, and is annotated with quadruples covering all explicit/implicit aspects and opinions.

The following table compares our two ACOS Quadruple datasets with existing representative ABSA datasets.

Methods

We benchmark the ACOS Quadruple Extraction task with four baseline systems:

  • Double-Propagation-ACOS
  • JET-ACOS
  • TAS-BERT-ACOS
  • Extract-Classify-ACOS

We provide the source code of Extract-Classify-ACOS; the source code of the other three methods will be released soon.

Overview of our Extract-Classify-ACOS method. The first step performs aspect-opinion co-extraction, and the second step predicts category-sentiment given the aspect-opinion pairs.
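The two steps above can be sketched as follows. This is a minimal outline, not the released implementation; the two callables are hypothetical stand-ins for the BERT-based models of each step:

```python
# Minimal sketch of the Extract-Classify pipeline: step 1 proposes
# aspect-opinion pairs (including implicit "NULL" spans), step 2 labels
# each pair with a category and a sentiment.
def extract_classify_acos(sentence, extract_aspect_opinion, classify_category_sentiment):
    quads = []
    for aspect, opinion in extract_aspect_opinion(sentence):        # step 1
        category, sentiment = classify_category_sentiment(sentence, aspect, opinion)  # step 2
        quads.append((aspect, category, opinion, sentiment))
    return quads
```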

Results

The ACOS quadruple extraction performance of four different systems on the two datasets:

We further investigate the ability of different systems to address the implicit aspect/opinion problem:

Citation

If you use the data and code in your research, please cite our paper as follows:

@inproceedings{cai2021aspect,
  title={Aspect-Category-Opinion-Sentiment Quadruple Extraction with Implicit Aspects and Opinions},
  author={Cai, Hongjie and Xia, Rui and Yu, Jianfei},
  booktitle={Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)},
  pages={340--350},
  year={2021}
}
Comments
  • Encoding problem in step 2

    Encoding problem in step 2

    I recently read the paper. When trying to run the code, step 2 keeps failing with a UTF-8 encoding error. I have tried most of the fixes suggested online, but none of them worked. Is there any way to solve this?

        def _read_tsv(cls, input_file, quotechar=None):
            """Reads a tab separated value file."""
            with open(input_file, "r", encoding="utf-8") as f:
                reader = csv.reader(f, delimiter="\t", quotechar=quotechar)
                lines = []
                for line in reader:
                    if sys.version_info[0] == 2:
                        line = list(str(cell) for cell in line)
                    lines.append(line)
                return lines

    opened by betterwater 4
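One possible workaround for the UTF-8 error above (an assumption about the cause, not the authors' fix): the TSV file may have been saved in a non-UTF-8 encoding, so a tolerant reader can try several candidate encodings before giving up. The function name and the encoding list here are illustrative:

```python
import csv

def read_tsv_tolerant(input_file, quotechar=None, encodings=("utf-8", "gbk", "latin-1")):
    """Try several encodings in order; raise only if every one fails."""
    last_err = None
    for enc in encodings:
        try:
            with open(input_file, "r", encoding=enc, newline="") as f:
                return list(csv.reader(f, delimiter="\t", quotechar=quotechar))
        except UnicodeDecodeError as err:
            last_err = err
    raise last_err
```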
  • pickle.UnpicklingError

    pickle.UnpicklingError

    When I try to run the project (sh run.sh), I get:

    Traceback (most recent call last):
      File "run_step1.py", line 481, in <module>
        main()
      File "run_step1.py", line 353, in main
        model = model_dict[args.model_type].from_pretrained(args.bert_model, num_labels=num_labels)
      File "/home/filip/ACOS/BERT/pytorch_pretrained_BERT/ACOS-main/Extract-Classify-ACOS/modeling.py", line 721, in from_pretrained
        state_dict = torch.load(resolved_archive_file, map_location='cpu')
      File "/opt/conda/lib/python3.8/site-packages/torch/serialization.py", line 713, in load
        return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
      File "/opt/conda/lib/python3.8/site-packages/torch/serialization.py", line 920, in _legacy_load
        magic_number = pickle_module.load(f, **pickle_load_args)
    _pickle.UnpicklingError: invalid load key, 'v'.

    Does anyone have similar issue? I modified the corresponding BERT_BASE_DIR, BASE_DIR, DATA_DIR and output_dir.
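A common cause of `invalid load key, 'v'` (a guess about this setup, not a confirmed diagnosis) is that `pytorch_model.bin` is not a real checkpoint but a Git LFS pointer text file, fetched without `git lfs pull`; pointer files start with the ASCII text `version ...`, hence the `'v'`. A quick check (this helper is illustrative, not part of the repo):

```python
def looks_like_lfs_pointer(path):
    """Git LFS pointer files begin with this well-known version line."""
    with open(path, "rb") as f:
        return f.read(40).startswith(b"version https://git-lfs.github.com/spec")
```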

    opened by fikadata 1
  • Counts of aspects, opinions, and AO pairs in the datasets

    Counts of aspects, opinions, and AO pairs in the datasets

    Hello authors! Thank you for your excellent work! While reproducing the code I ran into a small question about the numbers of aspects, opinions, and AO pairs in Laptop-ACOS (ours). My statistics are {'asp_cont': 4519, 'opi_cont': 4190, 'pair_cont': 3278, 'sentence_num': 4076, 'sentence_with_pair_num': 2285}, which are lower than the numbers reported in the paper. Was the wrong dataset released by mistake? Or are implicit (NULL) aspects/opinions, which do not appear in the original text, also counted as aspects or opinions?

    opened by liuyijiang1994 1
  • Issues for run_step2.py

    Issues for run_step2.py

    When training reaches step 2, all outputs in the Eval stage are 0:

    12/15/2021 19:36:24 - INFO - __main__ -   ***** Running training *****
    Epoch:   0%|                                                                                                                                                        | 0/1 [00:00<?, ?it/s]12/15/2021 19:36:32 - INFO - __main__ -   Total Loss is 0.707695484161377 .
    12/15/2021 19:36:33 - INFO - __main__ -   Total Loss is 0.17088936269283295 .
    12/15/2021 19:36:35 - INFO - __main__ -   Total Loss is 0.11365362256765366 .
    12/15/2021 19:36:36 - INFO - __main__ -   Total Loss is 0.09475410729646683 .
    12/15/2021 19:36:38 - INFO - __main__ -   Total Loss is 0.10162457078695297 .
    12/15/2021 19:36:39 - INFO - __main__ -   Total Loss is 0.0938352420926094 .
    12/15/2021 19:36:40 - INFO - __main__ -   Total Loss is 0.10106469690799713 .
    12/15/2021 19:36:42 - INFO - __main__ -   Total Loss is 0.1062822937965393 .
    12/15/2021 19:36:43 - INFO - __main__ -   Total Loss is 0.10448531806468964 .
    12/15/2021 19:36:45 - INFO - __main__ -   Total Loss is 0.09545283764600754 .
    12/15/2021 19:36:46 - INFO - __main__ -   Total Loss is 0.08869557082653046 .
    12/15/2021 19:36:48 - INFO - __main__ -   Total Loss is 0.09898055344820023 .
    12/15/2021 19:36:49 - INFO - __main__ -   Total Loss is 0.096546471118927 .
    12/15/2021 19:36:51 - INFO - __main__ -   Total Loss is 0.09676332026720047 .
    12/15/2021 19:36:52 - INFO - __main__ -   Total Loss is 0.09095398336648941 .
    12/15/2021 19:36:54 - INFO - __main__ -   Total Loss is 0.095858633518219 .
    Quad num: 0
    tp: 0.0. fp: 0.0. fn: 251.0.
    12/15/2021 19:36:54 - INFO - __main__ -   ***** Eval results *****
    12/15/2021 19:36:54 - INFO - __main__ -     micro-F1 = 0
    12/15/2021 19:36:54 - INFO - __main__ -     precision = 0
    12/15/2021 19:36:54 - INFO - __main__ -     recall = 0.0
    Quad num: 0
    tp: 0.0. fp: 0.0. fn: 895.0.
    tp: 0.0. fp: 0.0. fn: 490.0.
    0 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 142.0.
    1 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 98.0.
    2 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 102.0.
    3 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 715.0.
    4 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    12/15/2021 19:37:00 - INFO - __main__ -   ***** category results *****
    12/15/2021 19:37:00 - INFO - __main__ -     micro-F1 = 0.00%
    12/15/2021 19:37:00 - INFO - __main__ -     precision = 0.00%
    12/15/2021 19:37:00 - INFO - __main__ -     recall = 0.00%
    12/15/2021 19:37:00 - INFO - __main__ -   -----------------------------------
    tp: 0.0. fp: 0.0. fn: 399.0.
    0 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 123.0.
    1 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 85.0.
    2 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 95.0.
    3 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 623.0.
    4 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    12/15/2021 19:37:00 - INFO - __main__ -   ***** sentiment results *****
    12/15/2021 19:37:00 - INFO - __main__ -     micro-F1 = 0.00%
    12/15/2021 19:37:00 - INFO - __main__ -     precision = 0.00%
    12/15/2021 19:37:00 - INFO - __main__ -     recall = 0.00%
    12/15/2021 19:37:00 - INFO - __main__ -   -----------------------------------
    tp: 0.0. fp: 0.0. fn: 497.0.
    0 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 144.0.
    1 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 101.0.
    2 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 103.0.
    3 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 725.0.
    4 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    12/15/2021 19:37:00 - INFO - __main__ -   ***** category sentiment results *****
    12/15/2021 19:37:00 - INFO - __main__ -     micro-F1 = 0.00%
    12/15/2021 19:37:00 - INFO - __main__ -     precision = 0.00%
    12/15/2021 19:37:00 - INFO - __main__ -     recall = 0.00%
    12/15/2021 19:37:00 - INFO - __main__ -   -----------------------------------
    tp: 0.0. fp: 0.0. fn: 580.0.
    0 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 153.0.
    1 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 139.0.
    2 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 98.0.
    3 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 804.0.
    4 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    12/15/2021 19:37:00 - INFO - __main__ -   ***** aspect results *****
    12/15/2021 19:37:00 - INFO - __main__ -     micro-F1 = 0.00%
    12/15/2021 19:37:00 - INFO - __main__ -     precision = 0.00%
    12/15/2021 19:37:00 - INFO - __main__ -     recall = 0.00%
    12/15/2021 19:37:00 - INFO - __main__ -   -----------------------------------
    tp: 0.0. fp: 0.0. fn: 596.0.
    0 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 160.0.
    1 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 144.0.
    2 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 103.0.
    3 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 827.0.
    4 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    12/15/2021 19:37:00 - INFO - __main__ -   ***** category aspect results *****
    12/15/2021 19:37:00 - INFO - __main__ -     micro-F1 = 0.00%
    12/15/2021 19:37:00 - INFO - __main__ -     precision = 0.00%
    12/15/2021 19:37:00 - INFO - __main__ -     recall = 0.00%
    12/15/2021 19:37:00 - INFO - __main__ -   -----------------------------------
    tp: 0.0. fp: 0.0. fn: 589.0.
    0 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 158.0.
    1 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 141.0.
    2 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 101.0.
    3 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 818.0.
    4 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    12/15/2021 19:37:00 - INFO - __main__ -   ***** sentiment aspect results *****
    12/15/2021 19:37:00 - INFO - __main__ -     micro-F1 = 0.00%
    12/15/2021 19:37:00 - INFO - __main__ -     precision = 0.00%
    12/15/2021 19:37:00 - INFO - __main__ -     recall = 0.00%
    12/15/2021 19:37:00 - INFO - __main__ -   -----------------------------------
    tp: 0.0. fp: 0.0. fn: 600.0.
    0 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 161.0.
    1 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 145.0.
    2 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 103.0.
    3 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 832.0.
    4 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    12/15/2021 19:37:00 - INFO - __main__ -   ***** category sentiment aspect results *****
    12/15/2021 19:37:00 - INFO - __main__ -     micro-F1 = 0.00%
    12/15/2021 19:37:00 - INFO - __main__ -     precision = 0.00%
    12/15/2021 19:37:00 - INFO - __main__ -     recall = 0.00%
    12/15/2021 19:37:00 - INFO - __main__ -   -----------------------------------
    tp: 0.0. fp: 0.0. fn: 580.0.
    0 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 150.0.
    1 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 113.0.
    2 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 102.0.
    3 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 811.0.
    4 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    12/15/2021 19:37:00 - INFO - __main__ -   ***** opinion results *****
    12/15/2021 19:37:00 - INFO - __main__ -     micro-F1 = 0.00%
    12/15/2021 19:37:00 - INFO - __main__ -     precision = 0.00%
    12/15/2021 19:37:00 - INFO - __main__ -     recall = 0.00%
    12/15/2021 19:37:00 - INFO - __main__ -   -----------------------------------
    tp: 0.0. fp: 0.0. fn: 595.0.
    0 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 154.0.
    1 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 116.0.
    2 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 103.0.
    3 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 827.0.
    4 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    12/15/2021 19:37:00 - INFO - __main__ -   ***** category opinion results *****
    12/15/2021 19:37:00 - INFO - __main__ -     micro-F1 = 0.00%
    12/15/2021 19:37:00 - INFO - __main__ -     precision = 0.00%
    12/15/2021 19:37:00 - INFO - __main__ -     recall = 0.00%
    12/15/2021 19:37:00 - INFO - __main__ -   -----------------------------------
    tp: 0.0. fp: 0.0. fn: 580.0.
    0 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 150.0.
    1 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 114.0.
    2 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 103.0.
    3 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 812.0.
    4 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    12/15/2021 19:37:00 - INFO - __main__ -   ***** sentiment opinion results *****
    12/15/2021 19:37:00 - INFO - __main__ -     micro-F1 = 0.00%
    12/15/2021 19:37:00 - INFO - __main__ -     precision = 0.00%
    12/15/2021 19:37:00 - INFO - __main__ -     recall = 0.00%
    12/15/2021 19:37:00 - INFO - __main__ -   -----------------------------------
    tp: 0.0. fp: 0.0. fn: 595.0.
    0 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 154.0.
    1 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 117.0.
    2 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 104.0.
    3 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 828.0.
    4 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    12/15/2021 19:37:00 - INFO - __main__ -   ***** category sentiment opinion results *****
    12/15/2021 19:37:00 - INFO - __main__ -     micro-F1 = 0.00%
    12/15/2021 19:37:00 - INFO - __main__ -     precision = 0.00%
    12/15/2021 19:37:00 - INFO - __main__ -     recall = 0.00%
    12/15/2021 19:37:00 - INFO - __main__ -   -----------------------------------
    tp: 0.0. fp: 0.0. fn: 659.0.
    0 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 167.0.
    1 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 147.0.
    2 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 104.0.
    3 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 895.0.
    4 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    12/15/2021 19:37:00 - INFO - __main__ -   ***** aspect opinion results *****
    12/15/2021 19:37:00 - INFO - __main__ -     micro-F1 = 0.00%
    12/15/2021 19:37:00 - INFO - __main__ -     precision = 0.00%
    12/15/2021 19:37:00 - INFO - __main__ -     recall = 0.00%
    12/15/2021 19:37:00 - INFO - __main__ -   -----------------------------------
    tp: 0.0. fp: 0.0. fn: 659.0.
    0 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 167.0.
    1 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 147.0.
    2 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 104.0.
    3 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 895.0.
    4 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    12/15/2021 19:37:00 - INFO - __main__ -   ***** category aspect opinion results *****
    12/15/2021 19:37:00 - INFO - __main__ -     micro-F1 = 0.00%
    12/15/2021 19:37:00 - INFO - __main__ -     precision = 0.00%
    12/15/2021 19:37:00 - INFO - __main__ -     recall = 0.00%
    12/15/2021 19:37:00 - INFO - __main__ -   -----------------------------------
    tp: 0.0. fp: 0.0. fn: 659.0.
    0 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 167.0.
    1 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 147.0.
    2 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 104.0.
    3 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 895.0.
    4 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    12/15/2021 19:37:00 - INFO - __main__ -   ***** sentiment aspect opinion results *****
    12/15/2021 19:37:00 - INFO - __main__ -     micro-F1 = 0.00%
    12/15/2021 19:37:00 - INFO - __main__ -     precision = 0.00%
    12/15/2021 19:37:00 - INFO - __main__ -     recall = 0.00%
    12/15/2021 19:37:00 - INFO - __main__ -   -----------------------------------
    tp: 0.0. fp: 0.0. fn: 659.0.
    0 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 167.0.
    1 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 147.0.
    2 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 104.0.
    3 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    tp: 0.0. fp: 0.0. fn: 895.0.
    4 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
    12/15/2021 19:37:00 - INFO - __main__ -   ***** category sentiment aspect opinion results *****
    12/15/2021 19:37:00 - INFO - __main__ -     micro-F1 = 0.00%
    12/15/2021 19:37:00 - INFO - __main__ -     precision = 0.00%
    12/15/2021 19:37:00 - INFO - __main__ -     recall = 0.00%
    12/15/2021 19:37:00 - INFO - __main__ -   -----------------------------------
    12/15/2021 19:37:00 - INFO - __main__ -   ***** Test results *****
    12/15/2021 19:37:00 - INFO - __main__ -     micro-F1 = 0
    12/15/2021 19:37:00 - INFO - __main__ -     precision = 0
    12/15/2021 19:37:00 - INFO - __main__ -     recall = 0.0
    Epoch: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:28<00:00, 28.83s/it]
    12/15/2021 19:37:00 - INFO - __main__ -   ***** Test results *****
    12/15/2021 19:37:00 - INFO - __main__ -     micro-F1 = 0
    12/15/2021 19:37:00 - INFO - __main__ -     precision = 0
    12/15/2021 19:37:00 - INFO - __main__ -     recall = 0.0
    
    opened by shenjing023 6
  • issues for step1 eval_metrics.py

    issues for step1 eval_metrics.py

    Hi, in step1\eval_metrics.py, lines 115-127 generate pred_tag. At line 115, `for i in range(len(pred_aspect_tag)):` iterates over pred_aspect_tag, which was produced by the preceding batch processing, so its length is the number of batches and its elements are tensors rather than lists like the earlier cur_quad. Doesn't `cur_aspect_tag = ''.join(str(ele) for ele in pred_aspect_tag[i])` then concatenate over a whole batch of tensors? Shouldn't the tensors be converted to lists, so that the aspect ids are joined per sentence and the "32*" and "54*" patterns are recognized within each sentence?
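    A sketch of the fix the commenter proposes (hypothetical, assuming pred_aspect_tag is a list of [batch_size, seq_len] tensors of tag ids): convert each batch to nested Python lists via `.tolist()` and join the tag ids per sentence:

```python
# Hypothetical helper, not code from the repo: works with any object exposing
# .tolist() (e.g. a torch.Tensor of shape [batch_size, seq_len]).
def batches_to_tag_strings(pred_aspect_tag):
    tag_strings = []
    for batch in pred_aspect_tag:
        for sent_tags in batch.tolist():          # tensor -> nested Python lists
            tag_strings.append(''.join(str(t) for t in sent_tags))
    return tag_strings                            # one tag string per sentence
```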

    opened by wangbidong 3
Owner
NUSTM
Text Mining Group, Nanjing University of Science & Technology