Assessing syntactic abilities of BERT

Overview

BERT-Syntax

Assessing the syntactic abilities of BERT.

What

Evaluate Google's BERT-Base and BERT-Large models on the syntactic agreement datasets from Linzen, Goldberg and Dupoux (2016), Marvin and Linzen (2018), and Gulordava et al. (2018).

The code is quite messy, as it was hacked together between other things, but I believe it is accurate. This README lists the data files and shows how to run the evaluation. For more details and results, see the arXiv report.

Data Files

Data is taken from the GitHub repos of Linzen, Goldberg and Dupoux (LGD), Marvin and Linzen (ML), and Gulordava et al.

File                       Description
marvin_linzen_dataset.tsv  Stimuli from Marvin and Linzen, dumped from the pickle files in the ML repo.
wiki.vocab                 Vocabulary file from LGD, used for verb inflections.
lgd_dataset.tsv            Processed data from LGD.
generated.tab              Data from Gulordava et al.

lgd_dataset.tsv is created by:

wget http://tallinzen.net/media/rnn_agreement/agr_50_mostcommon_10K.tsv.gz
gunzip agr_50_mostcommon_10K.tsv.gz
python make_linzen_goldberg_testset.py > lgd_dataset.tsv
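
Before running the evaluation you can sanity-check the generated file. A minimal sketch (the exact column layout is whatever make_linzen_goldberg_testset.py emits; the snippet only counts rows and prints the first one):

# Quick sanity check on lgd_dataset.tsv: count rows and peek at the
# first one (the column layout is defined by the repo's
# make_linzen_goldberg_testset.py, not assumed here).
with open("lgd_dataset.tsv", encoding="utf8") as f:
    rows = [line.rstrip("\n").split("\t") for line in f]
print(len(rows), "rows; first row:", rows[0])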

Obtaining the results

pip install pytorch_pretrained_bert

python eval_bert.py > results/lgd_results_large.txt
python eval_bert.py base > results/lgd_results_base.txt
python eval_bert.py marvin > results/marvin_results_large.txt
python eval_bert.py marvin base > results/marvin_results_base.txt
python eval_bert.py gul > results/gulordava_results_large.txt
python eval_bert.py gul base > results/gulordava_results_base.txt
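
Under the hood, each evaluation masks the target verb and compares BERT's scores for the correct and incorrect inflections. A minimal sketch of that comparison, using the pytorch_pretrained_bert package installed above (the sentence and verb pair here are illustrative, not drawn from the datasets; see eval_bert.py for the actual logic):

import torch
from pytorch_pretrained_bert import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

# Mask the verb position and ask BERT which inflection it prefers.
tokens = (["[CLS]"] + tokenizer.tokenize("the keys to the cabinet")
          + ["[MASK]"] + tokenizer.tokenize("on the table") + ["[SEP]"])
mask_index = tokens.index("[MASK]")
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])

with torch.no_grad():
    scores = model(input_ids)[0, mask_index]  # vocab-sized score vector

good, bad = tokenizer.convert_tokens_to_ids(["are", "is"])
print("correct" if scores[good] > scores[bad] else "wrong")

The model counts as correct on a stimulus when it assigns a higher score to the original verb form than to the opposite-number form.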

Generating tables (for the PDF)

python gen_marvin_tbl.py 
python gen_lgd_tbl.py
python gen_gul_tbl.py

Comments
  • FileNotFoundError (lgd_dataset_with_is_are)

    (BERT-Syntax) ant@..:~/.../bert-syntax$ python eval_bert.py | tee results/lgd_results_large.txt
    using model: bert-large-uncased
    Traceback (most recent call last):
      File "eval_bert.py", line 128, in <module>
        eval_lgd()
      File "eval_bert.py", line 81, in eval_lgd
        for i,line in enumerate(open("lgd_dataset_with_is_are.tsv",encoding="utf8")):
    FileNotFoundError: [Errno 2] No such file or directory: 'lgd_dataset_with_is_are.tsv'

    opened by antgr 2
  • Modifications to run with python 3

    Modifications to run with Python 3: file() -> open(), iteritems() -> items()

    - for line in file(vocab_file):
    + for line in open(vocab_file):

    - for word, count in vbz.iteritems():
    + for word, count in vbz.items():

    - for word, count in nn.iteritems():
    + for word, count in nn.items():

    opened by antgr 0
  • Results, please :)

    I have yet to run your code to check the results. However, it'd be great to have the tables available in the README.

    I'd check the paper, but the link to your arxiv report leads to: https://arxiv.org/abs/TBD.

    opened by dav-ell 1