Tool for visualizing attention in Transformer models (BERT, GPT-2, ALBERT, XLNet, RoBERTa, CTRL, etc.)

BertViz

BertViz is a tool for visualizing attention in the Transformer model, supporting all models from the transformers library (BERT, GPT-2, XLNet, RoBERTa, XLM, CTRL, etc.). It extends the Tensor2Tensor visualization tool by Llion Jones and the transformers library from HuggingFace.

Resources

🕹️ Colab tutorial

✍️ Blog post

📖 Paper

Overview

Head View

The head view visualizes the attention patterns produced by one or more attention heads in a given transformer layer. It is based on the excellent Tensor2Tensor visualization tool by Llion Jones.

🕹 Try out this interactive Colab Notebook with the head view pre-loaded.

[Head view visualization]

The head view supports all models from the Transformers library, including:
BERT: [Notebook] [Colab]
GPT-2: [Notebook] [Colab]
XLNet: [Notebook]
RoBERTa: [Notebook]
XLM: [Notebook]
ALBERT: [Notebook]
DistilBERT: [Notebook] (and others)

Model View

The model view provides a bird's-eye view of attention across all of the model’s layers and heads.

🕹 Try out this interactive Colab Notebook with the model view pre-loaded.

[Model view visualization]

The model view supports all models from the Transformers library, including:
BERT: [Notebook] [Colab]
GPT-2: [Notebook] [Colab]
XLNet: [Notebook]
RoBERTa: [Notebook]
XLM: [Notebook]
ALBERT: [Notebook]
DistilBERT: [Notebook] (and others)

Neuron View

The neuron view visualizes the individual neurons in the query and key vectors and shows how they are used to compute attention.

🕹 Try out this interactive Colab Notebook with the neuron view pre-loaded (requires Chrome).

[Neuron view visualization]

The neuron view supports the following three models:
BERT: [Notebook] [Colab]
GPT-2: [Notebook] [Colab]
RoBERTa: [Notebook]

Installation

pip install bertviz

You must also have Jupyter Notebook installed.

Execution

First start Jupyter Notebook:

jupyter notebook

Click New to start a Jupyter notebook, then follow the instructions below.

Head view / model view

First load a Huggingface model, either a pre-trained model as shown below, or your own fine-tuned model. Be sure to set output_attentions=True.

from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)
inputs = tokenizer.encode("The cat sat on the mat", return_tensors='pt')
outputs = model(inputs)
attention = outputs[-1]  # Output includes attention weights when output_attentions=True
tokens = tokenizer.convert_ids_to_tokens(inputs[0]) 

Then display the returned attention weights using the BertViz head_view or model_view function:

from bertviz import head_view, model_view
head_view(attention, tokens)
# or, for a view across all layers and heads:
model_view(attention, tokens)

For more advanced use cases, e.g., specifying a two-sentence input to the model, please refer to the sample notebooks.

Neuron view

The neuron view is invoked differently than the head view or model view, due to requiring access to the model's query/key vectors, which are not returned through the Huggingface API. It is currently limited to BERT, GPT-2, and RoBERTa.

# Import specialized versions of the model and tokenizer (that return query/key vectors)
from bertviz.transformers_neuron_view import BertModel, BertTokenizer
from bertviz.neuron_view import show

model_version = 'bert-base-uncased'
do_lower_case = True
model = BertModel.from_pretrained(model_version, output_attentions=True)
tokenizer = BertTokenizer.from_pretrained(model_version, do_lower_case=do_lower_case)
model_type = 'bert'
sentence_a = "The cat sat on the mat"
sentence_b = "The cat lay on the rug"
show(model, model_type, tokenizer, sentence_a, sentence_b, layer=2, head=0)

Running a sample notebook

git clone https://github.com/jessevig/bertviz.git
cd bertviz
jupyter notebook

Click on any of the sample notebooks. You can view a notebook's cached output visualizations by selecting File > Trust Notebook (and confirming in the dialog), or you can run the notebook yourself. Note that the sample notebooks do not cover all Huggingface models, but the code should be similar for those not included.

Advanced options

Pre-selecting layer/head(s)

For the head view, you may pre-select a specific layer and collection of heads, e.g.:

head_view(attention, tokens, layer=2, heads=[3,5])

You may also pre-select a specific layer and single head for the neuron view.

Dark/light mode

The model view and neuron view support dark (default) and light modes. You may turn off dark mode in these views using the display_mode parameter:

model_view(attention, tokens, display_mode="light")

Non-huggingface models

The head_view and model_view functions may technically be used to visualize self-attention for any Transformer model, as long as the attention weights are available and follow the format specified in model_view and head_view (which is the format returned by Huggingface models). In some cases, Tensorflow checkpoints may be loaded as Huggingface models as described in the Huggingface docs.

Limitations

Tool

  • The visualizations work best with shorter inputs (e.g., a single sentence) and may run slowly if the input text is very long, especially for the model view.
  • When running on Colab, some of the visualizations will fail (runtime disconnection) when the input text is long.
  • The neuron view only supports BERT, GPT-2, and RoBERTa models. This view requires access to the query and key vectors, which required modifying the model code (see the transformers_neuron_view directory); this has only been done for these three models. Also, only one neuron view may be included per notebook.

Attention as "explanation"

Visualizing attention weights illuminates a particular mechanism within the model architecture but does not necessarily provide a direct explanation for model predictions. See [1], [2], [3].

Authors

Jesse Vig

Citation

When referencing BertViz, please cite this paper.

@inproceedings{vig-2019-multiscale,
    title = "A Multiscale Visualization of Attention in the Transformer Model",
    author = "Vig, Jesse",
    booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
    month = jul,
    year = "2019",
    address = "Florence, Italy",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/P19-3007",
    doi = "10.18653/v1/P19-3007",
    pages = "37--42",
}

License

This project is licensed under the Apache 2.0 License - see the LICENSE file for details.

Acknowledgments

We are grateful to the authors of the following projects, which are incorporated into this repo: the Tensor2Tensor visualization tool by Llion Jones and the HuggingFace transformers library.

Issues
  • How can use bertviz for Bert Questioning Answering??

    Is there any way to see the attention visualization for a BERT Question Answering model? I couldn't see a BertForQuestionAnswering class in bertviz.pytorch_transformers_attn. I have fine-tuned on a QA dataset using hugging-face transformers and want to see the visualization for it. Can you suggest any way of doing it?

    opened by bvy007 25
  • encode_plus is not in GPT2 Tokenizer

    It seems you removed encode_plus, what is the successor? All the notebooks include inputs = tokenizer.encode_plus(text, return_tensors='pt', add_special_tokens=True), which is wrong and raises an error.

    opened by mojivalipour 18
  • BertForSequenceClassification.from_pretrained

    Hi, thank you for this great work. Can I use this code to plot my model? I am using BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=2).

    model_type = 'bert'
    model_version = 'bert-base-uncased'
    do_lower_case = True
    model = model  # (this is my model)
    tokenizer = BertTokenizer.from_pretrained(model_version, do_lower_case=do_lower_case)
    sentence_a = sentences[0]
    sentence_b = sentences[1]
    call_html()
    show(model, model_type, tokenizer, sentence_a, sentence_b)

    I changed only the model (to my own) and the sentences, and I got this error. Please help, or share any blog that explains how to plot my model: AttributeError: 'BertTokenizer' object has no attribute 'cls_token'

    Thank you in advance

    opened by alshahrani2030 16
  • layer and attention are empty.

    I'm using colab but it doesn't work. Help.

    %%javascript
    require.config({
      paths: {
        d3: '//cdnjs.cloudflare.com/ajax/libs/d3/3.4.8/d3.min',
        jquery: '//ajax.googleapis.com/ajax/libs/jquery/2.0.0/jquery.min',
      }
    });

    def show_head_view(model, tokenizer, sentence_a, sentence_b=None):
        inputs = tokenizer.encode_plus(sentence_a, sentence_b, return_tensors='pt', add_special_tokens=True)
        input_ids = inputs['input_ids']
        if sentence_b:
            token_type_ids = inputs['token_type_ids']
            attention = model(input_ids, token_type_ids=token_type_ids)[-1]
            sentence_b_start = token_type_ids[0].tolist().index(1)
        else:
            attention = model(input_ids)[-1]
            sentence_b_start = None
        input_id_list = input_ids[0].tolist()  # Batch index 0
        tokens = tokenizer.convert_ids_to_tokens(input_id_list)
        head_view(attention, tokens, sentence_b_start)

    model_version = 'bert-base-uncased'
    do_lower_case = True
    model = BertModel.from_pretrained(model_version, output_attentions=True)
    tokenizer = BertTokenizer.from_pretrained(model_version, do_lower_case=do_lower_case)
    sentence_a = "the cat sat on the mat"
    sentence_b = "the cat lay on the rug"
    show_head_view(model, tokenizer, sentence_a, sentence_b)

    opened by gogokre 13
  • Classification words importance

    Is there any way to use bertviz to visualise the importance of the different words respect to a given prediction of a classification task (BertClassifier)? Similar to this: https://docs.fast.ai/text.interpret.html#interpret

    Thank you

    enhancement 
    opened by lspataro 12
  • Neuron_view Asafaya pretrained model

    Hello,

    We appreciate your assistance with this helpful visualization for Bert. This issue occurs when I use the Asafaya pretrained model for the Arabic language, but not when I use the bert-base-multilingual-cased model.


    Any suggestions!

    best,

    opened by hinnaweali 9
  • Cannot visualize enough input length on T5

    Hi,

    Thank you for this fascinating work.

    I tried to visualize T5 attentions on a high-Ram Colab Notebook with TPU. It runs perfectly when the input is short. However, when the input length is more than a few sentences, Colab notebook seems to crash. It's required in my research project that at most several paragraphs be visualized. Do you know if there is a way to make this work?

    Thank you! Yifu (Charles)

    opened by chen-yifu 8
  • No visualization on changing sentences

    If I change the sentence, then the visualization is not coming. Why so?

    opened by Varchita-Beena 7
  • Issues in visualizing a fine tuned model

    A BertModel fine-tuned for a sequence classification task does not give expected results on visualisation. Ideally, the pretrained model should be loaded into BertForSequenceClassification, but that model does not return attention scores for visualisation. When loaded into BertModel (layers 0 to 11), I assume the 11th layer (right before the classification layer in BertForSequenceClassification) is the right layer to check the attention distribution. But every word is equally attentive to every other word. I am wondering what the possible reasons are and how I can fix it. Thanks.

    opened by chikubee 7
  • GET nothing

    Hi, when I ran my model (GPT-2), I got nothing. Do I have to use Colab? Thanks

    opened by liygzting 6
  • support for huggingface wav2vec XLSR ?

    Hey, does bertviz support visualization for wav2vec XLSR ASR models from huggingface, where one end is a spectrogram and the other is the corresponding transcription, with attention visualizations linking the transcribed text to the audio?

    opened by StephennFernandes 0
  • Support for Electra models?

    Would it be possible to extend bertviz to apply to electra models?

    opened by ltsc256 0
  • Understanding BART notebook

    Hey

    I am trying to use Bertviz to debug my BART model (specifically your notebook https://github.com/jessevig/bertviz/blob/master/notebooks/head_view_bart.ipynb ). I wonder if you have any advice on how to visualize the model's focus. Specifically, I am interested in how the produced output words are impacted by specific input words.

    Cross-attention seems to mostly focus on <s> and '.', so I am not sure if I understand things correctly.

    Thank you Eugene

    opened by ebagdasa 0
  • Failure to display head_view for large token lengths

    When the number of tokens increases, the time taken to load the js visuals for attention increases exponentially, eventually causing the javascript to fail to load at all. I tested from 10 tokens up to the maximum of 512 tokens.

    opened by sourav-fai 1
  • Support sequence classification (+ gradient visualization?)

    Hi,

    First of all thanks for this work, it's interesting, easy to use and doc is clear.

    I'm looking for ways to get insights on how an mBart model is behaving in a sequence classification task (see https://huggingface.co/transformers/model_doc/mbart.html#transformers.MBartForSequenceClassification).

    I plan to:

    1. first visualize attention, how could we modify this module to make it possible? (shouldn't be that difficult?)
    2. visualize gradients: i.e. given a text, a target and an expected target, compute the loss and visualize how each input words impacts the loss. Do you have any experience with such an approach? Do you think bertviz could be useful to visualize gradients instead of attention?

    Thanks!


    Edit: The problem with MBartForSequenceClassification is that the input sequence is passed in the encoder AND in the decoder. I would need the target class as the decoder target in order to evaluate attention between the target class and the input. Currently, the attention between decoder and encoder is somewhat similar to self-attention given they are the same sequences.

    opened by pltrdy 0
  • Support for BigBird

    Hi, when I try to run this for the BigBird model, which is trained on genomic data, I get the following error: ValueError: Attention has 1024 positions, while the number of tokens is 976

    opened by iamakshay1 2
  • Save attention visualizations as local html file

    I'm running the attention visualizations on a server without a GUI. Is there an easy way to run, e.g., head_view_bert.py and save the interactive visualizations to a local .html file which can then be viewed on another machine?

    enhancement 
    opened by Nikoschenk 3
  • How do I export vector graphics?

    After using bertviz to visualize attention in a notebook, how can I export vector graphics that meet the requirements of a paper?

    enhancement 
    opened by kenjewu 4
  • Horizontal head view feature

    Hi, thanks for the great visualization tool!

    I'm just wondering whether we could have a feature that renders the head view horizontally. The reason is that it's more suitable to show the sequence of tokens in the horizontal direction for languages like Chinese, Japanese, or Korean.


    In the above example, typical sentences in Chinese run about 60-70 characters, but showing just 10 of them already uses a lot of space in the current head view.

    Thanks again for the great tool!

    enhancement 
    opened by leemengtaiwan 1
  • Saving visualizations

    Thanks for the great tool!

    It would be nice to be able to save the visualizations for specific layers/heads as images. I have not been able to find a spot in the model/head/neuron_view.js file to add a saving function.

    Do you maybe have a suggestion on how to save the visualizations as images?

    Thanks!

    enhancement 
    opened by e-tornike 5
Releases (v1.2.0)
  • v1.2.0(Jul 31, 2021)

    • Support displaying a subset of layers/heads, via the include_layers parameter, to improve performance
    • Fix bug in model view where the thumbnail didn't render properly if taller than the detail view
    • Fix bug in neuron view where not all layers were displayed in some cases
  • v1.1.0(May 8, 2021)
