Ecco is a python library for exploring and explaining Natural Language Processing models using interactive visualizations.

Overview




Ecco provides multiple interfaces to aid the explanation and intuition of Transformer-based language models. Read: Interfaces for Explaining Transformer Language Models.

Ecco runs inside Jupyter notebooks. It is built on top of pytorch and transformers.

Ecco is not concerned with training or fine-tuning models, only with exploring and understanding existing pre-trained models. The library is currently an alpha release of a research project. You're welcome to contribute to make it better!

Documentation: ecco.readthedocs.io

Features

  • Support for a wide variety of language models (GPT2, BERT, RoBERTa, T5, T0, and others).
  • Ability to add your own local models (if they're based on Hugging Face pytorch models).
  • Feature attribution (IntegratedGradients, Saliency, InputXGradient, DeepLift, DeepLiftShap, GuidedBackprop, GuidedGradCam, Deconvolution, and LRP via Captum)
  • Capture neuron activations in the FFNN layer in the Transformer block
  • Identify and visualize neuron activation patterns (via Non-negative Matrix Factorization)
  • Examine neuron activations via comparisons of activation spaces using SVCCA, PWCCA, and CKA
  • Visualizations for:
    • How a token's processing evolves through the layers of the model (logit lens)
    • Candidate output tokens and their probabilities (at each layer in the model)
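
A minimal quick-start sketch (the model name, prompt, and generation settings are illustrative; see the documentation for the full API):

    import ecco

    # Load a pre-trained Hugging Face model and wrap it with Ecco.
    # activations=True also captures FFNN neuron activations for later analysis.
    lm = ecco.from_pretrained('distilgpt2', activations=True)

    # Generate text; the returned object holds the data Ecco's visualizations need.
    output = lm.generate("The countries of the European Union are:\n1. Austria\n2. Belgium\n3.",
                         generate=20, do_sample=False)

    # Interactive view of how much each input token contributed to each output token
    # (called saliency() in older Ecco versions).
    output.primary_attributions()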

Examples:

What is the sentiment of this film review?

Use a large language model (T5 in this case) to detect text sentiment. In addition to the sentiment, see the tokens the model broke the text into (which can help debug some edge cases).
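
A hedged sketch of this example (the T5 checkpoint and the SST-2-style task prefix are assumptions, not necessarily the exact setup shown here):

    import ecco

    # Assumed setup: a T5 checkpoint prompted with the standard "sst2 sentence:" prefix.
    lm = ecco.from_pretrained('t5-small')
    review = "sst2 sentence: This movie was a complete waste of my time."

    # Generate the sentiment label; the attribution view also shows how the text was tokenized.
    output = lm.generate(review, generate=2, do_sample=False)
    output.primary_attributions()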

Which words in this review lead the model to classify its sentiment as "negative"?

Feature attribution using Integrated Gradients helps you explore model decisions. In this case, switching "weakness" to "inclination" allows the model to correctly switch the prediction to positive.
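
A hedged sketch of Integrated Gradients attribution (argument names follow the "Primary Attributions" gallery examples and may differ between Ecco versions; the review text is a placeholder):

    import ecco

    lm = ecco.from_pretrained('t5-small')
    review = "sst2 sentence: The movie's only weakness is its running time."

    # Request Integrated Gradients attributions while generating the label.
    output = lm.generate(review, generate=2, do_sample=False, attribution=['ig'])

    # Visualize how much each input token contributed to the predicted label.
    output.primary_attributions(attr_method='ig')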

Explore the world knowledge of GPT models by posing fill-in-the-blank questions.

Asking GPT2 where Heathrow Airport is

Does GPT2 know where Heathrow Airport is? Yes. It does.

What other cities/words did the model consider in addition to London?

The model also considered Birmingham and Manchester

Visualize the candidate output tokens and their probability scores.
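
A hedged sketch of this kind of probing (the prompt, position, and topk values are illustrative):

    import ecco

    lm = ecco.from_pretrained('gpt2')
    output = lm.generate("Heathrow airport is located in the city of", generate=1, do_sample=False)

    # Candidate tokens and their probability scores for the generated position,
    # as seen at each layer of the model (the position index is illustrative).
    output.layer_predictions(position=10, topk=10)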

Which input words lead it to think of London?

Asking GPT2 where Heathrow Airport is

At which layers did the model gather confidence that London is the right answer?

The ranking of the "London" token at each layer; layer 11 ranks it #1

The model chose London by making it the highest-probability token (ranking it #1) after the last layer in the model. How much did each layer contribute to increasing the ranking of London? This is a logit lens visualization that helps explore the activity of different model layers.
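
Continuing the sketch above, a hedged look at the rankings views (the candidate tokens and position are illustrative):

    # Ranking of each generated token across the model's layers (logit lens view).
    output.rankings()

    # Compare how several candidate tokens are ranked at the answer position
    # as we move up through the layers (assumes each candidate is a single token).
    candidate_ids = lm.tokenizer(" London Birmingham Manchester")['input_ids']
    output.rankings_watch(watch=candidate_ids, position=10)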

What are the patterns in BERT neuron activation when it processes a piece of text?

Colored line graphs on the left, a piece of text on the right. The line graphs indicate the activation of BERT neuron groups in response to the text

A group of neurons in BERT tend to fire in response to commas and other punctuation. Other groups of neurons tend to fire in response to pronouns. Use this visualization to factorize neuron activity in individual FFNN layers or in the entire model.
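
A hedged sketch of this analysis (the BERT checkpoint, sample text, and number of factors are illustrative; activations=True is required to capture the FFNN activations):

    import ecco

    lm = ecco.from_pretrained('bert-base-uncased', activations=True)
    text = "The keys, which were needed to open the door, were on the table, next to the plant."
    inputs = lm.tokenizer([text], return_tensors="pt")
    output = lm(inputs)

    # Factorize the captured neuron activations into a small number of groups
    # and visualize which tokens each group fires on.
    nmf = output.run_nmf(n_components=8)
    nmf.explore()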

Read the paper:

Ecco: An Open Source Library for the Explainability of Transformer Language Models. Association for Computational Linguistics (ACL) System Demonstrations, 2021.

Tutorials

How-to Guides

API Reference

The API reference and the architecture page explain Ecco's components and how they work together.

Gallery & Examples

Predicted Tokens: View the model's prediction for the next token (with probability scores). See how the predictions evolved through the model's layers. [Notebook] [Colab]


Rankings across layers: After the model picks an output token, look back at how each layer ranked that token. [Notebook] [Colab]


Layer Predictions: Compare the rankings of multiple tokens as candidates for a certain position in the sequence. [Notebook] [Colab]


Primary Attributions: How much did each input token contribute to producing the output token? [Notebook] [Colab]


Detailed Primary Attributions: See more precise input attribution values using the detailed view. [Notebook] [Colab]


Neuron Activation Analysis: Examine underlying patterns in neuron activations using non-negative matrix factorization. [Notebook] [Colab]

Getting Help

Having trouble?

  • The Discussion board might have some relevant information. If not, you can post your questions there.
  • Report bugs at Ecco's issue tracker

BibTeX for citations:

@inproceedings{alammar-2021-ecco,
    title = "Ecco: An Open Source Library for the Explainability of Transformer Language Models",
    author = "Alammar, J",
    booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations",
    year = "2021",
    publisher = "Association for Computational Linguistics",
}
Comments
  • Support for T5-like Seq2SeqLM

    Hello, I was wondering if there are any plans to support explicit encoder-decoder models like T5. Although T5 was not pre-trained with an auto-regressive LM objective, it is a pretty good candidate for ecco's generate method. I tried running T5 as it was listed in model-config.yaml, but soon ran into issues because the current implementation is very much suited to GPT-like models.

    I made some changes on a fork to get attribution working, but not sure if I did it correctly https://colab.research.google.com/drive/1zahIWgOCySoQXQkAaEAORZ5DID11qpkH?usp=sharing https://github.com/chiragjn/ecco/tree/t5_exp

    I would love to contribute to adding this support, with some help, especially on the overall implementation design.

    opened by chiragjn 8
  • Adds a model config field use_causal_lm and config entries for gpt-neo

    Adding gpt-neo models to model-config.yaml failed because the model needs to be loaded using AutoModelForCausalLM, but init identified such models by looking for gpt2 in the name. A TODO comment in init mentioned using config instead. I refactored config loading slightly to enable this - not sure if that is the direction you intended or not.

    opened by stprior 8
  • Add a `conda` install option for `ecco`

    A conda install option for ecco could be helpful for two reasons:

    1. Easy installation and version management with conda.
    2. If other libraries that depend on ecco are to be published on the conda-forge channel, ecco must also be available on conda-forge.

    :bulb: I have already started work on this. PR: https://github.com/conda-forge/staged-recipes/pull/17388

    Once the PR gets merged, you will be able to install ecco as:

    conda install -c conda-forge ecco
    

    I will send a PR to update your documentation, once the PR gets merged.

    opened by sugatoray 7
  • Add support for PEGASUS model

    I would like to add support for PEGASUS in model-config.yaml.

    The PEGASUS model is an encoder-decoder type and its implementation is completely inherited from BartForConditionalGeneration, so the config is similar to the BART model.

    Notes: This is my first time making a pull request on an open-source project, but I hope this helps!

    opened by thomas-chong 6
  • Add support for Integrated Gradients explainability method

    In this PR, @SSamDav and I add support for the IG algorithm, reusing the same visualization plots used for input saliency. We also fix a saliency visualization bug for encoder-decoder models that was not addressed in the previous PR.

    Notes:

    • The generate method became even slower with the IG method. We added an option to choose which attribution method to calculate, but it can be further improved. Maybe the visualization could be coupled with the generation itself.
    • The IG score has a convergence delta error that could be shown in the plot or, for example, be used to change the IG default parameters when a minimum error is not met.
    opened by JoaoLages 5
  • attention head

    Hi @jalammar, I tested some examples with Ecco, and I wanted to know whether it is possible to select the attention head, to view the activations for each head and for each layer?

    opened by afcarvallo 5
  • Add support for more attribution methods

    Hi, currently the project seems to rely on grad-norm and grad-x-input to obtain the attributions. However, there are other, arguably better (as discussed in recent work) methods to obtain saliency maps. Integrating them into this project would also provide a good way to compare them on the same input examples.

    Some of these methods, off the top of my head, are integrated gradients, gradient Shapley, and LIME. Perhaps support for visualizing the attention map from the model being interpreted could also be added. Methods based on feature ablation are also possible, but they might need more work to integrate.

    There is support for these aforementioned methods on Captum, but it takes effort to get them working for NLP tasks, especially those based on language modeling. Thus, I feel this would be a useful addition here.

    enhancement help wanted 
    opened by RachitBansal 5
  • token prefix in roberta model?

    Trying to use a custom-trained RoBERTa model by loading the config file, but getting an error that the token prefix is not present in the config. Any idea how to fix it? (Screenshot of the error attached.)

    opened by sarthusarth 4
  • output.saliency() displays nothing

    I am trying to visualize saliency maps from a custom GPT model. Since I am concerned only about saliency maps, I just do the following:

    out = OutputSeq(token_ids = input_ids, n_input_tokens = n_input_tokens, tokens = tokens, attribution = attr)
    out.saliency()
    

    I get no errors and nothing is displayed in the Jupyter notebook, but when I open Chrome's JavaScript console, I see the following:

    
    (unknown) Ecco initialize.

    l                 @ storage.googleapis.c…ust=1610606118793:1
    (anonymous)       @ storage.googleapis.c…ust=1610606118793:1
    autoTextColor     @ storage.googleapis.c…ust=1610606118793:1
    (anonymous)       @ storage.googleapis.c…ust=1610606118793:1
    (anonymous)       @ d3js.org/d3.v5.min.j…ust=1610606118793:2
    each              @ d3js.org/d3.v5.min.j…ust=1610606118793:2
    style             @ d3js.org/d3.v5.min.j…ust=1610606118793:2
    enter             @ storage.googleapis.c…ust=1610606118793:1
    (anonymous)       @ storage.googleapis.c…ust=1610606118793:1
    join              @ d3js.org/d3.v5.min.j…ust=1610606118793:2
    setupTokenBoxes   @ storage.googleapis.c…ust=1610606118793:1
    init              @ storage.googleapis.c…ust=1610606118793:1
    eval
    execCb            @ require.js:1693
    check             @ require.js:881
    enable            @ require.js:1173
    init              @ require.js:786
    (anonymous)       @ require.js:1457

    DevTools failed to load SourceMap: Could not load content for http://localhost:8888/static/notebook/js/main.min.js.map: HTTP error: status code 404, net::ERR_HTTP_RESPONSE_CODE_FAILURE
    DevTools failed to load SourceMap: Could not load content for https://storage.googleapis.com/wandb-cdn/production/d4e2434e6/raven.min.js.map: HTTP error: status code 404, net::ERR_HTTP_RESPONSE_CODE_FAILURE
    

    How do I resolve this issue? Btw, I am running this notebook by sshing into my institute's remote machine.

    opened by VirajBagal 4
  • Tell pip to install from setup.py

    Forces pip install -r requirements.txt to install the same package versions specified in setup.py.

    For details, see this comment.

    Confirmed that tests pass locally after merging this and #13 . (Since #13 fixes tests, they won't pass until it is merged.)

    opened by nostalgebraist 4
  • Memory management and tweaks

    Hello Jay, thanks for all your work on GPT interpretation!

    This PR contains changes I made in a personal fork while attempting to use ecco with a 1.5B-size GPT-2 model. There are 3 kinds of changes:

    1. Attempts to plug memory leaks / otherwise reduce memory footprint
    2. Bug fixes
    3. Usability tweaks and new features

    In retrospect, I wish I had made distinct branches for these 3 types of change, as together they now make up a pretty large PR. I can still go back and do that, if (say) you want to merge the bug fixes without the other ones.


    Context: I am using ecco on a 1.5B-size GPT-2 model, using a Tesla T4 GPU (~15GB memory) on Colab.

    I am using version 3.4.0 of transformers, which is the max version consistent with ecco's setup.py and hence the one I got on installation.

    1. Memory management

    Running lm.generate with this large model, I ran out of GPU memory. This surprised me, because memory has not been an issue for me using the same model in tensorflow.

    After looking into it, I found a few places where use of GPU memory could be lowered:

    • past, which we don't use here, was still being computed on each step.
      • More importantly, python garbage collection was not (as far as I could tell) freeing the values of past produced on previous steps, so generating N tokens required enough memory to store the N pasts emitted from steps 1, 2, ..., N.
      • Mitigation: pass use_cache=False to the model's forward pass, so it doesn't return pasts
    • Saliency calculations all used retain_graph=True, so the backward graphs were never cleared.
      • Mitigation: when we do several gradient calculations per step, pass retain_graph=False to the last one
    • hidden_states were stored on the GPU during generation.
      • They don't need to be on the GPU at that time (because they aren't used in generation).
      • And, since we have a low CPU memory footprint otherwise, we have plenty of CPU memory to store them in.
      • Mitigation: call .cpu() on hidden states emitted from each step. If we want to calculate with them later on, move them back to self.device.
    • (Minor) Memory allocated for logit matrices from each step was not freed after sampling
      • Mitigation: output['logits']=None after rolling a sample

    With these changes, I can run lm.generate for many 100s of steps, where previously I could only manage a small number, maybe ~10.
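
    An illustrative sketch of the kinds of changes described above (a plain Hugging Face generation step, not Ecco's actual internals):

    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Small model purely for illustration.
    tokenizer = AutoTokenizer.from_pretrained('distilgpt2')
    model = AutoModelForCausalLM.from_pretrained('distilgpt2')
    input_ids = tokenizer("Heathrow airport is located in", return_tensors="pt").input_ids

    out = model(input_ids, use_cache=False, output_hidden_states=True)  # don't build/return `past`
    hidden_states = [h.detach().cpu() for h in out.hidden_states]       # keep hidden states in CPU RAM
    score = out.logits[0, -1, :].max()
    score.backward(retain_graph=False)  # last gradient call per step: allow the backward graph to be freed
    out['logits'] = None                # free the logit matrix once a token has been sampled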


    2. Bug fixes

    • activations_dict_to_array would fail in the edge case where we only have a single token in the prompt.
      • Issue: np.squeeze would wrongly eliminate the position axis (because its size was 1).
      • Mitigation: use np.concatenate, which doesn't add an unwanted singleton dimension, so we don't have to squeeze (see the toy example after this list)
    • top-p sampling did not work
      • Issue: top_k_top_p_filtering apparently expects a position axis in its input, even if that axis only has length 1
      • Mitigation: replace [-1, :] with [-1:, :] and then squeeze after rolling a sample
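
    A toy numpy illustration of the squeeze edge case mentioned above (shapes are made up):

    import numpy as np

    # One layer's activations for a single-token prompt: (1 position, 768 neurons), 12 layers.
    per_layer = [np.zeros((1, 768)) for _ in range(12)]

    squeezed = np.squeeze(np.array(per_layer))                   # shape (12, 768): position axis lost
    stacked = np.concatenate([a[None, ...] for a in per_layer])  # shape (12, 1, 768): position axis kept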

    3. Usability tweaks and new features

    • Added an option to not track hidden_states. This feels consistent with the way you can choose whether or not to track other things (activations, attn).
      • To help this work properly, switched from position-based indexing into the CausalLMOutputWithPast objects to key lookup, so we're robust to changes in the length/order of these objects.
    • Added the option to only track hidden states for a user-defined subset of layers, through the new kwarg collect_activations_layer_nums.
      • This is valuable with a large model where you may be only interested in a specific layer, and storing activations from all layers has high memory cost.
      • NMF now takes this kwarg and (if not None) uses it to map between row indices in activations and actual layer numbers. For example, if we are tracking layers 7 and 23, we will have an activation matrix with 2 rows. If passed from_layer=7, to_layer=8, we should retrieve the row slice [:1, :], not [7:8, :].

    I realize this PR is unwieldy -- I just wanted to get my changes up in some form, since at least some of them seemed unambiguously helpful (bug fixes).

    Let me know if you want me to break it down into smaller pieces, or if it needs other work, or if it is generally unhelpful for your goals, or whatever.

    Did not run tox tests because I could not get them to run properly on my machine, even after downloading the tox.ini from one of the CI-related branches.

    opened by nostalgebraist 4
  • AttributeError: 'OutputSeq' object has no attribute 'saliency'

    captum 0.5.0 torch 1.13.0+cu117

    Language_Models_and_Ecco_PyData_Khobar.ipynb

    text= "The countries of the European Union are:\n1. Austria\n2. Belgium\n3. Bulgaria\n4."
    output_3 = lm.generate(text, generate=20, do_sample=True)
    output_3.saliency()
    

    AttributeError                            Traceback (most recent call last)
    Cell In [13], line 1
    ----> 1 output_3.saliency()

    AttributeError: 'OutputSeq' object has no attribute 'saliency'

    opened by Claus1 1
  • Rankings_watch displaying wrong sequence

    Hello, I have a problem with the rankings_watch() function. I used a predefined GPT2 model and gave it the input "Today, the weather is". However, in the visualization, only the first token is shown, although the model creates the output correctly (see the attached image).

    Thank you for your help :D

    bug 
    opened by MiriUll 1
  • Running Eccomap for Pre Trained BertForMaskedLM

    Hi, I was trying to run my pretrained model, for which I had used the BertForMaskedLM model class from Hugging Face, but it's giving me this error (see the attached image). Please help me resolve this error. Thanks in advance.

    opened by iamakshay1 1
  • Remove `tokenizer_config` usage from the library

    This config parameter was made to easily package config to send to the JavaScript components. Ecco now handles all tokenization on the Python side to separate the concerns between the Python and JS components. Consequently, it needs to be removed.

    opened by jalammar 0
  • Tokenizer has partial token suffix instead of prefix

    Following your guide for identifying model configuration:

    MODEL_ID = "vinai/bertweet-base"
    
    from transformers import AutoModelForSequenceClassification, AutoTokenizer
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, normalization=True, use_fast=False)
    
    ids= tokenizer('tokenization')
    ids
    

    returns:

    {'input_ids': [0, 969, 6186, 6680, 2], 'token_type_ids': [0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1]}
    

    Then

    tokenizer.convert_ids_to_tokens(ids['input_ids'])
    

    returns:

    ['<s>', 'to@@', 'ken@@', 'ization', '</s>']
    

    Here I noticed that the tokenizer adds a partial-token suffix instead of a partial-token prefix. Having a suffix instead of a prefix is not configurable in the config.

    opened by guustfranssensEY 1
Releases (v0.1.2)

Owner

Jay Alammar
ML Research Engineer. Focused on NLP language models and visualization. @cohere-ai. Ex ML content dev @ Udacity.