L2X - Code for replicating the experiments in the paper Learning to Explain: An Information-Theoretic Perspective on Model Interpretation.

Overview

L2X

Code for replicating the experiments in the paper Learning to Explain: An Information-Theoretic Perspective on Model Interpretation (ICML 2018), by Jianbo Chen, Le Song, Martin J. Wainwright, and Michael I. Jordan.

Dependencies

The code for L2X runs with Python and requires TensorFlow 1.2.1 or higher and Keras 2.0 or higher. Please pip install the following packages:

  • numpy
  • tensorflow
  • keras
  • pandas
  • nltk

Alternatively, run the following in a shell to install the required packages:

git clone https://github.com/Jianbo-Lab/L2X
cd L2X
sudo pip install -r requirements.txt
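
For reference, a requirements.txt consistent with the dependencies listed above might look like the following (the exact contents and version pins of the file in the repository may differ):

numpy
tensorflow>=1.2.1
keras>=2.0
pandas
nltk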

See the README.md in the respective folders for details.

Citation

If you use this code for your research, please cite our paper:

@article{chen2018learning,
  title = {Learning to Explain: An Information-Theoretic Perspective on Model Interpretation},
  author = {Chen, Jianbo and Song, Le and Wainwright, Martin J. and Jordan, Michael I.},
  journal = {arXiv preprint arXiv:1802.07814},
  year = {2018}
}

Comments
  • variational lower bound

    It seems that this question is not directly related to the code, but may I ask why the lower bound of the conditional expectation of log Pm(Y | x) is the conditional expectation of log Qs(Y | x)? I think applying Jensen's inequality yields a lower bound that is a function of Pm, and I can't find any relationship between Pm and Qs. (A short sketch of one standard argument is given after this list.)

    opened by yalunar 2
  • L2X for hierarchical LSTM always returns the first sentence as the selection

    Hello and congratulations on your great work. I tried to implement the L2X explanations with a hierarchical LSTM on a fake-news dataset, following the imdb_sent example. However, the explainer always selects the first sentence as the explanation. Do you have any idea why this is happening? I would also like to ask (sorry if it is a naive question): since the method is model-agnostic, why do you use a different neural network architecture for each IMDB example?

    opened by ikonstas-ds 1
  • Add note about l2x_synthetic package

    Hi! I extracted the synthetic dataset generation code and bundled it as an easy-to-install package called l2x_synthetic.

    If you like, we could add a small link to the package in the README file; it may be useful to some people. Cheers! ☀️

    opened by dunnkers 0
  • Post-hoc accuracy for IMDB-word

    When running the IMDB-word experiments, I tend to get a significantly lower post-hoc accuracy than the value listed in Table 4 of the paper (0.908). Could you tell me whether I might have to change some hyperparameters, etc.?

    Thanks!

    opened by mniepert 0
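
Regarding the variational lower bound question above, here is a minimal sketch of one standard argument, assuming Qs is a variational approximation to the model's conditional distribution Pm given the selected features. The subscript S, marking the selected feature subset, is introduced here for clarity and follows neither the comment nor necessarily the paper's exact notation.

% For any fixed selection x_S, the gap between the two conditional expectations is a KL divergence:
\mathbb{E}_{Y \sim P_m(\cdot \mid x_S)}\big[\log P_m(Y \mid x_S)\big]
  - \mathbb{E}_{Y \sim P_m(\cdot \mid x_S)}\big[\log Q_s(Y \mid x_S)\big]
  = \mathrm{KL}\big(P_m(\cdot \mid x_S) \,\|\, Q_s(\cdot \mid x_S)\big) \;\ge\; 0.
% Taking expectations over the selected features then gives the stated bound:
\mathbb{E}\big[\log P_m(Y \mid X_S)\big] \;\ge\; \mathbb{E}\big[\log Q_s(Y \mid X_S)\big].

So the bound follows from the non-negativity of the KL divergence between Pm and Qs (with equality when Qs matches Pm), rather than from applying Jensen's inequality to Pm alone; the KL term is exactly the relationship tying Pm to Qs.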