42 Repositories
Python counterfactual-explanations Libraries
📦 PyTorch-based visualization package for generating layer-wise explanations for CNNs.
Explainable CNNs 📦 A flexible visualization package for generating layer-wise explanations for CNNs.
Official code implementation of the paper "XAI for Transformers: Better Explanations through Conservative Propagation".
Official code implementation of the paper "XAI for Transformers: Better Explanations through Conservative Propagation", covering the SST-2 and IMDB experiments.
This repository accompanies the paper "On the Robustness of Counterfactual Explanations to Adverse Perturbations".
Robust Counterfactual Explanations This repository accompanies the paper "On the Robustness of Counterfactual Explanations to Adverse Perturbations".
Which Style Makes Me Attractive? Interpretable Control Discovery and Counterfactual Explanation on StyleGAN
Interpretable Control Exploration and Counterfactual Explanation (ICE) on StyleGAN Code for "Which Style Makes Me Attractive? Interpretable Control Discovery and Counterfactual Explanation on StyleGAN".
COPA-SSE contains crowdsourced explanations for the Balanced COPA dataset.
COPA-SSE Repository for "COPA-SSE: Semi-Structured Explanations for Commonsense Reasoning". COPA-SSE contains crowdsourced explanations for the Balanced COPA dataset.
A collection of resources, problems, explanations, and concepts that are/were important during my data science journey.
Data Science Gurukul A list of resources, interview questions, and concepts I use for my data science work, starting with the basics of programming with Python.
Create a table with row explanations and column headers using matplotlib.
Create a table with row explanations and column headers using matplotlib; the intended use is a small table containing a custom heatmap.
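The repository's own helpers are not shown in this listing; below is a minimal sketch of the same idea using matplotlib's built-in ax.table (all labels, data, and styling are illustrative assumptions):

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative data: each row gets a short "explanation" label, each column a header.
data = np.array([[0.20, 0.80, 0.50],
                 [0.90, 0.10, 0.40]])
row_labels = ["precision (higher is better)", "latency (lower is better)"]
col_labels = ["model A", "model B", "model C"]

fig, ax = plt.subplots()
ax.axis("off")  # hide the axes so only the table is visible

# Color each cell from a colormap to get a simple custom-heatmap effect.
cell_colours = plt.cm.viridis(data)

table = ax.table(cellText=[[f"{v:.2f}" for v in row] for row in data],
                 rowLabels=row_labels,
                 colLabels=col_labels,
                 cellColours=cell_colours,
                 loc="center")
table.scale(1, 1.5)  # taller rows for readability
plt.show()
```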
These are the materials for the paper "Few-Shot Out-of-Domain Transfer Learning of Natural Language Explanations".
Few-shot-NLEs These are the materials for the paper "Few-Shot Out-of-Domain Transfer Learning of Natural Language Explanations".
HIVE: Evaluating the Human Interpretability of Visual Explanations
HIVE: Evaluating the Human Interpretability of Visual Explanations This repo provides the code for HIVE, a human evaluation framework for visual explanations.
A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.
AI Fairness 360 (AIF360) The AI Fairness 360 toolkit is an extensible open-source library containing techniques developed by the research community to help detect and mitigate bias in machine learning models throughout the AI application lifecycle.
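A minimal sketch of a typical AIF360 workflow; the toy data, column names, and group definitions below are illustrative assumptions, not taken from the toolkit's documentation:

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Illustrative toy data: 'sex' is the protected attribute, 'hired' the binary label.
df = pd.DataFrame({
    "sex":   [1, 1, 0, 0, 1, 0, 1, 0],
    "score": [0.9, 0.4, 0.8, 0.3, 0.7, 0.6, 0.2, 0.5],
    "hired": [1, 0, 1, 0, 1, 0, 0, 1],
})
dataset = BinaryLabelDataset(df=df, label_names=["hired"],
                             protected_attribute_names=["sex"])

privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Fairness metric on the raw data.
metric = BinaryLabelDatasetMetric(dataset,
                                  privileged_groups=privileged,
                                  unprivileged_groups=unprivileged)
print("disparate impact:", metric.disparate_impact())

# Pre-processing mitigation: reweigh examples to balance outcomes across groups.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_transf = rw.fit_transform(dataset)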
The repo contains the code to train and evaluate a system that extracts relations and explanations from dialogue.
The repo contains the code to train and evaluate D-REX, a system that extracts relations and explanations from dialogue.
Code accompanying the NeurIPS 2021 paper "Generating High-Quality Explanations for Navigation in Partially-Revealed Environments"
Generating High-Quality Explanations for Navigation in Partially-Revealed Environments This work presents an approach to explainable navigation in partially revealed environments (NeurIPS 2021).
A Python package for causal inference using synthetic controls.
Synthetic Control Methods A Python package for causal inference using synthetic controls. The package implements a class of approaches for estimating the causal effect of an intervention.
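The package's own API is not quoted in this listing; the core synthetic-control computation can be sketched generically with numpy and scipy (all data and names below are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative setup: outcomes for 1 treated unit and 5 control units over
# 10 pre-treatment periods and 5 post-treatment periods.
rng = np.random.default_rng(0)
Y_controls_pre = rng.normal(size=(10, 5))                       # (periods, controls)
true_w = np.array([0.5, 0.3, 0.2, 0.0, 0.0])
Y_treated_pre = Y_controls_pre @ true_w + rng.normal(scale=0.01, size=10)
Y_controls_post = rng.normal(size=(5, 5))
Y_treated_post = Y_controls_post @ true_w + 1.0                 # +1.0 = effect of the intervention

def pre_treatment_loss(w):
    # Squared distance between the treated unit and the weighted controls before treatment.
    return np.sum((Y_treated_pre - Y_controls_pre @ w) ** 2)

n_controls = Y_controls_pre.shape[1]
res = minimize(pre_treatment_loss,
               x0=np.full(n_controls, 1.0 / n_controls),
               bounds=[(0.0, 1.0)] * n_controls,                               # weights are non-negative
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])   # and sum to 1
weights = res.x

# The synthetic control approximates the treated unit's counterfactual outcome;
# the post-treatment gap estimates the causal effect.
effect = Y_treated_post - Y_controls_post @ weights
print("estimated per-period effect:", effect.round(2))
```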
Code and real data for the paper "Counterfactual Temporal Point Processes", available on arXiv.
counterfactual-tpp This repository contains code and real data for the paper "Counterfactual Temporal Point Processes".
A Robust Unsupervised Ensemble of Feature-Based Explanations using Restricted Boltzmann Machines
A Robust Unsupervised Ensemble of Feature-Based Explanations using Restricted Boltzmann Machines Understanding the results of deep neural networks is a challenging task; this work combines feature-based explanations into a robust unsupervised ensemble using Restricted Boltzmann Machines.
LIMEcraft: Handcrafted superpixel selection and inspection for Visual eXplanations
LIMEcraft LIMEcraft: Handcrafted superpixel selection and inspection for Visual eXplanations The LIMEcraft algorithm is an explanatory method based on LIME, extended with handcrafted superpixel selection and inspection.
moDel Agnostic Language for Exploration and eXplanation
moDel Agnostic Language for Exploration and eXplanation (DALEX) Overview: an unverified black-box model is the path to failure; opaqueness leads to distrust.
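A minimal sketch of how the dalex Python package is typically used; the model, data, and feature names below are illustrative:

```python
import numpy as np
import pandas as pd
import dalex as dx
from sklearn.ensemble import RandomForestClassifier

# Illustrative tabular data: three numeric features and a binary target.
rng = np.random.RandomState(0)
X = pd.DataFrame(rng.rand(500, 3), columns=["f0", "f1", "f2"])
y = (X["f0"] + X["f1"] > 1.0).astype(int)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Wrap the black-box model in an Explainer, then query it for explanations.
explainer = dx.Explainer(model, X, y, label="random forest")
print(explainer.model_performance().result)          # dataset-level: accuracy, AUC, ...
print(explainer.model_parts().result)                # dataset-level: permutation importance
print(explainer.predict_parts(X.iloc[[0]]).result)   # instance-level: break-down attributions
```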
ACV is a Python library that provides explanations for any machine learning model or data.
ACV is a Python library that provides explanations for any machine learning model or data. It gives local rule-based explanations for any model or data, and several kinds of Shapley values for tree-based models.
The official PyTorch code implementation of "Human Trajectory Prediction via Counterfactual Analysis" in ICCV 2021.
Human Trajectory Prediction via Counterfactual Analysis (CausalHTP) The official PyTorch code implementation of "Human Trajectory Prediction via Counterfactual Analysis" (ICCV 2021).
This repository contains the code used to quantitatively evaluate counterfactual examples in the associated paper.
On Quantitative Evaluations of Counterfactuals Install the required packages with conda by creating an environment from the file provided in the repository (conda env create -f …).
Explaining neural decisions contrastively to alternative decisions.
Contrastive Explanations for Model Interpretability This is the repository for the paper "Contrastive Explanations for Model Interpretability", about explaining neural model decisions contrastively against alternative decisions.
Code for the MM 2021 paper "Multi-Level Counterfactual Contrast for Visual Commonsense Reasoning".
Code for the MM 2021 paper "Multi-Level Counterfactual Contrast for Visual Commonsense Reasoning". Setup starts with getting the dataset, following the instructions in the repository.
VACA: Designing Variational Graph Autoencoders for Interventional and Counterfactual Queries
VACA Code repository for the paper "VACA: Designing Variational Graph Autoencoders for Interventional and Counterfactual Queries" (arXiv).
SimplEx - Explaining Latent Representations with a Corpus of Examples
SimplEx - Explaining Latent Representations with a Corpus of Examples Code author: Jonathan Crabbé. This repository contains the implementation of SimplEx.
Amazon Multilingual Counterfactual Dataset (AMCD)
This is a collection of simple PyTorch implementations of neural networks and related algorithms. These implementations are documented with explanations.
labml.ai Deep Learning Paper Implementations This is a collection of simple PyTorch implementations of neural networks and related algorithms.
[CVPR 2020] Counterfactual Samples Synthesizing for Robust VQA
CVPR 2020 Counterfactual Samples Synthesizing for Robust VQA This repo contains code for our paper "Counterfactual Samples Synthesizing for Robust Visual Question Answering".
[ICCV 2021] Counterfactual Attention Learning for Fine-Grained Visual Categorization and Re-identification
Counterfactual Attention Learning Created by Yongming Rao*, Guangyi Chen*, Jiwen Lu, and Jie Zhou. This repository contains the PyTorch implementation for the ICCV 2021 paper "Counterfactual Attention Learning for Fine-Grained Visual Categorization and Re-identification".
All Tools In One is a script developed with Python 3. It gathers a total of 14 Discord tools (including a RAT, a raid tool, a token grabber, a crash video maker, etc.) behind a pleasant and intuitive interface, with help and explanations for each of them.
[Discord] - All Tools In One [Discord] - All Tools In One is a script collection for Windows systems written in Python.
Collection of NLP model explanations and accompanying analysis tools.
Thermostat is a large collection of NLP model explanations and accompanying analysis tools. It combines explainability methods from the captum library with Hugging Face datasets and models.
CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms
CARLA - Counterfactual And Recourse Library CARLA is a Python library to benchmark counterfactual explanation and recourse models. It comes out of the box with commonly used datasets and machine learning models.
PyTorch implementation of Interpretable Explanations of Black Boxes by Meaningful Perturbation
PyTorch implementation of "Interpretable Explanations of Black Boxes by Meaningful Perturbation". The paper: https://arxiv.org/abs/1704.03296
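A minimal sketch of the meaningful-perturbation idea in plain PyTorch (this is not the repository's code; the model, image, blur, and hyperparameters are illustrative, and in practice a pretrained network and a real preprocessed image would be used):

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None).eval()       # in practice: load pretrained weights
for p in model.parameters():
    p.requires_grad_(False)                        # only the mask is optimized
img = torch.rand(1, 3, 224, 224)                   # in practice: a normalized input image
blurred = F.avg_pool2d(img, kernel_size=11, stride=1, padding=5)  # stand-in for a Gaussian blur

with torch.no_grad():
    target = model(img).argmax(dim=1).item()       # class whose evidence we want to explain

# Learn a coarse mask: 1 keeps the original pixel, 0 replaces it with the blurred pixel.
mask_logits = torch.zeros(1, 1, 28, 28, requires_grad=True)
optimizer = torch.optim.Adam([mask_logits], lr=0.1)

for _ in range(150):
    mask = torch.sigmoid(mask_logits)
    up = F.interpolate(mask, size=img.shape[-2:], mode="bilinear", align_corners=False)
    perturbed = up * img + (1 - up) * blurred
    prob = F.softmax(model(perturbed), dim=1)[0, target]
    # Minimize the class probability while deleting as little of the image as possible.
    loss = prob + 0.05 * (1 - up).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Regions the optimizer chose to delete are the ones the class score depends on.
saliency = 1 - torch.sigmoid(mask_logits).detach()
```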
CausaLM: Causal Model Explanation Through Counterfactual Language Models
CausaLM: Causal Model Explanation Through Counterfactual Language Models. Authors: Amir Feder, Nadav Oved, Uri Shalit, Roi Reichart.
Code for the paper "Group-CAM: Group Score-Weighted Visual Explanations for Deep Convolutional Networks".
Group-CAM By Qinglong Zhang, Lu Rao, and Yubin Yang [State Key Laboratory for Novel Software Technology at Nanjing University]. This repo is the official implementation of Group-CAM.
audioLIME: Listenable Explanations Using Source Separation
audioLIME This repository contains the Python package audioLIME, a tool for creating listenable explanations for machine learning models in music information retrieval (MIR).
[CVPR 2021] Counterfactual VQA: A Cause-Effect Look at Language Bias
Counterfactual VQA (CF-VQA) This repository is the PyTorch implementation of our paper "Counterfactual VQA: A Cause-Effect Look at Language Bias" in CVPR 2021.
Python Library for Model Interpretation/Explanations
Skater Skater is a unified framework for model interpretation across all kinds of models, intended to help build interpretable machine learning systems.
Contrastive Explanation (Foil Trees), developed at TNO/Utrecht University
Contrastive Explanation (Foil Trees) Contrastive and counterfactual explanations for machine learning (ML). Marcel Robeer (2018-2020), TNO/Utrecht University.
Code for "High-Precision Model-Agnostic Explanations" paper
Anchor This repository has code for the paper High-Precision Model-Agnostic Explanations. An anchor explanation is a rule that sufficiently “anchors”
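A minimal sketch of how the anchor package is typically used on tabular data; the data and classifier are illustrative, and the exact constructor arguments may differ between package versions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from anchor import anchor_tabular

# Illustrative tabular data: 4 numeric features, binary label.
rng = np.random.RandomState(0)
X = rng.rand(500, 4)
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

explainer = anchor_tabular.AnchorTabularExplainer(
    class_names=["negative", "positive"],
    feature_names=["f0", "f1", "f2", "f3"],
    train_data=X,
)

exp = explainer.explain_instance(X[0], clf.predict, threshold=0.95)
print("Anchor:   ", " AND ".join(exp.names()))
print("Precision:", exp.precision())
print("Coverage: ", exp.coverage())
```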
Algorithms for monitoring and explaining machine learning models
Alibi is an open-source Python library aimed at machine learning model inspection and interpretation. The focus of the library is to provide high-quality implementations of black-box, white-box, local and global explanation methods for classification and regression models.
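Among the explanation methods Alibi provides, anchors on tabular data are a representative example; a minimal sketch with an illustrative model and made-up feature names:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from alibi.explainers import AnchorTabular

# Illustrative data and model.
rng = np.random.RandomState(0)
X = rng.rand(500, 3)
y = (X[:, 0] > 0.5).astype(int)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

explainer = AnchorTabular(clf.predict, feature_names=["f0", "f1", "f2"])
explainer.fit(X, disc_perc=(25, 50, 75))         # discretize numeric features at these percentiles

explanation = explainer.explain(X[0], threshold=0.95)
print("Anchor:   ", explanation.anchor)          # e.g. ['f0 > 0.50']
print("Precision:", explanation.precision)
print("Coverage: ", explanation.coverage)
```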
[CVPR 2021] Released code for Counterfactual Zero-Shot and Open-Set Visual Recognition
Counterfactual Zero-Shot and Open-Set Visual Recognition This project provides implementations for our CVPR 2021 paper "Counterfactual Zero-Shot and Open-Set Visual Recognition".
[ICLR'21] Counterfactual Generative Networks
This repository contains the code for the ICLR 2021 paper "Counterfactual Generative Networks" by Axel Sauer and Andreas Geiger. If you want to take the CGN for a spin and generate counterfactual images, you can try out the Colab notebook linked in the repository.