24 Repositories
Python fairness-ai Libraries
Responsible AI Workshop: a series of tutorials and walkthroughs to illustrate how to put responsible AI into practice. Responsible innovation is top of mind for the tech industry as well as a growing number of organizations of all kinds.
Consumer Fairness in Recommender Systems: Contextualizing Definitions and Mitigations. This is the repository for the paper of the same name.
FairLens: an open-source Python library for automatically discovering bias and measuring fairness in data.
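As a rough sketch of the FairLens workflow (the file path, target column, and sensitive columns below follow the COMPAS-style example in the project's README and are assumptions for your own data):

```python
# Sketch: score how a target column is distributed across sensitive groups.
# "compas.csv", "RawScore", "Sex", and "Ethnicity" are placeholder names.
import pandas as pd
import fairlens as fl

df = pd.read_csv("compas.csv")

fscorer = fl.FairnessScorer(
    df,
    target_attr="RawScore",
    sensitive_attrs=["Sex", "Ethnicity"],
)
fscorer.demographic_report()  # prints the most skewed demographic subgroups
```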
FairEdit: code for the paper "FairEdit: Preserving Fairness in Graph Neural Networks through Greedy Graph Editing".
Code for the paper "Can Active Learning Preemptively Mitigate Fairness Issues?" presented at RAI 2021.
Can Active Learning Preemptively Mitigate Fairness Issues? Code for the paper "Can Active Learning Preemptively Mitigate Fairness Issues?" presented a
Self-paced Deep Regression Forests with Consideration on Ranking Fairness: official code for the paper of the same name.
AI Fairness 360 (AIF360): a comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models. The toolkit is an extensible open-source library containing techniques developed by the research community to help detect and mitigate bias.
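A minimal sketch of the typical AIF360 loop: measure statistical parity on the bundled Adult dataset and mitigate it with the Reweighing pre-processor. The dataset and protected attribute are illustrative choices, and AdultDataset expects the raw UCI files to have been downloaded first.

```python
from aif360.datasets import AdultDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

data = AdultDataset()
metric = BinaryLabelDatasetMetric(data, unprivileged_groups=unprivileged,
                                  privileged_groups=privileged)
print("before:", metric.statistical_parity_difference())

# Reweighing assigns instance weights so that outcomes are independent of the
# protected attribute; the metric on the transformed dataset should move toward 0.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
data_rw = rw.fit_transform(data)
metric_rw = BinaryLabelDatasetMetric(data_rw, unprivileged_groups=unprivileged,
                                     privileged_groups=privileged)
print("after :", metric_rw.statistical_parity_difference())
```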
Responsible Machine Learning with Python: examples of techniques for training interpretable ML models, explaining ML models, and debugging ML models for accuracy, discrimination, and security.
DALEX (moDel Agnostic Language for Exploration and eXplanation): an unverified black-box model is a path to failure, and opaqueness leads to distrust.
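A short sketch of how the dalex package puts this into practice: wrap any fitted model in an Explainer and run a group-fairness check. The synthetic data and the "gender" attribute here are assumptions for illustration.

```python
import numpy as np
import pandas as pd
import dalex as dx
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame({"income": rng.normal(size=300),
                  "gender": rng.integers(0, 2, 300)})
y = (X["income"] > 0).astype(int)   # hypothetical binary outcome

model = RandomForestClassifier().fit(X, y)
exp = dx.Explainer(model, X, y)

# Group fairness check against the privileged group "1"
fobj = exp.model_fairness(protected=X["gender"].astype(str), privileged="1")
fobj.fairness_check()               # reports parity ratios per metric
```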
Rater Scoring Modeling Tool: a Python package to facilitate research on building and evaluating automated scoring models. Automated scoring of written and spoken test responses is a growing field in educational natural language processing.
Fair-SSL: source code for the ICSE 2022 paper "Can We Achieve Fairness Using Semi-Supervised Learning?". Ethical bias in machine learning models has become a pressing concern.
BiQQLSTM_HS: code and data for the paper "An Effective, Robust and Fairness-aware Hate Speech Detection Framework" by Guanyi Mou and Kyumin Lee.
Fairness Metrics: All you need to know. Testing machine learning software for ethical bias has become a pressing concern.
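For concreteness, two of the metrics commonly covered under these names can be computed directly from predictions. This is a generic illustration with made-up arrays, not code from the repository.

```python
import numpy as np

def statistical_parity_difference(y_pred, group):
    """P(y_pred=1 | group=0) - P(y_pred=1 | group=1)."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

def equal_opportunity_difference(y_true, y_pred, group):
    """Difference in true-positive rates between the two groups."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return tpr(0) - tpr(1)

y_true = np.array([1, 1, 0, 1, 0, 1, 1, 0])   # hypothetical labels
y_pred = np.array([1, 0, 0, 1, 0, 1, 0, 0])   # hypothetical predictions
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # hypothetical group membership
print(statistical_parity_difference(y_pred, group))
print(equal_opportunity_difference(y_true, y_pred, group))
```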
FairGo (WWW 2021): code for the paper "Learning Fair Representations for Recommendation: A Graph-based Perspective". Recommender systems are a key application of artificial intelligence.
FairGNN: a PyTorch implementation of "Say No to the Discrimination: Learning Fair Graph Neural Networks with Limited Sensitive Attribute Information" (WSDM 2021).
Code for the ICML 2019 paper "Compositional Invariance Constraints for Graph Embeddings". Note on dependencies: the code has been updated; if you used this repo earlier and ran into issues, that was due to an outdated codebase.
Towards Understanding and Mitigating Social Biases in Language Models (ICML 2021): code and data for evaluating and mitigating bias from generative language models.
Computationally Efficient Optimization of Plackett-Luce Ranking Models for Relevance and Fairness: this repository contains the code used for the experiments in the paper.
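As background on the model itself (not the optimization method from the paper), a Plackett-Luce ranking can be sampled by perturbing item log-scores with Gumbel noise and sorting; the scores below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
log_scores = np.array([2.0, 1.0, 0.5, 0.0])   # hypothetical item relevance scores

def sample_pl_ranking(log_scores, rng):
    # Gumbel trick: argsort of noisy scores is equivalent to sequential
    # sampling without replacement with probabilities proportional to exp(score).
    gumbel = rng.gumbel(size=log_scores.shape)
    return np.argsort(-(log_scores + gumbel))  # best-first permutation

print(sample_pl_ranking(log_scores, rng))
# Averaging exposure over many samples gives each item's expected position.
```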
Image Crop Analysis: code for reproducing the analysis in the paper "Image Cropping on Twitter: Fairness Metrics, their Limitations, and the Importance of Representation, Design, and Agency", as shared on the accompanying blog post.
Tilted Empirical Risk Minimization (ICLR 2021): implementation for the paper "Tilted Empirical Risk Minimization".
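The tilted objective replaces the plain average of per-sample losses with a log-sum-exp reweighting, (1/t) * log(mean(exp(t * loss))). A small numerical sketch with made-up loss values:

```python
import numpy as np

def tilted_risk(losses, t):
    """(1/t) * log( mean( exp(t * losses) ) ), computed stably in log-space.
    Larger t > 0 puts more weight on high-loss samples; small t approaches
    the ordinary mean loss."""
    losses = np.asarray(losses, dtype=float)
    tl = t * losses
    m = tl.max()                               # stabilize the log-sum-exp
    return (m + np.log(np.exp(tl - m).mean())) / t

per_sample_losses = [0.1, 0.2, 3.0, 0.05]      # hypothetical per-sample losses
print(tilted_risk(per_sample_losses, t=0.1))   # close to the mean loss
print(tilted_risk(per_sample_losses, t=5.0))   # pulled toward the worst sample
```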
Fair Mixup: Fairness via Interpolation (ICLR 2021). Training classifiers under fairness constraints such as group fairness regularizes the disparities of predictions between groups.
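A hedged sketch of the interpolation idea: draw one batch per group, mix them with a coefficient lambda, and penalize how fast the mean prediction changes along the mixup path. This illustrates the general idea, not the repository's exact implementation.

```python
import torch
import torch.nn as nn

# Assumption: a binary sensitive attribute with batches x0 / x1 from each group.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))

def fair_mixup_penalty(model, x0, x1, lam=None):
    """Penalize the derivative of the mean prediction along the mixup path
    between a batch from group 0 and a batch from group 1."""
    if lam is None:
        lam = torch.rand(1).item()
    lam_t = torch.tensor(lam, requires_grad=True)
    x_mix = lam_t * x0 + (1.0 - lam_t) * x1
    mean_pred = torch.sigmoid(model(x_mix)).mean()
    # derivative of the mean prediction w.r.t. the mixing coefficient
    grad_lam, = torch.autograd.grad(mean_pred, lam_t, create_graph=True)
    return grad_lam.abs()

x_g0, x_g1 = torch.randn(64, 10), torch.randn(64, 10)  # hypothetical group batches
penalty = fair_mixup_penalty(model, x_g0, x_g1)
penalty.backward()  # in training this term would be added to the classification loss
```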
Themis-ML: a Python library, built on top of pandas and scikit-learn, that implements fairness-aware machine learning algorithms.
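A tentative sketch of the metric-first workflow themis-ml is built around; the mean_difference import reflects my reading of the project's docs and should be verified, and the arrays are made up.

```python
import numpy as np
from themis_ml.metrics import mean_difference  # assumed module path; check the docs

y = np.array([1, 0, 1, 1, 0, 1, 0, 0])   # hypothetical binary outcomes
s = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # hypothetical protected-group membership

# Mean difference: gap in positive-outcome rates between the two groups;
# values near 0 indicate parity.
print(mean_difference(y, s))
```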
FairML: Auditing Black-Box Predictive Models. A Python toolbox for auditing machine learning models for bias.
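A small sketch of the audit flow with audit_model, using synthetic data and a scikit-learn classifier; the column names are placeholders, and the returned tuple shape follows the project's README example.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from fairml import audit_model

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(200, 3)),
                 columns=["income", "age", "gender"])
y = (X["income"] + 0.5 * X["gender"] > 0).astype(int)   # hypothetical outcome

clf = LogisticRegression().fit(X, y)

# audit_model perturbs each input column and measures the model's dependence on it
importances, _ = audit_model(clf.predict, X)
print(importances)
```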
Aequitas: The Bias and Fairness Audit Toolkit. An open-source bias audit toolkit for data scientists, machine learning researchers, and policymakers.
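A brief sketch of the crosstab-then-disparity flow from the Aequitas docs; the toy dataframe and the reference-group choice are assumptions.

```python
import pandas as pd
from aequitas.group import Group
from aequitas.bias import Bias

# Aequitas expects a dataframe with 'score' and 'label_value' columns plus
# one column per audited attribute; this tiny frame is purely illustrative.
df = pd.DataFrame({
    "score":       [1, 0, 1, 1, 0, 1],
    "label_value": [1, 0, 0, 1, 0, 1],
    "race":        ["a", "a", "a", "b", "b", "b"],
})

g = Group()
xtab, _ = g.get_crosstabs(df)            # group-wise confusion-matrix metrics
b = Bias()
bdf = b.get_disparity_predefined_groups(
    xtab, original_df=df, ref_groups_dict={"race": "a"})
print(bdf[["attribute_name", "attribute_value", "fpr_disparity"]])
```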