This package implements the algorithms introduced in Smucler, Sapienza and Rotnitzky (2021) and Smucler and Rotnitzky (2022) to compute optimal adjustment sets in causal graphical models.

Overview

optimaladj: A library for computing optimal adjustment sets in causal graphical models

This package implements the algorithms introduced in Smucler, Sapienza and Rotnitzky (2021) and Smucler and Rotnitzky (2022) to compute optimal adjustment sets in causal graphical models. The package provides a class, called CausalGraph, that inherits from networkx's DiGraph class and has methods to compute the optimal, optimal minimal, optimal minimum cardinality, and optimal minimum cost adjustment sets for individualized treatment rules (point exposure dynamic treatment regimes) in non-parametric causal graphical models with latent variables.

Suppose we are given a causal graph G specifying:

  • a treatment variable A,
  • an outcome variable Y,
  • a set of observable (that is, non-latent) variables N,
  • a set of observable variables L that will be used to allocate treatment, and possibly
  • positive costs associated with each observable variable.

Suppose moreover that there exists at least one adjustment set with respect to A and Y in G that consists of observable variables.

An optimal adjustment set is an observable adjustment set that yields the non-parametric estimator of the interventional mean with the smallest asymptotic variance among those that are based on observable adjustment sets.

An optimal minimal adjustment set is an observable adjustment set that yields the non-parametric estimator of the interventional mean with the smallest asymptotic variance among those that are based on observable minimal adjustment sets. An observable minimal adjustment set is a valid adjustment set such that all its variables are observable and the removal of any variable from it destroys validity.

An optimal minimum cardinality adjustment set is an observable adjustment set that has minimum possible cardinality and yields the non-parametric estimator of the interventional mean with the smallest asymptotic variance among those that are based on observable minimum cardinality adjustment sets.

An optimal minimum cost adjustment set is defined similarly, being optimal in the class of observable adjustment sets that have minimum possible cost.

Under these assumptions, Smucler, Sapienza and Rotnitzky (2021) and Smucler and Rotnitzky (2022) show that optimal minimal, optimal minimum cardinality and optimal minimum cost adjustment sets always exist and can be computed in polynomial time. They also provide a sufficient criterion for the existence of an optimal adjustment set and a polynomial-time algorithm to compute it when it exists.

Check out our notebook with examples.
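For orientation, the sketch below shows how these sets might be computed on a toy graph with a single observable confounder Z. The import path and the method names (optimal_adj_set, optimal_minimal_adj_set, optimal_minimum_adj_set) are assumptions made for illustration only; see the examples notebook for the exact API.

    # Minimal sketch; import path and method names are assumed, not guaranteed.
    from optimaladj.CausalGraph import CausalGraph

    treatment = 'A'
    outcome = 'Y'
    L = []                # observable variables used to allocate treatment
    N = ['A', 'Z', 'Y']   # observable variables

    # Z confounds the effect of the treatment A on the outcome Y.
    G = CausalGraph()
    G.add_edges_from([('Z', 'A'), ('Z', 'Y'), ('A', 'Y')])

    # Each call is expected to return an observable adjustment set of the
    # corresponding kind; for this graph each of them should be just {'Z'}.
    print(G.optimal_adj_set(treatment, outcome, L, N))
    print(G.optimal_minimal_adj_set(treatment, outcome, L, N))
    print(G.optimal_minimum_adj_set(treatment, outcome, L, N))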

Installation

You can install the stable version of the package from PyPI by running

pip install optimaladj

You can install the development version of the package from GitHub by running

pip install git+https://github.com/facusapienza21/optimaladj.git#egg=optimaladj

Comments
  • Bug regarding name handling

    Sara Taheri pointed out the package was giving the wrong answer in the following simple example:

    # Import path assumed from the package's examples.
    from optimaladj.CausalGraph import CausalGraph

    treatment = 'T'
    outcome = 'Y'

    L = []
    N = ['T', 'Z1', 'Z2', 'Z3', 'M1', 'M2', 'Y']

    # Confounding path Z1 -> Z2 -> Z3 -> Y (with Z1 -> T) and mediators T -> M1 -> M2 -> Y.
    G = CausalGraph()
    G.add_edges_from([('Z1', 'Z2'),
                      ('Z1', 'T'),
                      ('Z2', 'Z3'),
                      ('Z3', 'Y'),
                      ('T', 'M1'),
                      ('M1', 'M2'),
                      ('M2', 'Y')])

    G.ignore(treatment, outcome, L, N)
    

    The output should be M1, M2, and it is only M2. This is due to a bug in the method that constructs the forbidden set, specifically here. The problem is that when a node's name has more than one character, each character is added to the forbidden set separately, instead of the whole string.
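    For illustration only (this is not the package's actual code), the snippet below shows the Python pitfall described above: updating a set with a string adds each character separately, while add inserts the whole name.

    forbidden = set()
    node = 'M1'

    forbidden.update(node)   # iterates over the string: {'M', '1'}
    print(forbidden)

    forbidden = set()
    forbidden.add(node)      # inserts the whole node name: {'M1'}
    print(forbidden)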

    This PR fixes this and adds a test.

    opened by esmucler 0
Owner
Facundo Sapienza
PhD student at UC Berkeley interested in Machine Learning and Physics. Previously studied Physics and Mathematics at the University of Buenos Aires.