Code for "Finding Regions of Heterogeneity in Decision-Making via Expected Conditional Covariance" at NeurIPS 2021

Overview

Finding Regions of Heterogeneity in Decision-Making via Expected Conditional Covariance

Justin Lim, Christina X. Ji, Michael Oberst, Saul Blecker, Leora Horwitz, and David Sontag. 2021. Finding Regions of Heterogeneity in Decision-Making via Expected Conditional Covariance. In Thirty-fifth Conference on Neural Information Processing Systems.

Individuals often make different decisions when faced with the same context, due to personal preferences and background. For instance, judges may vary in their leniency towards certain drug-related offenses, and doctors may vary in their preference for how to start treatment for certain types of patients. With these examples in mind, we present an algorithm for identifying types of contexts (e.g., types of cases or patients) with high inter-decision-maker disagreement. We formalize this as a causal inference problem, seeking a region where the assignment of decision-maker has a large causal effect on the decision. We give an iterative algorithm to find a region maximizing this objective and give a generalization bound for its performance. In a semi-synthetic experiment, we show that our algorithm recovers the true region of disagreement more accurately than baselines. Finally, we apply our algorithm to real-world healthcare datasets, recovering variation that aligns with existing clinical knowledge.
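For intuition, the estimand named in the title is the expected conditional covariance between decision-maker assignment and decision within a region. Below is a minimal plug-in sketch of that quantity, assuming binary agent groups and binary decisions; it is illustrative only and is not the implementation in this repository.

```python
# Illustrative sketch, not the paper's implementation: a cross-fitted
# plug-in estimate of the expected conditional covariance E[Cov(A, Y | X)]
# over a candidate region, using the identity
# Cov(A, Y | X) = E[AY | X] - E[A | X] E[Y | X].
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

def expected_conditional_cov(X, A, Y, region_mask):
    """Estimate E[Cov(A, Y | X) | X in region] for binary A (agent group)
    and binary Y (decision); region_mask selects the candidate region."""
    def out_of_fold_proba(target):
        # Out-of-fold predictions avoid the optimism of in-sample fitting.
        model = RandomForestClassifier(min_samples_leaf=50, random_state=0)
        return cross_val_predict(model, X, target, cv=5,
                                 method="predict_proba")[:, 1]
    mu_a, mu_y = out_of_fold_proba(A), out_of_fold_proba(Y)
    pointwise = A * Y - mu_a * mu_y   # plug-in for Cov(A, Y | X)
    return pointwise[region_mask].mean()

# Tiny synthetic check: the two agent groups disagree only when x0 > 0.
rng = np.random.default_rng(0)
X = rng.normal(size=(4000, 3))
A = rng.integers(0, 2, size=4000)
Y = rng.binomial(1, np.where(X[:, 0] > 0, np.where(A == 1, 0.8, 0.2), 0.5))
print(expected_conditional_cov(X, A, Y, X[:, 0] > 0))   # noticeably positive
print(expected_conditional_cov(X, A, Y, X[:, 0] <= 0))  # near zero
```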

To run our algorithm, see run_semisynth_exp_recover_beta.ipynb for an example of how to call IterativeRegionEstimator.py. The baselines and our model are also implemented in baselines.py. Helper functions (e.g., for evaluation) are in helpers.py.
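A hypothetical usage sketch is below; the constructor and method names are assumptions for illustration, and the notebook above shows the actual interface.

```python
# Hypothetical usage sketch: names below are assumptions, not the confirmed
# API; run_semisynth_exp_recover_beta.ipynb demonstrates the real interface.
import numpy as np
from IterativeRegionEstimator import IterativeRegionEstimator  # assumed import path

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))      # contexts (e.g., case or patient features)
A = rng.integers(0, 2, size=1000)   # decision-maker (agent group) assignment
Y = rng.integers(0, 2, size=1000)   # observed binary decision

est = IterativeRegionEstimator()    # assumed default hyperparameters
est.fit(X, A, Y)                    # assumed: iteratively fit a high-disagreement region
in_region = est.predict(X)          # assumed: boolean membership in the learned region
```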

Please refer to the following steps to reproduce the experiments and figures in this paper:

  1. To set up the required packages, run create_env.sh, passing in a name for the conda environment to create. Then run source activate with that environment name to enter it.

  2. To run the semi-synthetic experiment:

    1. Download the criminal justice dataset from https://github.com/stanford-policylab/recidivism-predictions
    2. Process the data using data_processing/semisynth_process_data.ipynb.
    3. To run the iterative algorithm and baselines, run python3 run_baselines_on_semisynth.py once per combination (Cartesian product) of the following arguments (a sweep sketch follows this list):
      1. type of model: Iterative, Direct, TarNet, ULearner, CausalForest
      2. number of agents: 2, 5, 10, 20, 40, 87 in our experiments
      3. subset: drug_possession, misdemeanor_under35
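A minimal sketch of the full sweep over this argument grid follows; the positional-argument order is an assumption and should be checked against the script.

```python
# Hedged convenience sketch: run every (model, #agents, subset) combination.
# Assumes positional arguments in this order; check run_baselines_on_semisynth.py.
import itertools
import subprocess

models = ["Iterative", "Direct", "TarNet", "ULearner", "CausalForest"]
agent_counts = [2, 5, 10, 20, 40, 87]
subsets = ["drug_possession", "misdemeanor_under35"]

for model, n_agents, subset in itertools.product(models, agent_counts, subsets):
    subprocess.run(
        ["python3", "run_baselines_on_semisynth.py", model, str(n_agents), subset],
        check=True,  # stop the sweep if any run fails
    )
```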
    4. Figures 1, 3, and 4 compare metrics for the methods. They can be produced by running plot_semisynth.ipynb.
    5. Figure 2 examines tuning the region size. run_semisynth_exp_recover_beta.ipynb is a stand-alone notebook for reproducing it.
    6. Figures 5 and 6 examine convergence of the iterative algorithm. They can be produced by running plot_convergence.ipynb.
    7. Figures 7 and 8 examine how robust the iterative algorithm and direct baselines are to violations of the assumption that there are two agent groups. First, run python3 run_robustness_semisynth_experiment.py once per combination of the following arguments (a sweep sketch follows this list):
      1. type of model: Iterative, Direct
      2. number of groups: 2, 3, 5, 10
      3. subset: drug_possession, misdemeanor_under35
      Note that the number of agents is fixed at 40. The figures can then be produced by running plot_robustness.ipynb.
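A matching sketch for the robustness sweep, under the same positional-argument assumption:

```python
# Hedged sketch of the robustness sweep (the number of agents is fixed at 40
# by the script, per the text). Positional-argument order is an assumption.
import itertools
import subprocess

models = ["Iterative", "Direct"]
group_counts = [2, 3, 5, 10]
subsets = ["drug_possession", "misdemeanor_under35"]

for model, n_groups, subset in itertools.product(models, group_counts, subsets):
    subprocess.run(
        ["python3", "run_robustness_semisynth_experiment.py", model, str(n_groups), subset],
        check=True,
    )
```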
    8. Note: Helper code that is called to generate semi-synthetic data is located in semisynth_subsets.py, semisynth_dataloader.py, and semisynth_dataloader_robust.py.
  3. The real-world diabetes experiment uses proprietary data extracted using generate_t2dm_cohort.sql and first_line.sql.

    1. Select an outcome model from among logistic regression, decision tree, and random forest candidates, based on AUC, calibration, and partial dependence plots (an illustrative selection sketch follows this list). Figure 9 and the statistics in Table 2 that guided our selection of a random forest outcome model are produced in select_outcome_model_for_diabetes_experiment.ipynb.
    2. The experiment is run with python3 run_baseline_models.py diabetes Iterative DecisionTree RandomForest. This outputs Figure 10b, the information needed to create Figure 10a, the statistics in Tables 1 and 3, and the fold-consistency evaluation.
    3. Note: Data-loading helper functions, including the data-splitting logic, are located in real_data_loader.py. Most of the functions called to generate the output are located in realdata_analysis.py.
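As a reference for step 1, model selection by cross-validated AUC might look like the sketch below; the synthetic data stands in for the proprietary cohort, and the actual notebook additionally weighs calibration and partial dependence plots before settling on a random forest.

```python
# Illustrative sketch only: compare candidate outcome models by mean
# cross-validated AUC. Synthetic data stands in for the proprietary cohort.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(max_depth=5),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in candidates.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean CV AUC = {auc:.3f}")
```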
  4. The real-world Parkinson's experiment was run using open-access data.

    1. Download the data from https://www.ppmi-info.org/.
    2. Run python3 ppmi_feature_extraction.py, passing in the directory containing the downloaded raw data and the directory where the processed data should be written.
    3. Manually process the treatment data to correct typos in the drug names and treatment dates.
    4. Run process_parkinsons_data.ipynb to gather the data for the experiment.
    5. The experiment is run with python3 run_baseline_models.py ppmi Iterative DecisionTree. This outputs the information needed to create Figure 11 and Table 4.