Code for SyncTwin: Treatment Effect Estimation with Longitudinal Outcomes (NeurIPS 2021)

Overview


SyncTwin is a treatment effect estimation method tailored for observational studies with longitudinal data. Specifically, it applies to the LIP setting: Longitudinal, Irregular and Point treatment. In these studies, the covariates are observed at irregular intervals in the period leading up to treatment allocation; the outcomes are measured longitudinally before and after the treatment; and the treatment is assigned at a single time point and remains unchanged for the rest of the study.
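
For illustration, one individual's record in this setting might be laid out as below. The variable names and shapes are hypothetical and only sketch the LIP structure; they are not the data format used by this repository.

import numpy as np

# Hypothetical record for one individual in the LIP setting.
# Covariates are observed at irregular visit times before treatment; a mask
# marks which covariates were actually measured at each visit.
cov_times = np.array([0.0, 1.7, 4.2, 9.5])     # irregular visit times
covariates = np.random.randn(4, 10)            # 4 visits, 10 covariates
cov_mask = np.random.rand(4, 10) < 0.7         # True = observed, False = missing

treatment = 1                                  # point treatment, fixed after assignment
y_pre = np.random.randn(12)                    # outcomes measured before treatment
y_post = np.random.randn(8)                    # outcomes measured after treatment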

The key insight of SyncTwin is to fully leverage the pre-treatment outcomes. It uses the temporal structure in the outcome time series to improve the accuracy of counterfactual prediction. It further uses the pre-treatment outcomes to control the estimation error at the individual level. Finally, the method enables interpretability by example: the user can examine the key contributing examples that lead to the estimate.
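
As a minimal sketch of this idea (hypothetical variable names, not the repository's API): once contribution weights over control individuals have been learned, the counterfactual outcome is the weighted combination of the controls' post-treatment outcomes, the individual treatment effect is the difference from the observed outcome, and the pre-treatment fit of the same weights gives a handle on the estimation error.

import numpy as np

# y_pre_control:  (n_control, T_pre)  pre-treatment outcomes of control units
# y_post_control: (n_control, T_post) post-treatment outcomes of control units
# y_pre, y_post:  outcomes of one treated individual
# weights:        (n_control,) non-negative contribution weights learned by the model

def estimate_effect(weights, y_pre, y_post, y_pre_control, y_post_control):
    # Counterfactual: the treated individual's outcome without treatment,
    # reconstructed from its "synthetic twin" of control individuals.
    y_post_counterfactual = weights @ y_post_control

    # Pre-treatment fit: the same weights should also reproduce the observed
    # pre-treatment outcomes; a large error here flags an unreliable estimate.
    pre_treatment_error = np.mean((weights @ y_pre_control - y_pre) ** 2)

    # Individual treatment effect over the post-treatment period.
    ite = y_post - y_post_counterfactual
    return ite, pre_treatment_error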

Installation

To run the code locally, first install the required Python packages specified in requirements.txt (e.g. pip install -r requirements.txt). Python 3.7 is recommended for best compatibility. Note that TensorFlow and GPy are only needed for running the benchmarks. The directory clairvoyance contains a streamlined version of the clairvoyance library; it is used to run the CRN and RMSN benchmarks.

For some benchmarks (SC, MC-NNM, 1NN), we use their publicly available R implementations. To run these benchmarks, please install R and the dependencies listed in requirements_R.txt.

For conda users, an environment YAML file environment.yml is provided, which includes both the Python and R dependencies.

Usage

Scripts for reproducing paper experiments are provided under the directory experiments/.

The reproduce_all.sh shell script contains commands to reproduce all tables and figures in the paper. The Fig[x].sh and Tab[x].sh shell scripts contain commands to generate the results for individual figures or tables, and the Fig[x].ipynb notebooks contain the commands to create the visualizations. The results are written to the results folder; for instance, Tab2_C1_MAE.txt corresponds to the first column of Table 2.

An implementation of SyncTwin is provided in the file SyncTwin.py. Note that SyncTwin is a general framework agnostic to the exact architectural choice of encoder and decoder; in this implementation, we use an attentive GRU-D encoder and a time-LSTM decoder. In the simulations, SyncTwin is trained in pkpd_sim3_model_training.py.
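
The sketch below illustrates this encoder-decoder framework in general terms. It is not the code in SyncTwin.py; the class names and shapes are hypothetical stand-ins, with a plain GRU replacing the attentive GRU-D encoder and a linear map replacing the time-LSTM decoder.

import torch
import torch.nn as nn

class SimpleEncoder(nn.Module):
    # Stand-in for the attentive GRU-D encoder: maps a pre-treatment
    # sequence (batch, time, features) to a fixed-size representation.
    def __init__(self, n_features, hidden_dim):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden_dim, batch_first=True)

    def forward(self, x):
        _, h = self.rnn(x)
        return h.squeeze(0)            # (batch, hidden_dim)

class SimpleDecoder(nn.Module):
    # Stand-in for the time-LSTM decoder: reconstructs the pre-treatment
    # outcome sequence from the representation.
    def __init__(self, hidden_dim, t_pre):
        super().__init__()
        self.net = nn.Linear(hidden_dim, t_pre)

    def forward(self, c):
        return self.net(c)             # (batch, t_pre)

# Toy forward pass: encode pre-treatment covariates (aligned to a common
# grid here for simplicity) and reconstruct the pre-treatment outcomes.
x = torch.randn(32, 25, 10)            # 32 individuals, 25 steps, 10 covariates
encoder, decoder = SimpleEncoder(10, 16), SimpleDecoder(16, 25)
representation = encoder(x)
y_pre_reconstruction = decoder(representation)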

Citation

If you find the software useful, please consider citing the following paper:

@inproceedings{synctwin2021,
  title={SyncTwin: Treatment Effect Estimation with Longitudinal Outcomes},
  author={Qian, Zhaozhi and Zhang, Yao and Bica, Ioana and Wood, Angela and van der Schaar, Mihaela},
  booktitle={Advances in Neural Information Processing Systems},
  year={2021}
}

License

Copyright 2021, Zhaozhi Qian.

This software is released under the MIT license.
