Deconfounding Temporal Autoencoder: Estimating Treatment Effects over Time Using Noisy Proxies

Overview

Deconfounding Temporal Autoencoder (DTA)

This is a repository for the paper "Deconfounding Temporal Autoencoder: Estimating Treatment Effects over Time Using Noisy Proxies".
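For intuition, the sketch below shows what a temporal autoencoder over noisy proxies can look like in PyTorch: an LSTM encoder compresses the proxy time series into low-dimensional substitute confounders, and a decoder reconstructs the proxies. This is only an illustrative sketch with assumed dimensions, layers, and loss; it is not the architecture or training objective implemented in main.py.

```python
# Illustrative sketch only (assumed layers/dimensions), not the paper's exact model.
import torch
import torch.nn as nn

class TemporalAutoencoder(nn.Module):
    def __init__(self, proxy_dim: int, latent_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.encoder = nn.LSTM(proxy_dim, hidden_dim, batch_first=True)
        self.to_latent = nn.Linear(hidden_dim, latent_dim)
        self.decoder = nn.LSTM(latent_dim, hidden_dim, batch_first=True)
        self.to_proxy = nn.Linear(hidden_dim, proxy_dim)

    def forward(self, x):
        # x: (batch, time, proxy_dim) noisy proxies
        h, _ = self.encoder(x)
        z = self.to_latent(h)         # substitute confounders per time step
        h_dec, _ = self.decoder(z)
        x_hat = self.to_proxy(h_dec)  # reconstructed proxies
        return x_hat, z

# Reconstruction loss on the proxies; the learned z could then be passed to a
# downstream outcome model (e.g. an MSM or RMSN) in place of the raw proxies.
model = TemporalAutoencoder(proxy_dim=25, latent_dim=5)
x = torch.randn(8, 30, 25)            # e.g. 30 time steps, 25 covariates
x_hat, z = model(x)
loss = nn.functional.mse_loss(x_hat, x)
loss.backward()
```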

Python scripts

The following scripts are used to reproduce the results:

main.py (main script containing the implementation code)

sim_exp.py (reproducing the simulation experiments)

mimic_exp.py (reproducing the results with MIMIC-III)

The first two scripts are self-contained. The last one requires the pre-processed MIMIC-III data set. The pre-processing pipeline is adopted from Wang et al. (2020); see the references below.

Requirements

Python 3.6
PyTorch 1.7
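A quick way to check that the environment matches these versions (assuming they are the intended ones):

```python
# Optional sanity check for the assumed environment (Python 3.6, PyTorch 1.7).
import sys
import torch

print(sys.version)        # expect 3.6.x
print(torch.__version__)  # expect 1.7.x
```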

Reproducing simulation results

Run the script sim_exp.py. The script contains the synthetic data simulation and the code that produces the reported results.
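As a rough illustration of the kind of confounded time series such an experiment works with, the sketch below generates a hidden confounder L, noisy proxies X, a treatment A whose assignment depends on L, and an outcome Y, with gamma controlling the confounding strength. All distributions, coefficients, and dimensions here are assumptions and do not reproduce the actual simulation in sim_exp.py.

```python
# Hypothetical, simplified data-generating sketch for intuition only.
import numpy as np

rng = np.random.default_rng(0)
n, T, p = 1000, 30, 25           # patients, time steps, proxies (assumed sizes)
gamma = 0.4                      # confounding strength (0 = no confounding)

L = np.zeros((n, T))             # hidden confounder
X = np.zeros((n, T, p))          # noisy proxies of L
A = np.zeros((n, T))             # treatment
Y = np.zeros((n, T))             # outcome
W = rng.normal(size=p)           # loadings of the proxies on the confounder

for t in range(T):
    prev = L[:, t - 1] if t > 0 else np.zeros(n)
    L[:, t] = 0.5 * prev + rng.normal(size=n)                  # autoregressive confounder
    X[:, t] = L[:, t, None] * W + rng.normal(size=(n, p))      # noisy proxies
    p_treat = 1.0 / (1.0 + np.exp(-gamma * L[:, t]))           # confounded assignment
    A[:, t] = rng.binomial(1, p_treat)
    Y[:, t] = A[:, t] + gamma * L[:, t] + 0.1 * rng.normal(size=n)
```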

Reproducing MIMIC-III results

Run the script mimic_exp.py. The script requires the pre-processed MIMIC-III data as input (see references below).

MIMIC-III is a freely accessible database; however, access must be requested at https://physionet.org/content/mimiciii/1.4/. Once MIMIC-III access is granted, the pre-processed data set by Wang et al. (2020) can be obtained by following the instructions in their paper. We use this data set to study the effect of vasopressors and mechanical ventilation on two outcome variables: (diastolic) blood pressure and oxygen saturation.

We extract 2313 patients with 30 time steps each. We use the following covariates: heart rate, red blood cell count, sodium, mean blood pressure, systemic vascular resistance, glucose, chloride urine, Glasgow coma scale total, hematocrit, positive end-expiratory pressure set, respiratory rate, prothrombin time (PT), cholesterol, hemoglobin, creatinine, blood urea nitrogen, bicarbonate, calcium ionized, partial pressure of carbon dioxide, magnesium, anion gap, phosphorous, venous pvo2, platelets, and calcium urine.
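For convenience, the covariates above can be collected in a Python list, e.g. when subsetting the pre-processed data; note that the exact column labels in the MIMIC-Extract output may differ from the prose names used here.

```python
# Covariate names as listed above; actual column labels in the pre-processed
# MIMIC-Extract data may be spelled slightly differently (assumption).
COVARIATES = [
    "heart rate", "red blood cell count", "sodium", "mean blood pressure",
    "systemic vascular resistance", "glucose", "chloride urine",
    "glasgow coma scale total", "hematocrit", "positive end-expiratory pressure set",
    "respiratory rate", "prothrombin time pt", "cholesterol", "hemoglobin",
    "creatinine", "blood urea nitrogen", "bicarbonate", "calcium ionized",
    "partial pressure of carbon dioxide", "magnesium", "anion gap",
    "phosphorous", "venous pvo2", "platelets", "calcium urine",
]
assert len(COVARIATES) == 25
```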

References

Johnson, A. E., Pollard, T. J., Shen, L., Lehman, L. H., Feng, M., Ghassemi, M., ... & Mark, R. G. (2016). MIMIC-III, a freely accessible critical care database. Scientific Data, 3(1), 1-9.

Wang, S., McDermott, M. B., Chauhan, G., Ghassemi, M., Hughes, M. C., & Naumann, T. (2020, April). MIMIC-extract: A data extraction, preprocessing, and representation pipeline for MIMIC-III. In Proceedings of the ACM Conference on Health, Inference, and Learning (pp. 222-235).


Comments
  • Reproducing Figure 3

    I have read your paper with great interest and was looking into running the code in this repository on my own machine. Unfortunately, I am unable to do so when running the code in sim_exp.py (or main.py) out of the box. Instead of a replicate of Figure 3a (marginal structural model), I get the following results:

    [figure: attempted reproduction of Figure 3a]

    Two main differences immediately jump out:

    1. There is little difference between the three models: i) proxy confounders X only (base), ii) DTA-deconfounded proxies (dta), and iii) an oracle with access to the true confounders L (oracle).
    2. The absolute value of the MSE is similar to that reported in the paper for no confounding (gamma=0), but it is much lower than reported for gamma=0.4 or gamma=0.8.

    Due to computation time, I have only run the full 10-seed, 50-hyperparameter search for MSM with the above three levels of gamma and rank=5, but I get comparable results in individual runs with different gammas, different ranks, and with RMSN.

    Do you maybe have any idea as to what might be going wrong?

    opened by prockenschaub