Code for "Multi-Time Attention Networks for Irregularly Sampled Time Series", ICLR 2021.

Overview

Multi-Time Attention Networks (mTANs)

This repository contains the PyTorch implementation for the paper Multi-Time Attention Networks for Irregularly Sampled Time Series by Satya Narayan Shukla and Benjamin M. Marlin. This work has been accepted at the International Conference on Learning Representations, 2021.
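At its core, an mTAN module learns a continuous embedding of time and uses attention to re-represent an irregularly sampled series at a set of reference time points. Below is a minimal conceptual sketch of this mechanism, not the repository's implementation; module names and tensor layouts are illustrative:

```python
import torch
import torch.nn as nn

class TimeEmbedding(nn.Module):
    """Learnable continuous-time embedding: one linear component plus
    sinusoids with learnable frequencies and phases."""
    def __init__(self, embed_dim):
        super().__init__()
        self.linear = nn.Linear(1, 1)
        self.periodic = nn.Linear(1, embed_dim - 1)

    def forward(self, t):             # t: (batch, seq_len) time values
        t = t.unsqueeze(-1)           # -> (batch, seq_len, 1)
        return torch.cat([self.linear(t), torch.sin(self.periodic(t))], dim=-1)

def mtan_attention(embed, ref_t, obs_t, values, mask):
    """Re-represent an irregularly sampled series at fixed reference times.
    values: (B, T, C) observations; mask: (B, T) with 1 = observed."""
    q = embed(ref_t)                                       # (B, R, D)
    k = embed(obs_t)                                       # (B, T, D)
    scores = q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5  # (B, R, T)
    scores = scores.masked_fill(mask.unsqueeze(1) == 0, float("-inf"))
    return torch.softmax(scores, dim=-1) @ values          # (B, R, C)
```

In the full model, attention outputs of this kind feed the RNN-based encoder and decoder selected by the --enc mtan_rnn and --dec mtan_rnn options used in the commands below.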

Requirements

The code requires Python 3.7 or later. The file requirements.txt contains the full list of required Python modules.

pip3 install -r requirements.txt
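Note that the training scripts assume a CUDA-capable GPU (see the issues below). A quick, optional sanity check after installing, assuming nothing beyond a working PyTorch install:

```python
import torch

print(torch.__version__)  # should match the version pinned in requirements.txt
print("CUDA available:", torch.cuda.is_available())  # the scripts move models and tensors to the GPU
```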

Training and Evaluation

  1. Interpolation Task on Toy Dataset
python3 tan_interpolation.py --niters 5000 --lr 0.0001 --batch-size 128 --rec-hidden 32 --latent-dim 1 --length 20 --enc mtan_rnn --dec mtan_rnn --n 1000  --gen-hidden 50 --save 1 --k-iwae 5 --std 0.01 --norm --learn-emb --kl --seed 0 --num-ref-points 20 --dataset toy
  2. Interpolation Task on PhysioNet Dataset
python3 tan_interpolation.py --niters 500 --lr 0.001 --batch-size 32 --rec-hidden 64 --latent-dim 16 --quantization 0.016  --enc mtan_rnn --dec mtan_rnn --n 8000  --gen-hidden 50 --save 1 --k-iwae 5 --std 0.01 --norm --learn-emb --kl --seed 0 --num-ref-points 64 --dataset physionet --sample-tp 0.9
  3. Classification Task on PhysioNet Dataset (mTAND-Full)
python3 tan_classification.py --alpha 100 --niters 300 --lr 0.0001 --batch-size 50 --rec-hidden 256 --gen-hidden 50 --latent-dim 20 --enc mtan_rnn --dec mtan_rnn --n 8000 --quantization 0.016 --save 1 --classif --norm --kl --learn-emb --k-iwae 1 --dataset physionet
  4. Classification Task on PhysioNet Dataset (mTAND-Enc)
python3 tanenc_classification.py --niters 200 --lr 0.0001 --batch-size 128 --rec-hidden 128 --enc mtan_enc --n 8000 --quantization 0.016 --save 1 --classif --num-heads 1 --learn-emb --dataset physionet --seed 0
  5. Classification Task on MIMIC-III Dataset (mTAND-Full)
python3 tan_classification.py --alpha 5 --niters 300 --lr 0.0001 --batch-size 128 --rec-hidden 256 --gen-hidden 50 --latent-dim 128 --enc mtan_rnn --dec mtan_rnn   --save 1 --classif --norm --learn-emb --k-iwae 1 --dataset mimiciii

For the MIMIC-III dataset, you first need access to the data, which can be requested at https://mimic.physionet.org/. We follow the data extraction process described at https://github.com/mlds-lab/interp-net.

  6. Classification Task on MIMIC-III Dataset (mTAND-Enc)
python3 tanenc_classification.py --niters 200 --lr 0.0001 --batch-size 256 --rec-hidden 256 --enc mtan_enc  --quantization 0.016 --save 1 --classif --num-heads 1 --learn-emb --dataset mimiciii --seed 0
  7. Classification Task on Human Activity Dataset (mTAND-Enc)
python3 tanenc_classification.py --niters 1000 --lr 0.001 --batch-size 256 --rec-hidden 512 --enc mtan_enc_activity  --quantization 0.016 --save 1 --classif --num-heads 1 --learn-emb --dataset activity --seed 0 --classify-pertp

Interpolation Results

Interpolation performance on PhysioNet is evaluated with a varying percentage of observed time points.
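The fraction of observed points that the model conditions on is controlled by the --sample-tp flag; the remaining points are held out as interpolation targets. A hypothetical helper sketching this subsampling (illustrative only, not the repository's code):

```python
import numpy as np

def subsample_timepoints(mask, keep_frac=0.9, seed=0):
    """Zero out a random (1 - keep_frac) fraction of the observed time
    points in each series. mask: (n_series, n_times), 1 = observed."""
    rng = np.random.default_rng(seed)
    sub = mask.copy()
    for i in range(mask.shape[0]):
        observed = np.flatnonzero(mask[i])
        n_drop = int(round(len(observed) * (1.0 - keep_frac)))
        dropped = rng.choice(observed, size=n_drop, replace=False)
        sub[i, dropped] = 0
    return sub
```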

Classification Results

Classification performance on PhysioNet, MIMIC-III, and the Human Activity dataset, and time per epoch (in minutes) for all methods on PhysioNet.

Reference

@inproceedings{shukla2021multitime,
  title     = {Multi-Time Attention Networks for Irregularly Sampled Time Series},
  author    = {Satya Narayan Shukla and Benjamin Marlin},
  booktitle = {International Conference on Learning Representations},
  year      = {2021},
  url       = {https://openreview.net/forum?id=4c0J6lwQ4_}
}
Comments
  • Doubts about the results of Interpolation on PhysioNet

    The reported MSE for interpolation on PhysioNet with 90% of time points observed is 4.798 ± 0.036 ×10^−3, but I get 1.2 ×10^−3 after running python3 tan_interpolation.py --niters 500 --lr 0.001 --batch-size 32 --rec-hidden 64 --latent-dim 16 --quantization 0.016 --enc mtan_rnn --dec mtan_rnn --n 8000 --gen-hidden 50 --save 1 --k-iwae 5 --std 0.01 --norm --learn-emb --kl --seed 0 --num-ref-points 64 --dataset physionet --sample-tp 0.9.

    If the test data and predictions are rescaled back to the original scale (undoing the preprocessing normalization), I get an MSE of 4225.6895.

    So how can I reproduce the results presented in the paper?

    opened by DingShizhe 1
  • Not executable without NVIDIA GPU

    When running the tan_interpolation.py command, I get the following:

    50078 (1000, 20) (1000, 20) (1000, 100) (1000, 20, 3)
    [[ 0.8516054 1. 0.09 ] [ 0.95830714 1. 0.2 ] [ 1.33468433 1. 0.25 ] [ 1.95121209 1. 0.37 ] [ 1.88823672 1. 0.39 ] [ 1.18921462 1. 0.46 ] [ 1.03273212 1. 0.47 ] [ 0.70050108 1. 0.49 ] [ 0.26575831 1. 0.64 ] [ 0.42114019 1. 0.69 ] [ 0.33637057 1. 0.72 ] [ 0.08979964 1. 0.77 ] [ 0.01292361 1. 0.79 ] [-0.01584657 1. 0.8 ] [-0.03790797 1. 0.81 ] [-0.05342294 1. 0.82 ] [-0.04604355 1. 0.87 ] [-0.03038688 1. 0.88 ] [-0.03038688 1. 0.88 ] [ 0.27021355 1. 0.99 ]]
    (800, 20, 3) (200, 20, 3)
    parameters: 49400 64381
    Traceback (most recent call last):
      File "tan_interpolation.py", line 129, in
        out = rec(torch.cat((subsampled_data, subsampled_mask), 2), subsampled_tp)

    (some more traceback frames)

    and then,

    AssertionError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx

    Is there a way to run these scripts without an NVIDIA GPU?

    opened by mzabaletasar 0
  • Command for executing "Classification Task on Human Activity Dataset (mTAND-Full)"

    Thanks a lot for making the code available with clear instructions for running the different experiments. Could you please also include on the GitHub page the command for "Classification Task on Human Activity Dataset (mTAND-Full)"? Currently, the page only contains the mTAND-Enc command for the Human Activity dataset ("7. Classification Task on Human Activity Dataset (mTAND-Enc)").

    opened by SrividyaTR 0
  • Results are not reproducible on MIMIC-III

    I used the described data extraction process to get the MIMIC-III dataset, which contains 53,211 records, and used the given code to split the train, validation, and test sets. Running mTAN-full with the given hyperparameters, I do not achieve the reported 0.8544 AUROC on MIMIC-III; the highest I get is ~0.838 AUROC.

    This is achieved with this command: python3 tan_classification.py --alpha 5 --niters 300 --lr 0.0001 --batch-size 128 --rec-hidden 256 --gen-hidden 50 --latent-dim 128 --enc mtan_rnn --dec mtan_rnn --save 1 --classif --norm --learn-emb --k-iwae 1 --dataset mimiciii

    Classification Task on MIMIC-III Dataset (mTAND-Full).log

    opened by ChongWang2000 1
  • Results are not reproducible at all

    I was able to run the code without problems, but the hyperparameters given for mTAN-full do not achieve the reported 0.858 AUROC on PhysioNet 2012. The highest I get is ~0.827 AUROC with ~0.45 AUPRC.

    This is achieved with this command: python3 tan_classification.py --alpha 100 --niters 300 --lr 0.0001 --batch-size 50 --rec-hidden 256 --gen-hidden 50 --latent-dim 20 --enc mtan_rnn --dec mtan_rnn --n 8000 --quantization 0.016 --save 1 --classif --norm --kl --learn-emb --k-iwae 1 --dataset physionet

    opened by CubicQubit 1
  • Not able to reproduce results for PhysioNet dataset

    Hi,

    I downloaded the code from the git repository and ran the full mTAN classifier on my local machine with the same hyperparameters as listed. My results do not match the published ones: the paper reports a mean AUROC of 0.858, while I get 0.830.

    I could not install torch==1.4.0, so I used torch==1.9. I am attaching the result for your reference: physionet-cnn-0-569442.log

    opened by yalavarthivk 1
Owner

The Laboratory for Robust and Efficient Machine Learning