CAUSE: Causality from AttribUtions on Sequence of Events

Overview

CAUSE (Causality from AttribUtions on Sequence of Events) learns Granger causality from event sequences using attribution methods. The method was presented at ICML 2020.

How to Run

  1. Install the dependencies:

    conda env create -n <myenv> -f environment.yml
    conda activate <myenv>
  2. Run the scripts for the individual datasets:

./scripts/run_excitation.sh all
./scripts/run_inhibition.sh all
./scripts/run_syngergy.sh all
./scripts/run_iptv.sh all
./scripts/run_memetracker.sh all

If you find this repo useful, please consider citing:

@inproceedings{zhang2020cause,
  title={{CAUSE}: Learning {Granger} causality from event sequences using attribution methods},
  author={Zhang, Wei and Panum, Thomas and Jha, Somesh and Chalasani, Prasad and Page, David},
  booktitle={International Conference on Machine Learning},
  pages={11235--11245},
  year={2020},
  organization={PMLR}
}
Comments
  • Where is the code for the proposed model CAUSE?

    Thanks for sharing the code! I could identify the baseline methods such as HExp and RPPN in the pkg folder, but which file implements the proposed model CAUSE? Is ERPP just CAUSE?

    opened by mileyyao 2
  • Interpreting the Granger causality results

    I am using the code to find Granger causalities in a dataset of mine, and I would really appreciate help interpreting the results. Are the Granger causalities reported in the matrix variable A_pred? The weights in this matrix are both positive and negative. Does a positive weight indicate an excitatory effect and a negative weight an inhibitory effect between event types? Thank you in advance for the help!

    opened by NOOR-JAMALUDEEN 0
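The question above is about reading a signed score matrix. Below is a minimal sketch of how such a matrix can be read once the sign convention is fixed. It assumes `A_pred` is an `n_types × n_types` NumPy array of attribution scores in which entry `[i, j]` scores the influence of event type `i` on type `j`; the matrix values, names, and the 0.1 threshold here are illustrative, not taken from this repo, and the convention should be confirmed against the paper.

```python
import numpy as np

# Hypothetical 3-type attribution matrix standing in for A_pred.
A_pred = np.array([
    [ 0.8, -0.2,  0.0],
    [ 0.1,  0.5, -0.7],
    [ 0.0,  0.3,  0.4],
])

def describe_effects(A, names, threshold=0.1):
    """Label each off-diagonal entry above |threshold| as excitatory or inhibitory."""
    effects = []
    n = A.shape[0]
    for i in range(n):
        for j in range(n):
            if i == j or abs(A[i, j]) < threshold:  # skip self-effects and weak scores
                continue
            kind = "excites" if A[i, j] > 0 else "inhibits"
            effects.append(f"{names[i]} {kind} {names[j]} ({A[i, j]:+.1f})")
    return effects

for line in describe_effects(A_pred, ["A", "B", "C"]):
    print(line)
```

With this convention, the sketch reports, for example, that type B inhibits type C, while type C excites type B.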
  • ModuleNotFoundError: No module named 'sklearn.externals.joblib'

    Scripts throw a "ModuleNotFoundError: No module named 'sklearn.externals.joblib'" error:

        Traceback (most recent call last):
          File "/Users/user1/MyGitRepo/CAUSE/tasks/train.py", line 269, in <module>
            model = get_model(args, n_types)
          File "/Users/user1/MyGitRepo/CAUSE/tasks/train.py", line 61, in get_model
            from tick.hawkes import HawkesExpKern
          File "/Users/user1/opt/miniconda3/envs/Cause/lib/python3.7/site-packages/tick/hawkes/__init__.py", line 3, in <module>
            from .model import (
          File "/Users/user1/opt/miniconda3/envs/Cause/lib/python3.7/site-packages/tick/hawkes/model/__init__.py", line 4, in <module>
            import tick.base_model.build.base_model
          File "/Users/user1/opt/miniconda3/envs/Cause/lib/python3.7/site-packages/tick/base_model/__init__.py", line 5, in <module>
            from .model_labels_features import ModelLabelsFeatures
          File "/Users/user1/opt/miniconda3/envs/Cause/lib/python3.7/site-packages/tick/base_model/model_labels_features.py", line 6, in <module>
            from tick.preprocessing.utils import safe_array
          File "/Users/user1/opt/miniconda3/envs/Cause/lib/python3.7/site-packages/tick/preprocessing/__init__.py", line 4, in <module>
            from .longitudinal_features_product import LongitudinalFeaturesProduct
          File "/Users/user1/opt/miniconda3/envs/Cause/lib/python3.7/site-packages/tick/preprocessing/longitudinal_features_product.py", line 8, in <module>
            from sklearn.externals.joblib import Parallel, delayed
        ModuleNotFoundError: No module named 'sklearn.externals.joblib'

    opened by satcos 2
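The error above arises because tick imports joblib from `sklearn.externals.joblib`, a vendored path that scikit-learn removed in version 0.23. One common workaround (a sketch, not a fix shipped with this repo) is to pin `scikit-learn<0.23` in environment.yml, or to alias the standalone joblib package to the old path before tick is imported:

```python
import sys

import joblib  # standalone joblib package (installed alongside scikit-learn)

# scikit-learn >= 0.23 no longer vendors joblib under sklearn.externals,
# but tick still imports it from there. Registering the standalone package
# under the old dotted path lets the legacy import resolve.
sys.modules["sklearn.externals.joblib"] = joblib
```

This shim must run before any `from tick.hawkes import ...`; pinning the scikit-learn version avoids needing it at all.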
Owner
Wei Zhang
ML PhD@UW-Madison