# DIR-GNN
"Discovering Invariant Rationales for Graph Neural Networks" (ICLR 2022) aims to train intrinsically interpretable Graph Neural Networks that generalize to out-of-distribution datasets. The core of this work lies in constructing environments (i.e., interventional distributions) and thereby discovering the causal features for rationalization.
## Installation
- Main packages: PyTorch >= 1.5.0, PyTorch Geometric >= 1.7.0, OGB >= 1.3.0.
- See `requirements.txt` for other packages.
## Data download
- Spurious-Motif: this dataset can be generated via `spmotif_gen/spmotif.ipynb`.
- Graph-SST2: this dataset can be downloaded here.
- MNIST-75sp: this dataset can be downloaded here. Download `mnist_75sp_train.pkl`, `mnist_75sp_test.pkl`, and `mnist_75sp_noise.pt` to the directory `data/MNISTSP/raw/`.
- OGBG-Molhiv: this dataset will be downloaded automatically.
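After placing the MNIST-75sp files, a quick sanity check can confirm they are where the loader expects them. This is a minimal sketch, not part of the repository; the file names and the `data/MNISTSP/raw/` path follow the list above.

```python
from pathlib import Path

# Expected MNIST-75sp files, per the download instructions above.
EXPECTED = ["mnist_75sp_train.pkl", "mnist_75sp_test.pkl", "mnist_75sp_noise.pt"]

def missing_files(raw_dir="data/MNISTSP/raw"):
    """Return the expected MNIST-75sp files that are not yet present in raw_dir."""
    root = Path(raw_dir)
    return [name for name in EXPECTED if not (root / name).is_file()]

if __name__ == "__main__":
    missing = missing_files()
    if missing:
        print("Missing files:", ", ".join(missing))
    else:
        print("All MNIST-75sp files found.")
```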
## Run DIR
The hyper-parameters used to train the intrinsically interpretable models are set as defaults in the `argparse.ArgumentParser` in the training files. Feel free to change them if needed. We use a separate training file for each dataset because the graph convolutional layers of the rationale generators differ.

Simply run `python spmotif_dir.py` to reproduce the results in the paper.
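The default-hyper-parameter pattern described above can be sketched as follows. This is a hypothetical illustration only: the actual flag names and default values live in each training file (e.g. `spmotif_dir.py`), and the flags shown here are assumptions, not the repository's real interface.

```python
import argparse

def build_parser():
    """Sketch of a training-file parser whose defaults reproduce the paper's
    setting. Flag names and values here are illustrative, not the repo's."""
    parser = argparse.ArgumentParser(description="Train DIR (sketch)")
    parser.add_argument("--epoch", type=int, default=400,
                        help="number of training epochs")
    parser.add_argument("--lr", type=float, default=1e-3,
                        help="learning rate")
    parser.add_argument("--causal_ratio", type=float, default=0.25,
                        help="fraction of edges kept as the rationale")
    return parser

if __name__ == "__main__":
    # Running with no flags uses the paper's defaults; any flag overrides them,
    # e.g. `python spmotif_dir.py --lr 5e-4`.
    args = build_parser().parse_args()
    print(args)
```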
(TODO)
## Reference
```bibtex
@inproceedings{shirley2022dir,
  title={Discovering Invariant Rationales for Graph Neural Networks},
  author={Ying-Xin Wu and Xiang Wang and An Zhang and Xiangnan He and Tat-Seng Chua},
  booktitle={ICLR},
  year={2022},
}
```