DGI

An implementation of Deep Graph Infomax (DGI) in PyTorch.

Deep Graph Infomax (Veličković et al., ICLR 2019): https://arxiv.org/abs/1809.10341

Overview

Here we provide an implementation of Deep Graph Infomax (DGI) in PyTorch, along with a minimal execution example (on the Cora dataset). The repository is organised as follows:

  • data/ contains the necessary dataset files for Cora;
  • models/ contains the implementation of the DGI pipeline (dgi.py) and our logistic regressor (logreg.py);
  • layers/ contains the implementation of a GCN layer (gcn.py), the averaging readout (readout.py), and the bilinear discriminator (discriminator.py);
  • utils/ contains the necessary processing subroutines (process.py).

Finally, execute.py puts all of the above together and may be used to execute a full training run on Cora.
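
The pieces above (GCN encoder, averaging readout, bilinear discriminator) fit together as sketched below. This is a simplified illustration, not the repository's actual code: `MiniDGI` is a hypothetical name, and a plain linear layer stands in for the GCN layer in layers/gcn.py.

```python
import torch
import torch.nn as nn

class MiniDGI(nn.Module):
    """Minimal DGI sketch: encoder -> readout -> bilinear discriminator."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.fc = nn.Linear(in_dim, hid_dim)          # stand-in for the GCN layer
        self.act = nn.PReLU()
        self.disc = nn.Bilinear(hid_dim, hid_dim, 1)  # bilinear discriminator

    def forward(self, x, x_corrupt, adj):
        # Patch representations from the real and corrupted features.
        h_pos = self.act(self.fc(adj @ x))
        h_neg = self.act(self.fc(adj @ x_corrupt))
        # Averaging readout, squashed to [0, 1], broadcast to every node.
        s = torch.sigmoid(h_pos.mean(dim=0, keepdim=True)).expand_as(h_pos)
        # Raw scores for (h_i, s) pairs; positives should score higher.
        return self.disc(h_pos, s).squeeze(-1), self.disc(h_neg, s).squeeze(-1)

# Training step: positives labelled 1, negatives 0. Note that
# BCEWithLogitsLoss applies the sigmoid internally, so the
# discriminator can emit raw scores rather than probabilities.
```

In this sketch the discriminator returns logits rather than probabilities, since `BCEWithLogitsLoss` folds the sigmoid into the loss computation.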

Reference

If you make use of DGI in your research, please cite the following in your manuscript:

@inproceedings{velickovic2018deep,
  title={Deep Graph Infomax},
  author={Petar Veli{\v{c}}kovi{\'{c}} and William Fedus and William L. Hamilton and Pietro Li{\`{o}} and Yoshua Bengio and R Devon Hjelm},
  booktitle={International Conference on Learning Representations},
  year={2019},
  url={https://openreview.net/forum?id=rklz9iAcKQ},
}

License

MIT

Comments
  • Out of Memory on Pubmed Dataset

    I tried to run the released execute.py on Pubmed. However, it seems to take 19.25 GB of memory during backpropagation.

    Is this the correct behaviour? Is there any way to work around this problem and replicate the number reported in the paper?

    opened by Tiiiger 6
  • Mean accuracy on fixed representation

    Hi, great work! But I have a question about how to repeat the experiments. When computing the mean accuracy, you used a fixed representation and only repeated the logistic regression part. Should we instead repeat the whole pretext training + downstream task? Or is there a reference explaining this choice? Many thanks.

    opened by nihuo76 2
  • About the meaning of learned features

    Hi, I was wondering whether the learned representations tend to preserve each node's unique information or the information it shares with the graph. Maximizing mutual information between a patch and the summary vector is meant to capture shared information, yet the discriminator wants to distinguish the samples. So I am confused.

    opened by Fujiaoji 2
  • sigmoid function is missing in layers/discriminator.py

    It seems that the sigmoid function is missing in layers/discriminator.py (line 30). As explained in your paper, the logistic sigmoid nonlinearity is used to convert scores into probabilities of (h_i, s) being a positive example.

    opened by liangxun 1
  • Codes on reddit dataset

    Great work! I really enjoy reading the paper.

    However, will the code to replicate the reported performance on Reddit be released? If so, is there a planned schedule?

    Thank you!

    opened by Tiiiger 1
  • Question about PPI dataset

    Hi,

    Thank you for making the code available. I would like to ask about a remark you made in your paper about the PPI dataset. On page 8, in the paragraph "Inductive learning on multiple graphs", you noted that 40% of the nodes have all-zero feature vectors. However, the feature vectors loaded using GraphSAGE (http://snap.stanford.edu/graphsage/) are dense. Did you use a different set of feature vectors or a different PPI dataset?

    Thank you for your time! If I misunderstood something, please kindly point out my mistake.

    Edit: Sorry, I made a mistake when I checked the feature vectors. They were indeed 42% all zeros.

    opened by gear 0
  • Corruption function only on node features X not graph structure A?

    Hello, the paper says: "an explicit (stochastic) corruption function (\tilde{X}, \tilde{A}) = C(X, A)." However, in the code I can only find the corruption of node attributes: `idx = np.random.permutation(nb_nodes)` followed by `shuf_fts = features[:, idx, :]`.

    I cannot find any corruption of the graph structure. Why is that? Does it not affect the final result?

    thanks

    opened by zhengwang100 0
  • If I want to add features for my edges, can I simply replace the encoder network with a network that supports edge_attr?

    Hi Petar, I have read about your Deep Graph Infomax idea. But something really puzzles me: if I want to add features for my edges and still do unsupervised or self-supervised learning, can I simply replace the encoder and keep all the other functions, such as the corruption function or readout function, unchanged?

    opened by ezio0218 0
  • Why do we need to calculate for expectation before sum

    Hello, I've read your wonderful paper published at ICLR, and I'd like to ask you some questions. The two summation symbols in the objective function sum over the positive and negative samples and take the average. Why do you need to compute the expectation before summing?

    Thank you!
    opened by liujinxin1 0
  • Help!

    I saw that the DGI implementation on your GitHub only contains the Cora dataset.

    I was wondering whether you could kindly share your DGI implementation which contains Reddit and PPI dataset!

    Thank you very much!

    opened by jiangshunyu 3
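
On the corruption question raised above: in the transductive (Cora-style) setting, corruption amounts to shuffling node features across nodes while keeping the adjacency matrix fixed. A minimal sketch of that behaviour, with the batch dimension dropped and an illustrative function name:

```python
import numpy as np

def corrupt(features, seed=None):
    """Row-wise shuffle of the node feature matrix. The adjacency matrix is
    left untouched, so negative samples pair the real graph structure with
    mismatched node features."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(features.shape[0])
    return features[idx]
```

Because only the rows are permuted, every feature vector still appears exactly once; nodes are simply assigned each other's features.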
Owner
Petar Veličković, Staff Research Scientist