PyTorch implementation of "Simple and Deep Graph Convolutional Networks"

Overview

Simple and Deep Graph Convolutional Networks

This repository contains a PyTorch implementation of "Simple and Deep Graph Convolutional Networks" (https://arxiv.org/abs/2007.02133).
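
At its core, each GCNII layer combines the paper's two techniques: an initial residual connection to the first representation H^(0) and an identity mapping on the layer weights. A minimal sketch of one such layer in PyTorch (class and argument names are illustrative, not this repo's exact API):

```python
import math
import torch
import torch.nn as nn

class GCNIILayer(nn.Module):
    """Sketch of one GCNII layer:
    H^(l) = ReLU(((1 - alpha) * P @ H^(l-1) + alpha * H^(0)) @ ((1 - beta_l) * I + beta_l * W)),
    where P is the normalized adjacency and beta_l = log(lambda / l + 1)."""

    def __init__(self, dim, layer_idx, alpha=0.1, lam=0.5):
        super().__init__()
        self.alpha = alpha
        self.beta = math.log(lam / layer_idx + 1)  # identity-mapping strength decays with depth
        self.weight = nn.Parameter(torch.empty(dim, dim))
        nn.init.xavier_uniform_(self.weight)

    def forward(self, h, h0, adj):
        # adj: sparse normalized adjacency D^-1/2 (A + I) D^-1/2
        support = (1 - self.alpha) * torch.spmm(adj, h) + self.alpha * h0  # initial residual
        return torch.relu((1 - self.beta) * support + self.beta * support @ self.weight)  # identity mapping
```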

Dependencies

  • CUDA 10.1
  • python 3.6.9
  • pytorch 1.3.1
  • networkx 2.1
  • scikit-learn

Datasets

The data folder contains three benchmark datasets (Cora, Citeseer, Pubmed), and the newdata folder contains four datasets (Chameleon, Cornell, Texas, Wisconsin) from Geom-GCN. We use the same semi-supervised setting as GCN and the same full-supervised setting as Geom-GCN. PPI can be downloaded from GraphSAGE.

Results

Testing accuracy is summarized below.

| Dataset     | Depth | Accuracy (%) | Dataset    | Depth | Accuracy (%) |
|-------------|-------|--------------|------------|-------|--------------|
| Cora        | 64    | 85.5         | Cham       | 8     | 62.48        |
| Cite        | 32    | 73.4         | Corn       | 16    | 76.49        |
| Pubm        | 16    | 80.3         | Texa       | 32    | 77.84        |
| Cora (full) | 64    | 88.49        | Wisc       | 16    | 81.57        |
| Cite (full) | 64    | 77.13        | PPI        | 9     | 99.56        |
| Pubm (full) | 64    | 90.30        | ogbn-arxiv | 16    | 72.74        |

Usage

  • To replicate the semi-supervised results, run the following script:
sh semi.sh
  • To replicate the full-supervised results, run the following script:
sh full.sh
  • To replicate the inductive results on PPI, run the following script:
sh ppi.sh

Reference implementation

The PyG folder includes a simple PyTorch Geometric implementation of GCNII. Requirements: torch-geometric >= 1.5.0 and ogb >= 1.2.0.

  • Running examples
python cora.py
python arxiv.py
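
PyTorch Geometric (>= 1.5.0) exposes this propagation as torch_geometric.nn.GCN2Conv. A hedged sketch of how such a model can be assembled (layer count and hyperparameters are illustrative and not necessarily identical to cora.py):

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCN2Conv

class GCNII(torch.nn.Module):
    # A stack of GCN2Conv layers wrapped in linear input/output projections.
    def __init__(self, in_dim, hidden, out_dim, num_layers=64, alpha=0.1, theta=0.5):
        super().__init__()
        self.lin_in = torch.nn.Linear(in_dim, hidden)
        self.convs = torch.nn.ModuleList([
            GCN2Conv(hidden, alpha=alpha, theta=theta, layer=l + 1)
            for l in range(num_layers)])
        self.lin_out = torch.nn.Linear(hidden, out_dim)

    def forward(self, x, edge_index):
        x = x0 = F.relu(self.lin_in(x))  # x0 serves as the initial residual H^(0)
        for conv in self.convs:
            x = F.relu(conv(x, x0, edge_index))
        return self.lin_out(x)
```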

Citation

@inproceedings{chenWHDL2020gcnii,
  title     = {Simple and Deep Graph Convolutional Networks},
  author    = {Ming Chen and Zhewei Wei and Zengfeng Huang and Bolin Ding and Yaliang Li},
  year      = {2020},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
}
Comments
  • The Cora dataset may be wrong

    Hi authors, I have read your paper, which is quite interesting. Thank you for your great work.

    But I have a question about the split of the Cora dataset.

    I counted the number of nodes in train_mask, val_mask, and test_mask in https://github.com/chennnM/GCNII/blob/ca91f5686c4cd09cc1c6f98431a5d5b7e36acc92/process.py#L157, which are 1192, 796, and 497. Their sum (1192 + 796 + 497 = 2485) is not 2,708, the number of nodes reported in your paper.

    You can reproduce this with:

        print('train_mask is %s' % train_mask.numpy().sum())
        print('val_mask is %s' % val_mask.numpy().sum())
        print('test_mask is %s' % test_mask.numpy().sum())

    I don't understand why this happens. Could you please explain? Hoping for your response. Thanks!

    opened by chencsgit 3
  • Possible accuracy difference between the PyG and PyTorch implementations on Cora

    Hi @chennnM

    Thanks for sharing the GCNII code.

    I've found that there is an accuracy gap between the PyG implementation and the pure PyTorch implementation on the Cora dataset. See the table below for details.

    | Cora    | Layers | Accuracy (%) |
    |---------|--------|--------------|
    | PyTorch | 64     | 88.49        |
    | PyG     | 64     | 85.00        |

    Is there any parameter setting or code difference that might cause this accuracy gap?

    opened by mingloo 3
  • Questions about 100 runs

    Great work! We are reproducing the statistical results in Table 2 and want to know how the 100 runs mentioned in the article were designed for GCN and GCNII. Did you use random seeds with the fixed hyperparameters from Table 6, or some other method to select the 100 runs? (A generic seed-based sketch appears after the comments list.)

    opened by tpy1990wy 2
  • Question about Tables 2 and 3, i.e., results on the well-known datasets with the standard split

    The results seem to be very good, congrats. We have re-run the code and obtained similar results with the seed fixed to 42. We wish to cite this work and report the given results on these three standard datasets. Before that, there are some questions:

    1. Are the reported results in Table 2 obtained with a fixed seed, or with random seeds over many runs?
    2. If you chose the best model (with hyperparameters) based on the validation set for the experiments in Table 2, what is the setting for Table 3, e.g., a fixed seed or something else?

    thanks,

    opened by beckvision 2
  • Question about the OGB implementation

    Hey,

    I found that the OGB implementation only has the "initial connection" but no identity mapping.

    Furthermore, in the paper you propose an APPNP-like propagation,

        (1-\alpha) L H^{(\ell-1)} + \alpha X,

    but the implementation uses

        L H^{(\ell-1)} + \alpha X.

    Am I missing something, or is there a specific reason for this?

    opened by CongWeilin 0
  • RuntimeError: CUDA error: out of memory

    Hello, when I run python full-supervised.py, the following error occurs:

        cuda:0
        pretrained/673e5d3d5b814bfca489d053f0f3a92f.pt
        Traceback (most recent call last):
          File "full-supervised.py", line 117, in <module>
            acc_list.append(train(datastr,splitstr))
          File "full-supervised.py", line 73, in train
            features = features.to(device)
        RuntimeError: CUDA error: out of memory

    My environment is CUDA 10.0, torch 1.3.1+cu100, Python 3.6. Do you know how to solve it? (A generic CPU-fallback sketch appears after the comments list.)

    opened by tanjia123456 0
  • Questions about the conclusion

    Hi authors, I have read your paper recently, but I have some questions about the conclusions drawn in it. How does Theorem 1 lead to the conclusion that a node's convergence rate is related to its degree? Apart from the stationary vector π, the part of the formula that describes the convergence process seems to have nothing to do with degree. Could you please clarify? Thanks!

    opened by Tifal077 0
  • The results of Cora

    Hi! I have been reproducing your paper recently; it's an interesting paper. But I get 83.6% accuracy on Cora with your code, using the same parameters you provided. I am curious about the reason.

    I'm looking forward to your reply! Thank you!

    opened by sxu-yaokx 1
  • Question about Lemma 1 in "A.2. Proof of Theorem 1"

    The paper states that Lemma 1 is taken from (Chung, 2007). However, the closest thing I found in (Chung, 2007) is Lemma 3 (inequalities (51) and (52)), in which the upper bound contains (beta_k)^2/8, not (lambda_G)^2/2. I tried to derive Lemma 1 in the paper from Lemma 3 in (Chung, 2007) but have not succeeded.

    I would really appreciate it if you could provide a proof or point out any sources that lead to the proof of Lemma 1 in the paper. Thank you!

    (Chung, 2007): Four proofs for the Cheeger inequality and graph partition algorithms, ICCM 2007.

    opened by phucdoitoan 0
  • Identity mapping

    Hi, I want to test the effect of the identity mapping. I tried commenting out output = theta*torch.mm(support, self.weight)+(1-theta)*r and adding the line output = support. Now it's APPNP, right? But the result is the same as before, 85.7%. Why? (See the ablation sketch after the comments list.)

    opened by GNN-zl 0
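
Regarding the "Questions about 100 runs" comment above: a common protocol is to fix the hyperparameters and vary only the random seed across runs, reporting mean and standard deviation. A generic sketch of that protocol (the authors have not confirmed this is exactly their setup):

```python
import numpy as np
import torch

def repeated_runs(train_and_eval, n_runs=100):
    # train_and_eval: any function that trains a fresh model with fixed
    # hyperparameters and returns its test accuracy.
    accs = []
    for seed in range(n_runs):
        torch.manual_seed(seed)  # only the seed varies between runs
        np.random.seed(seed)
        accs.append(train_and_eval())
    return float(np.mean(accs)), float(np.std(accs))
```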
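
Regarding the out-of-memory comment above: a generic workaround, independent of this repo, is to fall back to CPU when the GPU cannot hold the full-graph tensors (hypothetical helper, not part of the codebase). Checking nvidia-smi for other processes occupying the GPU is also worthwhile.

```python
import torch

def to_device_with_fallback(tensor):
    # Try the GPU first; on an out-of-memory error, clear the cache and use CPU.
    if torch.cuda.is_available():
        try:
            return tensor.to('cuda')
        except RuntimeError:  # e.g. "CUDA error: out of memory"
            torch.cuda.empty_cache()
    return tensor.to('cpu')
```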
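
Regarding the "Identity mapping" comment above: replacing the quoted line with output = support drops the per-layer weight transform, so each layer performs propagation only and the stack reduces to APPNP-style smoothing (plus the input and output linear layers). A self-contained sketch of the two variants (variable names follow the quoted line; this is an illustration, not the repo's file):

```python
import torch

def gcnii_output(support, weight, theta):
    # GCNII identity mapping: theta * (support @ W) + (1 - theta) * support
    return theta * torch.mm(support, weight) + (1 - theta) * support

def appnp_output(support):
    # The ablation from the question: propagation only, no per-layer transform
    return support
```

Since beta_l = log(lambda/l + 1) is small in deep layers, the transformed term already contributes little there, which may explain why the Cora accuracy barely changes.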