PyTorch Implementation of the Path Attention Graph Network

Overview

SPAGAN in PyTorch

This is a PyTorch implementation of the paper "SPAGAN: Shortest Path Graph Attention Network" (IJCAI 2019).

Prerequisites

We recommend creating a new conda environment to run the code.

PyTorch

Version >= 1.0.0

PyTorch-geometric

We use torch_geometric, torch_scatter, and torch_sparse as the backbone to implement the path attention mechanism. Please follow the official installation instructions to install them.
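To illustrate the idea these packages accelerate, here is a dependency-free sketch of path attention: raw scores over a node's shortest paths are softmax-normalized and used to mix path features. This is only a schematic with invented names (`path_attention`, `path_feats`, `path_scores`), not the repo's implementation, which runs on GPU via torch_scatter/torch_sparse.

```python
import math

# Schematic of path attention: a node attends over the shortest paths rooted
# at it. Raw scores are softmax-normalized per node, then used to take a
# weighted mix of the path feature vectors. Pure-Python illustration only.

def path_attention(path_feats, path_scores):
    """path_feats: list of feature vectors, one per path from a node.
    path_scores: raw attention logits, one per path."""
    m = max(path_scores)
    exps = [math.exp(s - m) for s in path_scores]   # numerically stable softmax
    z = sum(exps)
    alphas = [e / z for e in exps]
    dim = len(path_feats[0])
    return [sum(a * f[d] for a, f in zip(alphas, path_feats)) for d in range(dim)]

# Two paths with equal scores contribute equally:
print(path_attention([[1.0, 0.0], [0.0, 1.0]], [0.3, 0.3]))  # [0.5, 0.5]
```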

networkx

We use networkx to load the graph dataset.

graph-tool

We use graph-tool for fast all-pairs shortest path (APSP) computation. Please follow the official website to install it.
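For reference, a minimal BFS-based APSP on an unweighted graph looks like the sketch below. This is only to show what APSP produces; graph-tool computes the same thing in optimized C++ and is what the repo actually uses.

```python
from collections import deque

# BFS-based all-pairs shortest paths on an unweighted graph given as an
# adjacency list {node: [neighbors]}. Returns, for each source node, a dict
# mapping every reachable node to its hop distance.

def apsp(adj):
    dist = {}
    for src in adj:
        d = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in d:
                    d[v] = d[u] + 1
                    q.append(v)
        dist[src] = d
    return dist

# Path graph 0 - 1 - 2:
adj = {0: [1], 1: [0, 2], 2: [1]}
print(apsp(adj)[0])  # {0: 0, 1: 1, 2: 2}
```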

Run the code

Activate the corresponding conda environment if you created one, then run:

python train_spgat.py
Comments
  • error:

    self._PropertyMap__map[k] = v
    Boost.Python.ArgumentError: Python argument types in
        EdgePropertyMap<double>.__setitem__(EdgePropertyMap<double>, tuple, numpy.float32)
    did not match C++ signature:
        __setitem__(graph_tool::PythonPropertyMap<boost::checked_vector_property_map<double, boost::adj_edge_index_property_map<unsigned long> > > {lvalue}, graph_tool::PythonEdge<boost::filt_graph<boost::undirected_adaptor<boost::adj_list<unsigned long> >, graph_tool::detail::MaskFilter<boost::unchecked_vector_property_map<unsigned char, boost::adj_edge_index_property_map<unsigned long> > >, graph_tool::detail::MaskFilter<boost::unchecked_vector_property_map<unsigned char, boost::typed_identity_property_map<unsigned long> > > > const>, double)

    During handling of the above exception, another exception occurred:

    self._PropertyMap__map[k] = self._PropertyMap__convert(v)
    Boost.Python.ArgumentError: Python argument types in
        EdgePropertyMap<double>.__setitem__(EdgePropertyMap<double>, tuple, float)
    did not match C++ signature:
        __setitem__(graph_tool::PythonPropertyMap<boost::checked_vector_property_map<double, boost::adj_edge_index_property_map<unsigned long> > > {lvalue}, graph_tool::PythonEdge<boost::filt_graph<boost::undirected_adaptor<boost::adj_list<unsigned long> >, graph_tool::detail::MaskFilter<boost::unchecked_vector_property_map<unsigned char, boost::adj_edge_index_property_map<unsigned long> > >, graph_tool::detail::MaskFilter<boost::unchecked_vector_property_map<unsigned char, boost::typed_identity_property_map<unsigned long> > > > const>, double)

    @ihollywhy Could you help me with this? Thank you so much! I'm not sure whether it's a version problem again, but information about the exact package versions you used would be helpful. Thanks again!

    Originally posted by @scarletocean in https://github.com/ihollywhy/SPAGAN/issues/4#issuecomment-754764757

    opened by gllspeed 2
  • Question about the size of receptive field of SPAGAN

    As I understand it, if SPAGAN has two layers and the maximum value of c is three for the first layer and two for the last layer, the receptive field size of this SPAGAN is 5 = 3 + 2. However, according to Section 5.3 of your paper, a 3-layer GAT has the same receptive field size as SPAGAN, which would mean SPAGAN's receptive field size is 3.

    So, how is the receptive field size calculated? Why is it 3 rather than 5?

    opened by BenchengY 2
  • I have a question about the paper!

    Thanks for reading. GAT stacks layers to obtain a larger receptive field, but SPAGAN obviously does not need to do that. So why does SPAGAN still stack layers? I would appreciate it if someone could reply.

    opened by simonZlz 1
  • Error while running test

    Traceback (most recent call last):
        e_rowsum = spmm(edge, edge_e, N, torch.ones(size=(N,1)).cuda())  # e_rowsum: N x 1
    TypeError: spmm() missing 1 required positional argument: 'matrix'

    opened by scarletocean 4
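The spmm TypeError above is typically a torch_sparse version mismatch: newer torch_sparse releases expect spmm(index, value, m, n, matrix) with both output dimensions, while older GAT-style code passes only four arguments, so a likely fix (assuming the attention matrix is N x N) is adding the second dimension argument. What that call computes here is the per-row sum of a sparse attention matrix, sketched in plain Python below for illustration (function and variable names are invented; the repo uses torch_sparse's kernels).

```python
# Sketch of what spmm(edge, edge_e, N, N, ones) computes in this context:
# row sums of a sparse N x N attention matrix given in COO form.

def sparse_row_sums(edge, values, n):
    """edge: list of (row, col) index pairs; values: matching nonzero
    entries; n: number of rows. Returns the sum of each row."""
    rowsum = [0.0] * n
    for (r, _c), v in zip(edge, values):
        rowsum[r] += v
    return rowsum

# A 3-node example: edges (0,1), (0,2), (1,2) with attention weights.
edges = [(0, 1), (0, 2), (1, 2)]
vals = [0.5, 0.5, 1.0]
print(sparse_row_sums(edges, vals, 3))  # [1.0, 1.0, 0.0]
```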
  • test on Citeseer dataset

    Hi, @ihollywhy ,

    I see you mentioned in the paper that on the Citeseer dataset the learning rate and weight decay need to be adjusted to reach a better result. However, with your parameter settings the accuracy is only 18%. Could you tell me how to fix this?

    opened by Zoe-18 0
Owner
Yang Yiding
Ph.D. student at Stevens Institute of Technology