PyTorch implementation of spectral graph ConvNets, NIPS’16

Overview

Graph ConvNets in PyTorch

October 15, 2017

Xavier Bresson

http://www.ntu.edu.sg/home/xbresson
https://github.com/xbresson
https://twitter.com/xbresson

Description

Prototype implementation in PyTorch of the NIPS'16 paper:
Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering
M. Defferrard, X. Bresson, P. Vandergheynst
Advances in Neural Information Processing Systems (NIPS), pp. 3844-3852, 2016
ArXiv preprint: arXiv:1606.09375
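
At its core, the fast localized spectral filtering of the paper expands each filter in Chebyshev polynomials of the rescaled graph Laplacian. As a one-formula summary (standard notation from the paper, stated here for context):

$$y = \sum_{k=0}^{K-1} \theta_k \, T_k(\tilde{L}) \, x, \qquad \tilde{L} = \frac{2}{\lambda_{\max}} L - I,$$

where $T_0(\tilde{L}) = I$, $T_1(\tilde{L}) = \tilde{L}$, $T_k(\tilde{L}) = 2\tilde{L}\,T_{k-1}(\tilde{L}) - T_{k-2}(\tilde{L})$, and the $\theta_k$ are the learnable filter coefficients; each filter is K-localized on the graph and can be applied in $O(K|E|)$ operations.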

Code objective

The code provides a simple example of graph ConvNets for the MNIST classification task.
The graph is an 8-nearest-neighbor graph of a 2D grid.
The signals on the graph are the MNIST images, vectorized as $28^2 \times 1$ vectors.
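
For concreteness, here is a minimal sketch of how such an 8-nearest-neighbor grid graph and its Laplacian can be built (assuming numpy, scipy and scikit-learn; the repo's own grid-graph utilities may differ in details such as the edge weighting):

import numpy as np
import scipy.sparse as sp
from sklearn.neighbors import kneighbors_graph

grid_side = 28  # MNIST images are 28 x 28 pixels

# Coordinates of the pixels on the 2D grid.
xx, yy = np.meshgrid(np.arange(grid_side), np.arange(grid_side))
coords = np.stack([xx.ravel(), yy.ravel()], axis=1).astype(np.float64)  # 784 x 2

# 8-nearest-neighbor graph, with Gaussian weights on the distances,
# symmetrized so the adjacency matrix is undirected.
dist = kneighbors_graph(coords, n_neighbors=8, mode='distance')
dist.data = np.exp(-dist.data ** 2 / (2.0 * dist.data.mean() ** 2))
A = 0.5 * (dist + dist.T)

# Combinatorial graph Laplacian L = D - A.
degrees = np.asarray(A.sum(axis=1)).ravel()
L = sp.diags(degrees) - A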

Installation

git clone https://github.com/xbresson/graph_convnets_pytorch.git
cd graph_convnets_pytorch
pip install -r requirements.txt # installation for python 3.6.2
python check_install.py
jupyter notebook # run the 2 notebooks

Results

GPU Quadro M4000

  • Standard ConvNets: 01_standard_convnet_lenet5_mnist_pytorch.ipynb, accuracy = 99.31%, speed = 6.9 sec/epoch.
  • Graph ConvNets: 02_graph_convnet_lenet5_mnist_pytorch.ipynb, accuracy = 99.19%, speed = 100.8 sec/epoch.

Note

PyTorch has not yet implemented torch.mm(sparse, dense) for autograd variables: https://github.com/pytorch/pytorch/issues/2389. It will certainly be implemented, but in the meantime I defined a new autograd function for sparse variables, called "my_sparse_mm", by subclassing torch.autograd.Function and implementing the forward and backward passes.

class my_sparse_mm(torch.autograd.Function):
    """
    Autograd function computing y = W x for a SPARSE matrix W,
    implemented by subclassing torch.autograd.Function and
    defining the forward and backward passes.
    """

    def forward(self, W, x):  # W is SPARSE
        self.save_for_backward(W, x)
        y = torch.mm(W, x)
        return y

    def backward(self, grad_output):
        W, x = self.saved_tensors
        grad_input = grad_output.clone()
        # dL/dW = dL/dy . x^T   and   dL/dx = W^T . dL/dy
        grad_input_dL_dW = torch.mm(grad_input, x.t())
        grad_input_dL_dx = torch.mm(W.t(), grad_input)
        return grad_input_dL_dW, grad_input_dL_dx
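
In PyTorch 0.4 and later, custom autograd functions are written with static methods and a context object instead of instance state. A minimal sketch of the equivalent modern form (an adaptation for newer PyTorch versions, not part of the original code):

class MySparseMM(torch.autograd.Function):
    @staticmethod
    def forward(ctx, W, x):  # W is SPARSE
        ctx.save_for_backward(W, x)
        return torch.mm(W, x)

    @staticmethod
    def backward(ctx, grad_output):
        W, x = ctx.saved_tensors
        grad_W = torch.mm(grad_output, x.t())  # dL/dW = dL/dy . x^T
        grad_x = torch.mm(W.t(), grad_output)  # dL/dx = W^T . dL/dy
        return grad_W, grad_x

It is invoked as y = MySparseMM.apply(W, x) rather than by instantiating the class.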

When to use this algorithm?

Use it for any problem that can be cast as analyzing a set of signals living on a fixed graph, when you want to apply ConvNets to that analysis.




Comments
  • loss_train=loss.data[0] ERROR

    I am getting the following error, can you please help?

        File "full_gcn.py", line 365, in
            loss_train = loss.data[0]
        IndexError: invalid index of a 0-dim tensor. Use tensor.item() to convert a 0-dim tensor to a Python number

    opened by rishotics 0
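    As the error message itself suggests, a likely one-line fix for PyTorch 0.4 and later (a sketch, not a reply from the original thread):

        loss_train = loss.item()  # replaces loss.data[0] for 0-dim loss tensors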
  • What is the need of the linear layer after the Chebyshev multiplications?

    Hello @xbresson,

    Thank you for your work and for making a PyTorch version of GCN available.

    I have a question related to the graph_conv_cheby(self, x, cl, L, lmax, Fout, K) function (https://github.com/xbresson/spectral_graph_convnets/blob/master/02_graph_convnet_lenet5_mnist_pytorch.ipynb):

    1. What is the need of the linear layer x = cl(x) after the Chebyshev multiplications?
    2. What if we don't use the linear layer and keep only the result of the Chebyshev multiplication, as follows?
    
    if K > 1:
        x1 = my_sparse_mm()(L, x0)              # V x Fin*B
        x = torch.cat((x, x1.unsqueeze(0)), 0)  # 2 x V x Fin*B
    for k in range(2, K):
        x2 = 2 * my_sparse_mm()(L, x1) - x0
        x = torch.cat((x, x2.unsqueeze(0)), 0)  # M x Fin*B
        x0, x1 = x1, x2

    x = x.view([K, V, Fin, B])           # K x V x Fin x B
    x = x.permute(3,1,2,0).contiguous()  # B x V x Fin x K
    x = x.view([B*V, Fin*K])
    

    Thank you a lot for your answer.

    opened by pinkfloyd06 3
  • Implementation ambiguity

    Hello @xbresson,

    Let me first thank you for this work.

    I went through your implementation and have some questions:

    1. You set coarsening_levels = 4, but only L[0] and L[2] are used:

    x = self.graph_conv_cheby(x, self.cl1, L[0], lmax[0], self.CL1_F, self.CL1_K) and x = self.graph_conv_cheby(x, self.cl2, L[2], lmax[2], self.CL2_F, self.CL2_K). What about L[1], L[3] and L[4]?

    2. Why are lmax[0] and lmax[2] recalculated in graph_conv_cheby, even though lmax is an input parameter of this function?

    3. Do you mind explaining why you rescale the Laplacian eigenvalues to [-1, 1] in rescale_L(L, lmax)?

    4. concat(x, x_) is not used at all in graph_conv_cheby():

        def concat(x, x_):
            x_ = x_.unsqueeze(0)          # 1 x V x Fin*B
            return torch.cat((x, x_), 0)  # K x V x Fin*B

    5. What if K = 1 in graph_conv_cheby()? I noticed that K should be at least 2. How can I use it with K = 1?

    Thank you for your consideration.

    opened by pinkfloyd06 3
  • How can I extend your code to handle an irregular graph (not a grid) with a variable number of edges?

    Hello,

    I would like to know whether your code extends to an irregular graph structure, where the number of edges is variable (rather than number_edges = 8) and the graph is not a grid but an irregular graph with a variable number of nodes and edges (in your MNIST example you set grid_side = 28). Thank you

    opened by pinkfloyd06 1
  • Something wrong with rescale_L

    To rescale the Laplacian eigenvalues to [-1, 1], the formula should be $L' = 2L/\lambda_{\max} - I$. I think line 33 in coarsening.py should read as follows:

        L /= lmax / 2
    
    opened by tadpole 1
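
For reference, a minimal sketch of the Laplacian rescaling discussed in the comments above, assuming a scipy.sparse Laplacian (the details in the repo's coarsening.py may differ):

import scipy.sparse as sp

def rescale_L(L, lmax):
    # Map the spectrum of L from [0, lmax] to [-1, 1], the natural
    # domain of the Chebyshev polynomials: L' = 2 L / lmax - I.
    I = sp.identity(L.shape[0], format=L.format, dtype=L.dtype)
    return (2.0 / lmax) * L - I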