A Temporal Extension Library for PyTorch Geometric

Overview



Documentation | External Resources | Datasets

PyTorch Geometric Temporal is a temporal (dynamic) extension library for PyTorch Geometric.

The library consists of various dynamic and temporal geometric deep learning, embedding, and spatio-temporal regression methods from a variety of published research papers. In addition, it provides an easy-to-use dataset loader and iterator for dynamic and temporal graphs, with GPU support. It also comes with a number of benchmark datasets with temporal and dynamic graphs (you can also create your own datasets).


Citing

If you find PyTorch Geometric Temporal and the new datasets useful in your research, please consider adding the following citation:

@misc{pytorch_geometric_temporal,
      author = {Rozemberczki, Benedek and Scherer, Paul},
      title = {{PyTorch Geometric Temporal}},
      year = {2020},
      publisher = {GitHub},
      journal = {GitHub repository},
      howpublished = {\url{https://github.com/benedekrozemberczki/pytorch_geometric_temporal}},
}

A simple example

PyTorch Geometric Temporal makes implementing Dynamic and Temporal Graph Neural Networks quite easy - see the accompanying tutorial. For example, this is all it takes to implement a recurrent graph convolutional network with two consecutive graph convolutional GRU cells and a linear layer:

import torch
import torch.nn.functional as F
from torch_geometric_temporal.nn.recurrent import GConvGRU

class RecurrentGCN(torch.nn.Module):

    def __init__(self, node_features, num_classes):
        super(RecurrentGCN, self).__init__()
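        # Two stacked graph convolutional GRU cells; the third argument is the Chebyshev filter size K.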
        self.recurrent_1 = GConvGRU(node_features, 32, 5)
        self.recurrent_2 = GConvGRU(32, 16, 5)
        self.linear = torch.nn.Linear(16, num_classes)

    def forward(self, x, edge_index, edge_weight):
        x = self.recurrent_1(x, edge_index, edge_weight)
        x = F.relu(x)
        x = F.dropout(x, training=self.training)
        x = self.recurrent_2(x, edge_index, edge_weight)
        x = F.relu(x)
        x = F.dropout(x, training=self.training)
        x = self.linear(x)
        return F.log_softmax(x, dim=1)
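
A minimal training sketch (an assumption, not part of the library's documented API surface): train_dataset stands for any iterable of snapshot objects exposing x, edge_index, edge_weight, and integer node labels y, the way the library's temporal signal iterators do, and the dimensions are hypothetical:

import torch
import torch.nn.functional as F

# Hypothetical setup: 4 input features per node, 10 classes; train_dataset
# is assumed to yield snapshots with x, edge_index, edge_weight and labels y.
model = RecurrentGCN(node_features=4, num_classes=10)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

model.train()
for epoch in range(50):
    for snapshot in train_dataset:
        y_hat = model(snapshot.x, snapshot.edge_index, snapshot.edge_weight)
        loss = F.nll_loss(y_hat, snapshot.y)  # pairs with the log_softmax output
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()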

Methods Included

In detail, the following temporal graph neural networks have been implemented.

Discrete Recurrent Graph Convolutions

Temporal Graph Convolutions

Auxiliary Graph Convolutions


Head over to our documentation to find out more about installation, the creation of datasets, and the full list of implemented methods and available datasets. For a quick start, check out the examples in the examples/ directory.

If you notice anything unexpected, please open an issue. If you are missing a specific method, feel free to open a feature request.


Installation

PyTorch 1.7.0

To install the binaries for PyTorch 1.7.0, simply run

$ pip install torch-scatter==latest+${CUDA} -f https://pytorch-geometric.com/whl/torch-1.7.0.html
$ pip install torch-sparse==latest+${CUDA} -f https://pytorch-geometric.com/whl/torch-1.7.0.html
$ pip install torch-cluster==latest+${CUDA} -f https://pytorch-geometric.com/whl/torch-1.7.0.html
$ pip install torch-spline-conv==latest+${CUDA} -f https://pytorch-geometric.com/whl/torch-1.7.0.html
$ pip install torch-geometric
$ pip install torch-geometric-temporal

where ${CUDA} should be replaced by either cpu, cu92, cu101, cu102 or cu110 depending on your PyTorch installation.

            cpu    cu92   cu101   cu102   cu110
Linux       ✅      ✅      ✅      ✅      ✅
Windows     ✅      ❌      ✅      ✅      ✅
macOS       ✅      ❌      ❌      ❌      ❌
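
For example, a CPU-only installation of the first binary would be

$ pip install torch-scatter==latest+cpu -f https://pytorch-geometric.com/whl/torch-1.7.0.html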

PyTorch 1.6.0

To install the binaries for PyTorch 1.6.0, simply run

$ pip install torch-scatter==latest+${CUDA} -f https://pytorch-geometric.com/whl/torch-1.6.0.html
$ pip install torch-sparse==latest+${CUDA} -f https://pytorch-geometric.com/whl/torch-1.6.0.html
$ pip install torch-cluster==latest+${CUDA} -f https://pytorch-geometric.com/whl/torch-1.6.0.html
$ pip install torch-spline-conv==latest+${CUDA} -f https://pytorch-geometric.com/whl/torch-1.6.0.html
$ pip install torch-geometric
$ pip install torch-geometric-temporal

where ${CUDA} should be replaced by either cpu, cu92, cu101 or cu102 depending on your PyTorch installation.

            cpu    cu92   cu101   cu102
Linux       ✅      ✅      ✅      ✅
Windows     ✅      ❌      ✅      ✅
macOS       ✅      ❌      ❌      ❌

Running tests

$ python setup.py test

Running examples

$ cd examples
$ python gconvgru_example.py

License

Issues
  • temporal graph classification

    I'm wondering if there is an example of classification carried out for data represented as temporal graphs?

    To be precise: in my use case I have, for instance, a sequence of 100 graphs representing together a single entity/object. Next, I have 200 entities. Is there an easy way to input those 200 data instances (with ground truth classification labels) to one of the numerous deep learning architectures implemented in this repo to obtain instance representations useful in the classification task? Or simply obtain predicted labels? Is there an example for this, or did I miss it? (Apologies if I did.)

    Thank you for your help in advance.

    opened by krzysztoffiok 25
  • Question about example code

    Hi, I've been trying to reuse the example code, MPNNLSTM, GCLSTM, and GConvLSTM, with my own dataset. My MSE errors are around 0.2, but the models do not return correct target values. When I run the examples the MSE is around 1. What does the MSE represent? How should I structure my data for these models? I have a graph with 20 nodes, 2 features per node, and 32 edges. The dataset has 360 separate graphs. My dataset looks similar to the one used in MPNNLSTM.

    opened by smatovski 14
  • [Bug?] the way to set GCN.weight in EvolveGCN.

    Hi @benedekrozemberczki, I found that the gradient function may be lost in the implementation of EvolveGCN. The reason for this should be the following line of code: https://github.com/benedekrozemberczki/pytorch_geometric_temporal/blob/bd6fac946378507a729b5723e9ace46a9dc0cbe1/torch_geometric_temporal/nn/recurrent/evolvegcno.py#L69 Here is a demo that tries to verify it:

    import torch
    
    torch.manual_seed(123)
    
    
    class MyModel(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.toy_model = ToyModel(weight=True)
    
        def forward(self, x):
            W = self.toy_model.weight
            self.toy_model.weight = torch.nn.parameter.Parameter(W)
            return self.toy_model(x)
    
    
    class MyModelParam(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.W = torch.nn.parameter.Parameter(torch.Tensor(2, 3).fill_(0.5))
            self.toy_model = ToyModel(weight=False)
    
        def forward(self, x):
            W = self.W
            return self.toy_model(x, weight=W)
    
    
    class ToyModel(torch.nn.Module):
        def __init__(self, weight=True):
            super().__init__()
            if weight:
                self.weight = torch.nn.parameter.Parameter(torch.Tensor(2, 3).fill_(0.5))
            else:
                self.register_parameter('weight', None)
            # self.func = torch.nn.parameter.Parameter(torch.Tensor(2, 2).fill_(0.1))
    
        def forward(self, x, weight=None):
            if weight is not None:
                w = weight
            else:
                w = self.weight
            # x = self.func * x
            x = torch.matmul(x, w)
            return x
    
    
    x = torch.Tensor(1, 2).fill_(0.8)
    y = torch.Tensor(1, 3).fill_(0.7)
    print("################## Test 1: reset GCN.weight by nn.Parameter()##################")
    model = MyModel()
    print("toyModel weight before:")
    print(model.toy_model.weight)
    optim = torch.optim.SGD(model.parameters(), lr=1e-2)
    
    # print("x: {}".format(x))
    # print("y: {}".format(y))
    prediction = model(x)
    loss = (prediction - y).sum()
    loss.backward()
    optim.step()
    print("toyModel weight after:")
    print(model.toy_model.weight)
    
    ########################## Test 2
    print("################## Test 2: pass weight as a parameter during forward##################")
    model_param = MyModelParam()
    print("model param weight before:")
    print(model_param.W)
    optim_param = torch.optim.SGD(model_param.parameters(), lr=1e-2)
    x_param = torch.Tensor(1, 2).fill_(0.8)
    y_param = torch.Tensor(1, 3).fill_(0.7)
    # print("x_param: {}".format(x_param))
    # print("y_param: {}".format(y_param))
    prediction_param = model_param(x_param)
    loss_param = (prediction_param - y_param).sum()
    loss_param.backward()
    optim_param.step()
    print("model param weight after:")
    print(model_param.W)
    

    The result is:

    ################## Test 1: reset GCN.weight by nn.Parameter()##################
    toyModel weight before:
    Parameter containing:
    tensor([[0.5000, 0.5000, 0.5000],
            [0.5000, 0.5000, 0.5000]], requires_grad=True)
    toyModel weight after:
    Parameter containing:
    tensor([[0.5000, 0.5000, 0.5000],
            [0.5000, 0.5000, 0.5000]], requires_grad=True)
    ################## Test 2: pass weight as a parameter during forward##################
    model param weight before:
    Parameter containing:
    tensor([[0.5000, 0.5000, 0.5000],
            [0.5000, 0.5000, 0.5000]], requires_grad=True)
    model param weight after:
    Parameter containing:
    tensor([[0.4920, 0.4920, 0.4920],
            [0.4920, 0.4920, 0.4920]], requires_grad=True)
    
    Process finished with exit code 0
    

    We can see that if we reset model.weight in each batch iteration via nn.parameter.Parameter(), the weight is not updated.
    One way to fix this would be to pass a new weight during every forward pass, but PyG does not seem to provide the corresponding interface.

    opened by maqy1995 10
  • Feature request for Graph-Structure-Learning GNN for temporal Graph

    Is there a plan to include graph-structure-learning graph neural network models for temporal graphs? A few models with GitHub repos include MTGNN (https://github.com/nnzhan/MTGNN) and Graph-WaveNet (https://github.com/nnzhan/Graph-WaveNet).

    opened by seifer08ms 7
  • Putting multiple graphs in one StaticGraphTemporalSignal iterator

    I'm dealing with a basketball player trajectory forecasting dataset; it is both temporal and spatial, with the shape [number of plays, number of timesteps, player, position]. Each play is composed of several timesteps, so in total I would have number of plays * number of timesteps graphs.

    From reading the documentation and source code it seems that StaticGraphTemporalSignal accepts multiple graphs, but I'm not sure how to do that. Moreover, I cannot create an instance of this iterator without passing target labels, which is not what I aim for in this project; unless I pass the target labels as the features of the next step in the prediction.

    I'm not sure if my question is clear; I would be glad to elaborate. I require help :)
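
    A minimal sketch of the next-step-as-target idea above, assuming the signal module's StaticGraphTemporalSignal constructor takes edge_index, edge_weight, and per-timestep feature and target lists (one iterator per play; all shapes here are hypothetical):

    import numpy as np
    from torch_geometric_temporal.signal import StaticGraphTemporalSignal

    # Hypothetical play: T timesteps, N players, 2D positions.
    T, N = 100, 10
    positions = np.random.rand(T, N, 2)
    # Fully connected topology shared by every timestep of the play.
    edge_index = np.array([[i, j] for i in range(N) for j in range(N) if i != j]).T
    edge_weight = np.ones(edge_index.shape[1])

    # Positions at step t are the features, positions at t + 1 the targets.
    features = [positions[t] for t in range(T - 1)]
    targets = [positions[t + 1] for t in range(T - 1)]

    play_signal = StaticGraphTemporalSignal(edge_index, edge_weight, features, targets)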

    opened by omarsinnno 7
  • Not all tests pass on GPU environment

    Hi all, just thought I would let you know that not all of your tests pass when running on a machine with a GPU.

    I was looking at the MTGNN test, which wasn't working with the existing code and started working once I set the device to CPU:

    
    # device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    device = torch.device("cpu")
    

    =========================================================================== short test summary info ============================================================================
    FAILED test/convolutional_test.py::test_astgcn - RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
    FAILED test/convolutional_test.py::test_mstgcn - RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
    FAILED test/convolutional_test.py::test_gman - RuntimeError: Expected object of device type cuda but got device type cpu for argument #1 'self' in call to _thnn_conv2d_forward
    =================================================================== 3 failed, 33 passed in 67.57s (0:01:07) ====================================================================

    opened by attibalazs 7
  • Can I use mini-batch in DCRNN?

    In the example given in the documentation, you use DCRNN to process data with a format like [node_num, node_feature_dim]. I understand that, but the problem I want to solve now is a little more complicated.

    I want to train on a batch of graphs which have the same edge indices but different node features. Imagine a bunch of nodes interacting in a graph through a fixed topology, where the features of these nodes change because of the interaction. By sampling over time, I can get many graphs which have the same topology but different node features.

    Now I want to compute the node features at the next timestep using their current features and the topology. The question is how to map the node features of many graphs (which have the same topology) in one forward propagation. I think this might be solved by mini-batching, but I can't find anything about that in the documentation.

    I have tried to make the data format like [batch, node_num, node_features] but it doesn't work. The error is like this:

    RuntimeError: Tensors must have same number of dimensions: got 3 and 2

    So, is there any way to solve this? Thank you!
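
    For reference, a common PyG-style workaround (sketched here as an assumption, not an interface the library documents) is to merge graphs that share one edge_index into a single disjoint graph by flattening the node dimension and offsetting each copy of the edge list:

    import torch

    def batch_same_topology(x, edge_index):
        # x: [batch_size, num_nodes, num_features]; edge_index: [2, num_edges].
        batch_size, num_nodes, _ = x.shape
        num_edges = edge_index.size(1)
        # Shift the i-th copy of the edge list by i * num_nodes node indices.
        offsets = torch.arange(batch_size).repeat_interleave(num_edges) * num_nodes
        batched_edge_index = edge_index.repeat(1, batch_size) + offsets
        # Flatten into one big graph with batch_size * num_nodes nodes.
        return x.reshape(batch_size * num_nodes, -1), batched_edge_index

    The model then sees a 2-dimensional [batch_size * num_nodes, num_features] feature tensor, which matches the input shape the recurrent layers expect.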

    opened by 3riccc 7
  • Combining multiple StaticGraphTemporalSignal objects into StaticGraphTemporalSignalBatch objects

    Hi,

    I've created a Dataset object which currently returns a single sequence (StaticGraphTemporalSignal) from a directory of many sequences. When feeding this to a DataLoader, I receive lists of length batch_size instead of batched objects. Is there a straightforward procedure for combining multiple StaticGraphTemporalSignal objects into a StaticGraphTemporalSignalBatch object? I have not found any examples of StaticGraphTemporalSignalBatch in use.

    Cheers.

    opened by petergroth 7
  • Evolve GCN H example

    Hello, I have a question regarding the example you provided. As per my understanding, the example shows how to use EvolveGCN-H. I am new to temporal graph neural networks, and what I got from the EvolveGCN paper is that at each timestep the model will have a different snapshot of the graph, but in the example only a static graph is given as input to the model. Could you please advise and help on how it is exactly working?

    opened by Fatima-200159617 6
  • How to process METR-LA and PEMS Bay datasets?

    The feature field of METRLADatasetLoader and PemsBayDatasetLoader has 3 dimensions, while other data loaders like ChickenpoxDatasetLoader only have two. How should batched input be processed for the models in this case? When I use DCRNN as an encoder, I get the following error: "Tensors must have same number of dimensions: got 3 and 2."

    opened by JiapengWu 6
  • Number of channels does not change model performance substantially

    Dear Benedek (and whoever else may read this and have an idea what's going wrong),

    I have been trying to use your implementation of the A3T-GCN model for traffic prediction, as in the original paper, for several months now. Although there is a certain success, there are some questions I was not able to answer through analysis of the code and its execution. Therefore I hope you might be able to give me a hint on what I'm missing. The main concern is as follows:

    In addition to the A3T-GCN I also use your GMAN implementation. The GMAN model outperforms the A3T-GCN model consistently and significantly. So far so good; it might just be better suited to our data. However, there is one thing I really don't get. In

    https://github.com/benedekrozemberczki/pytorch_geometric_temporal/blob/b8fab67b5745ad201c18c6403430a14f4cb5db94/examples/recurrent/a3tgcn_example.py#L19

    you assign a number of out channels larger than the number of target values to the A3T-GCN. It is perfectly clear to me why this makes sense: it increases the number of trainable parameters and allows for different filters for different situations in the input data. If the number of output channels is set too low, the number of trainable parameters drops below 1k, which obviously hurts the representational capabilities of the network. Thus, I designed our own model as follows:

    self.a3tgcn = A3TGCN(in_channels=channels, out_channels=projection_factor * periods_out, periods=periods_in, improved=improved, cached=cached, add_self_loops=add_self_loops)

    self.preprocessing_layer = Identity()

    self.postprocessing_layer = Linear(projection_factor * periods_out, periods_out)

    As you can see, the "preprocessing layer" is only a placeholder; I tried some stuff there but to no avail. improved, cached and add_self_loops are left at their default values; only the possibility to change them is included. The number of in_channels is governed by the number of features per timestep and node; it's either 1, 2 or 4, mostly 2. Four periods are used as input and four shall be returned. The projection factor is then used to control the actual size of the model. The postprocessing_layer subsequently projects the output of the A3T-GCN onto the number of desired outputs, just like in the chickenpox example.

    However, the network's performance after training is pretty much the same, independent of this size. I get RMSEs of about 12 for all model sizes, while the GMAN is around 6. I analyzed the network outputs, and it's not just the errors but the actual outputs of the network that remain independent of size. In fact, both networks seem to somewhat reliably predict the average speed for most roads in the network, barely adapting to the current situation.

    Does anyone, especially @benedekrozemberczki as you wrote the code, have an idea why there might be so little reaction to the input data in each time step? I will not look into this during the weekend, but next week I will be perfectly happy to run any test you suggest and provide implementations and data as far as possible. Thanks in advance to anyone willing to even think about this.

    Best Regards, MoRoBe

    opened by MoRoBe-Work 0