Graph Neural Networks with Keras and TensorFlow 2.

Overview

Welcome to Spektral

Spektral is a Python library for graph deep learning, based on the Keras API and TensorFlow 2. The main goal of this project is to provide a simple but flexible framework for creating graph neural networks (GNNs).

You can use Spektral for classifying the users of a social network, predicting molecular properties, generating new graphs with GANs, clustering nodes, predicting links, and any other task where data is described by graphs.

Spektral implements some of the most popular layers for graph deep learning, including GCNConv, ChebConv, GraphSageConv, ARMAConv, ECCConv, GATConv, GINConv, and APPNPConv, among many others (see convolutional layers).

You can also find pooling layers, including MinCutPool, DiffPool, TopKPool, SAGPool, LaPool, and a family of global pooling layers (see pooling layers).

Spektral also includes lots of utilities for representing, manipulating, and transforming graphs in your graph deep learning projects.
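For example, the Graph container wraps the node features and adjacency matrix of a single graph. A minimal sketch (all values are illustrative):

import numpy as np
import scipy.sparse as sp
from spektral.data import Graph

# A toy graph: 5 nodes with 4 features each and a sparse adjacency matrix.
x = np.random.rand(5, 4)
a = sp.random(5, 5, density=0.5, format="csr")
g = Graph(x=x, a=a)
print(g)  # e.g. Graph(n_nodes=5, n_node_features=4, n_edge_features=None, n_labels=None)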

See how to get started with Spektral and have a look at the examples for some templates.
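As a quick taste of the API, here is a minimal single-mode GCN sketch (the feature size and channel counts are illustrative):

import tensorflow as tf
from spektral.layers import GCNConv, GlobalAvgPool

F = 16  # number of node features (illustrative)
x_in = tf.keras.layers.Input(shape=(F,))                  # node features
a_in = tf.keras.layers.Input(shape=(None,), sparse=True)  # adjacency matrix
x = GCNConv(64, activation="relu")([x_in, a_in])
x = GCNConv(32, activation="relu")([x, a_in])
out = GlobalAvgPool()(x)
model = tf.keras.Model(inputs=[x_in, a_in], outputs=out)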

The source code of the project is available on GitHub.
Read the documentation here.

If you want to cite Spektral in your work, refer to our paper:

Graph Neural Networks in TensorFlow and Keras with Spektral
Daniele Grattarola and Cesare Alippi

Installation

Spektral is compatible with Python 3.5+ and is tested on Ubuntu 16.04+ and macOS. Other Linux distros should work as well, but Windows is not supported for now.

The simplest way to install Spektral is from PyPI:

pip install spektral

To install Spektral from source, run this in a terminal:

git clone https://github.com/danielegrattarola/spektral.git
cd spektral
python setup.py install  # Or 'pip install .'

To install Spektral on Google Colab:

! pip install spektral

New in Spektral 1.0

The 1.0 release of Spektral is an important milestone for the library and brings many new features and improvements.

If you have already used Spektral in your projects, the only major change that you need to be aware of is the new datasets API.

This is a summary of the new features and changes:

  • The new Graph and Dataset containers standardize how Spektral handles data. This does not impact your models, but makes it easier to use your data in Spektral.
  • The new Loader class hides away all the complexity of creating graph batches. Whether you want to write a custom training loop or use Keras' famous model-dot-fit approach, you only need to worry about the training logic and not the data.
  • The new transforms module implements a wide variety of common operations on graphs, that you can now apply() to your datasets (see the sketch after this list).
  • The new GeneralConv and GeneralGNN classes let you build models that are, well... general. Using state-of-the-art results from recent literature means that you don't need to worry about which layers or architecture to choose. The defaults will work well everywhere.
  • New datasets: QM7 and ModelNet10/40, and a new wrapper for OGB datasets.
  • Major clean-up of the library's structure and dependencies.
  • New examples and tutorials.
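A minimal sketch of the workflow these changes enable (the dataset name and hyperparameters are illustrative):

from spektral.data import DisjointLoader
from spektral.datasets import TUDataset
from spektral.transforms import GCNFilter

dataset = TUDataset("PROTEINS")           # a Dataset of Graph objects
dataset.apply(GCNFilter())                # apply a transform to every graph
loader = DisjointLoader(dataset, batch_size=32)
# model.fit(loader.load(), steps_per_epoch=loader.steps_per_epoch, epochs=10)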

Contributing

Spektral is an open-source project available on GitHub, and contributions of all types are welcome. Feel free to open a pull request if you have something interesting that you want to add to the framework.

The contribution guidelines are available here and a list of feature requests is available here.

Comments
  • GNNExplainer: tensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible shapes


    Hi Daniele and all,

    thanks for creating and maintaining this great library!

    I have been trying to use GNNExplainer, but I keep seeing the error message below. I still don't know whether it's a bug or something I am doing wrong on my side, but there is not much documentation or many examples around it.

    I am able to run the sample code at https://github.com/danielegrattarola/spektral/blob/master/examples/other/explain_node_predictions.py smoothly.

    But when I apply it to my dataset (where I can successfully run a GCN model), I get the following:

    dataset
    Out[66]: SADataset(n_graphs=1)
    
    dataset[0]
    Out[67]: Graph(n_nodes=1653, n_node_features=42, n_edge_features=None, n_labels=10)
    
    x_exp, a_exp = dataset[0].x, dataset[0].a
    
    x_exp.shape
    Out[69]: (1653, 42)
    
    a_exp.shape
    Out[70]: TensorShape([1653, 1653])
    
    explainer = GNNExplainer(model, preprocess=gcn_filter, verbose=True)
    n_hops was automatically inferred to be 2
    
    node_idx = 0
    
    adj_mask, feat_mask = explainer.explain_node(x=x_exp, a=a_exp, node_idx=node_idx)
    
    pred_loss: 1.097847819328308, a_size_loss: 0.5874298214912415, a_entropy_loss: 0.0692998617887497, smoothness_loss: [[0.]], x_size_loss: 2.0829315185546875, x_entropy_loss: 0.06919442862272263
    pred_loss: 1.0877137184143066, a_size_loss: 0.5852940678596497, a_entropy_loss: 0.06929884105920792, smoothness_loss: [[0.]], x_size_loss: 2.075951099395752, x_entropy_loss: 0.06918510049581528
    [... output removed]
    pred_loss: 0.6421844959259033, a_size_loss: 0.3796449303627014, a_entropy_loss: 0.05964722856879234, smoothness_loss: [[0.]], x_size_loss: 1.379091501235962, x_entropy_loss: 0.05970795825123787
    pred_loss: 0.6415124535560608, a_size_loss: 0.37782761454582214, a_entropy_loss: 0.05948375537991524, smoothness_loss: [[0.]], x_size_loss: 1.372214436531067, x_entropy_loss: 0.059564121067523956
    
    adj_mask.shape
    Out[75]: TensorShape([2349])
    
    adj_mask
    Out[76]: 
    <tf.Variable 'Variable:0' shape=(2349,) dtype=float32, numpy=
    array([ 0.8150444 ,  0.77765435, -0.9916512 , ..., -1.0242233 ,
           -0.9629407 , -0.9988212 ], dtype=float32)>
    
    
    feat_mask.shape
    Out[77]: TensorShape([1, 42])
    
    feat_mask
    Out[78]: 
    <tf.Variable 'Variable:0' shape=(1, 42) dtype=float32, numpy=
    array([[ 0.58385307, -1.3217939 , -1.0627872 , -0.00148061, -1.0020486 ,
            -0.9942789 , -0.97092587, -0.9922697 ,  0.3853194 , -0.83190703,
            -1.1318972 , -0.99104863, -1.0001428 , -0.9827519 , -0.9750702 ,
            -0.96384555, -0.890569  , -1.0193573 ,  0.4747884 , -0.91873515,
             0.7341433 , -0.97718424, -0.86869913, -0.9699511 ,  0.37709397,
            -1.0660834 , -0.92709947, -0.89111555, -1.0546191 , -1.0837208 ,
            -1.0699799 , -1.0806109 ,  0.61809593, -0.9817147 , -1.0526807 ,
            -0.95195514, -1.0162035 , -1.181156  , -1.0657567 , -1.0472083 ,
            -0.85559815, -1.0388821 ]], dtype=float32)>
    
    G = explainer.plot_subgraph(adj_mask, feat_mask, node_idx)
    Traceback (most recent call last):
      File ".pyenv/versions/3.7.6/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3331, in run_code
        exec(code_obj, self.user_global_ns, self.user_ns)
      File "<ipython-input-74-182e1ffafc94>", line 1, in <module>
        G = explainer.plot_subgraph(adj_mask, feat_mask, node_idx)
      File ".pyenv/versions/3.7.6/lib/python3.7/site-packages/spektral/models/gnn_explainer.py", line 276, in plot_subgraph
        adj_mtx, top_ftrs = self._explainer_cleaning(a_mask, x_mask, node_idx, a_thresh)
      File ".pyenv/versions/3.7.6/lib/python3.7/site-packages/spektral/models/gnn_explainer.py", line 243, in _explainer_cleaning
        tf.multiply, self.comp_graph, selected_adj_mask
      File ".pyenv/versions/3.7.6/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py", line 206, in wrapper
        return target(*args, **kwargs)
      File ".pyenv/versions/3.7.6/lib/python3.7/site-packages/tensorflow/python/ops/sparse_ops.py", line 2931, in map_values
        op(*inner_args, **inner_kwargs),
      File ".pyenv/versions/3.7.6/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py", line 206, in wrapper
        return target(*args, **kwargs)
      File ".pyenv/versions/3.7.6/lib/python3.7/site-packages/tensorflow/python/ops/math_ops.py", line 530, in multiply
        return gen_math_ops.mul(x, y, name)
      File ".pyenv/versions/3.7.6/lib/python3.7/site-packages/tensorflow/python/ops/gen_math_ops.py", line 6240, in mul
        _ops.raise_from_not_ok_status(e, name)
      File ".pyenv/versions/3.7.6/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 6897, in raise_from_not_ok_status
        six.raise_from(core._status_to_exception(e.code, message), None)
      File "<string>", line 3, in raise_from
    tensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible shapes: [2349] vs. [2589] [Op:Mul]
    
    opened by antonioaa1979 24
  • Einsum GAT


    Status

    As of now this is just an initial port of the Multi-Head Attention code I was using. I temporarily put it in its own file to make initial development easier.

    Objective

    The main objective will be to modify the GraphAttention class and the multi_head_attention function (maybe merging them) so that they are compatible with the current implementation.

    BTW: it might be good to eventually break up the convolutional module into a folder with each layer in its own file; it would make contributing easier.

    opened by cgarciae 14
  • ImportError: cannot import name 'gen_sparse_ops' from 'tensorflow.python'


    I installed Spektral both ways: via pip install and from source.

    spektral==0.6.0

    
    from tensorflow.python import keras
    print(keras.__version__)
    2.4.0
    

    and

    import tensorflow as tf 
    print(tf.keras.__version__)
    2.4.0
    
    

    and

    print(tf.__version__)
    2.3.0
    

    I am still getting an import error.

    import spektral
    
    

    ImportError: cannot import name 'gen_sparse_ops' from 'tensorflow.python' (/home/abdul/anaconda3/envs/tf-gpu/lib/python3.8/site-packages/tensorflow/python/__init__.py)

    opened by Abdulk084 12
  • Why is the adjacency matrix taken in to graph convolutional layers each time? and self loops


    Hi!

    I am building a graph convolutional network that will be used in conjunction with a merged layer for a reinforcement learning task.

    I have a technical question about the convolutional layer itself that is slightly confusing to me: why is the adjacency matrix passed into each conv layer and not ONLY the first one? My code is as follows:

    
    import networkx as nx
    import numpy as np
    import tensorflow as tf
    from spektral.layers import GCNConv, GlobalAvgPool
    from spektral.utils import normalized_adjacency, sp_matrix_to_sp_tensor  # import path may vary by version
    
    adj = nx.to_numpy_array(graph)
    
    node_features = []  # just the degree of the graph nodes
    node_degree = nx.degree(graph)
    for i in dict(node_degree).values():
        node_features.append(i / len(graph))
    
    node_features_final = np.array(node_features).reshape(-1, 1)
    
    adj_normalised = normalized_adjacency(adj)
    adj_normalised = sp_matrix_to_sp_tensor(adj_normalised)
    node_feature_shape = 1
    
    
    nodefeature_input = tf.keras.layers.Input(shape=(node_feature_shape,), name='node_features_input')
    adjacency_input = tf.keras.layers.Input(shape=(None,), name='adjacency_input', sparse=True)
    
    conv_layer_one = GCNConv(64, activation='relu')([nodefeature_input, adj_normalised])
    conv_layer_one = tf.keras.layers.Dropout(0.2)(conv_layer_one)
    conv_layer_two = GCNConv(32, activation='relu')([conv_layer_one, adj_normalised])
    conv_layer_pool = GlobalAvgPool()(conv_layer_two)
    dense_layer_graph = tf.keras.layers.Dense(128, activation='relu')(conv_layer_pool)
    
    input_action_vector = tf.keras.layers.Input(shape=(action_vector,), name='action_vec_input')
    action_vector_dense = tf.keras.layers.Dense(128, activation='relu', name='action_layer_dense')(input_action_vector)
    
    merged_layer = tf.keras.layers.Concatenate()([dense_layer_graph, action_vector_dense])
    #output_layer... etc
    model = Model([nodefeature_input, adjacency_input], [output_layer])
    
    

    And my second question is about normalized_adjacency: it does not add self-loops. Should self-loops be added before or after normalising the matrix?

    thank you!
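
    For reference, the standard GCN formulation adds self-loops before normalizing; a minimal SciPy sketch of that preprocessing (roughly what gcn_filter / GCNConv.preprocess computes):
    
    import numpy as np
    import scipy.sparse as sp
    
    def gcn_preprocess(a):
        # Add self-loops first, then symmetrically normalize:
        # A_hat = D^(-1/2) (A + I) D^(-1/2)
        a = sp.csr_matrix(a) + sp.eye(a.shape[0])
        d_inv_sqrt = np.asarray(a.sum(axis=1)).flatten() ** -0.5  # degrees >= 1 after adding self-loops
        return sp.diags(d_inv_sqrt) @ a @ sp.diags(d_inv_sqrt)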

    opened by amjass12 10
  • Node-level classification in Disjoint mode with batch size > 1 or node_level=True: dimensionality of target variable y


    I have a GNN that works when I specify the loader as:

    loader = spektral.data.loaders.DisjointLoader(dataset, batch_size=1)
    

    However, when I increase the batch size, e.g.:

    loader = spektral.data.loaders.DisjointLoader(dataset, batch_size=2)
    

    I get:

    Traceback (most recent call last):
      File "/Users/hca/PycharmProjects/Switching%20notes/ai/tests/test.py", line 30, in <module>
        model.fit(loader.load(), steps_per_epoch=loader.steps_per_epoch, epochs=3)
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 1100, in fit
        tmp_logs = self.train_function(iterator)
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 828, in __call__
        result = self._call(*args, **kwds)
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 888, in _call
        return self._stateless_fn(*args, **kwds)
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 2942, in __call__
        return graph_function._call_flat(
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 1918, in _call_flat
        return self._build_call_outputs(self._inference_function.call(
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 555, in call
        outputs = execute.execute(
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/tensorflow/python/eager/execute.py", line 59, in quick_execute
        tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
    tensorflow.python.framework.errors_impl.InvalidArgumentError:  Incompatible shapes: [26,1] vs. [2,13]
    	 [[node gradient_tape/binary_crossentropy/logistic_loss/mul/BroadcastGradientArgs (defined at Users/hca/PycharmProjects/Switching%20notes/ai/tests/test.py:30) ]] [Op:__inference_train_function_3011]
    
    Function call stack:
    train_function
    

    Here it seems to me that the loader adds an additional dimension to the target variable, but this dimension is not expected by the model itself. I can specify a batch size inside model.fit(), and that works as long as I don't specify a batch size larger than 1 inside DisjointLoader, but I am not sure how these interact and whether that is a good idea. Is there something else that I should do when I want to run batches?

    Also, regarding node-level classification, when I specify

    loader = spektral.data.loaders.DisjointLoader(dataset, node_level=True)
    

    I get:

    Traceback (most recent call last):
      File "/Users/hca/PycharmProjects/Switching%20notes/ai/tests/test.py", line 30, in <module>
        model.fit(loader.load(), steps_per_epoch=loader.steps_per_epoch, epochs=3)
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 1100, in fit
        tmp_logs = self.train_function(iterator)
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 828, in __call__
        result = self._call(*args, **kwds)
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 888, in _call
        return self._stateless_fn(*args, **kwds)
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 2942, in __call__
        return graph_function._call_flat(
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 1918, in _call_flat
        return self._build_call_outputs(self._inference_function.call(
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 555, in call
        outputs = execute.execute(
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/tensorflow/python/eager/execute.py", line 59, in quick_execute
        tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
    tensorflow.python.framework.errors_impl.InvalidArgumentError:  TypeError: `generator` yielded an element of ((TensorSpec(shape=(13, 36), dtype=tf.float64, name=None), SparseTensorSpec(TensorShape([13, 13]), tf.int64), TensorSpec(shape=(13,), dtype=tf.int64, name=None)), TensorSpec(shape=(1, 13), dtype=tf.float64, name=None)) where an element of ((TensorSpec(shape=(None, 36), dtype=tf.float64, name=None), SparseTensorSpec(TensorShape([None, None]), tf.int64), TensorSpec(shape=(None,), dtype=tf.int64, name=None)), TensorSpec(shape=(None,), dtype=tf.float64, name=None)) was expected.
    Traceback (most recent call last):
    
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/tensorflow/python/ops/script_ops.py", line 247, in __call__
        return func(device, token, args)
    
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/tensorflow/python/ops/script_ops.py", line 135, in __call__
        ret = self._func(*args)
    
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/tensorflow/python/autograph/impl/api.py", line 620, in wrapper
        return func(*args, **kwargs)
    
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/tensorflow/python/data/ops/dataset_ops.py", line 976, in generator_py_func
        raise TypeError(
    
    TypeError: `generator` yielded an element of ((TensorSpec(shape=(13, 36), dtype=tf.float64, name=None), SparseTensorSpec(TensorShape([13, 13]), tf.int64), TensorSpec(shape=(13,), dtype=tf.int64, name=None)), TensorSpec(shape=(1, 13), dtype=tf.float64, name=None)) where an element of ((TensorSpec(shape=(None, 36), dtype=tf.float64, name=None), SparseTensorSpec(TensorShape([None, None]), tf.int64), TensorSpec(shape=(None,), dtype=tf.int64, name=None)), TensorSpec(shape=(None,), dtype=tf.float64, name=None)) was expected.
    
    
    	 [[{{node EagerPyFunc}}]]
    	 [[IteratorGetNext]] [Op:__inference_train_function_3015]
    
    Function call stack:
    train_function
    

    In this case, the shape of y is (1, 13), where (None,) is expected. However, in the dataset, y has dimensions (13,), which seems to me could have been correct, had DisjointLoader not changed it to (1, 13). What am I missing here?

    My GNN code is as follows:

    import spektral
    from dataset_class import GNN_Dataset
    from spektral.data.dataset import Dataset
    from spektral.layers import GraphSageConv
    from tensorflow.keras.layers import Dense, Input
    from tensorflow.keras.models import Model
    
    class SN_GNN(Model):
        def __init__(self):
            super().__init__()
            self.X_in = Input(shape=(13, ),
                         name='X_in')
            self.A_in = Input(shape=(None,),
                         sparse=True,
                         name='A_in')
            self.GraphSage = GraphSageConv(32)
            self.output_layer = Dense(1, activation='softmax')
    
        def call(self, inputs):
            x, a = inputs[0], inputs[1]
            x = self.GraphSage([x, a])
            out = self.output_layer(x)
            return out
    model = SN_GNN()
    dataset = GNN_Dataset('/dataset/sn/')
    loader = spektral.data.loaders.DisjointLoader(dataset, node_level=True)
    model.compile(optimizer='Adam', loss='binary_crossentropy')
    model.fit(loader.load(), steps_per_epoch=loader.steps_per_epoch, epochs=3)
    
    opened by herman-nside 10
  • GINConv use example


    Hello @danielegrattarola, could you please provide an example of using the GINConv layer in the examples? I have a problem passing a tensor (the output of a Keras Input layer) to this layer during model definition. It's connected with the propagate method in the MessagePassing class:

    Model Structure:

    X_in = Input(shape=(F, ))
    A_in = Input(shape=(N, ), sparse=True)
    gc1 = GINConv(channels=300, mlp_activation='relu',)([X_in, A_in])
    

    The error relates to: self.index_i = A.indices[:, 0]

    Error type: TypeError: 'SparseTensor' object is not subscriptable.

    opened by JMcsLk 10
  • Incompatible with Keras >= 2.3 and tf.keras


    Hi,

    It seems that this is now incompatible with the latest Keras release (2.2.5) from August 22, 2019. Here are the details:

    Environment

    Python 3.7
    spektral==0.0.12
    tensorflow==1.14.0
    keras==2.2.5

    How to reproduce

    Run a Python script that imports spektral:

    import spektral
    

    Expected behaviour

    Everything runs smoothly and spektral is imported correctly.

    Observed behaviour

    We get an import error:

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "<path_to_virtualenv>/lib/python3.7/site-packages/spektral/__init__.py", line 5, in <module>
        from . import layers
      File "<path_to_virtualenv>/lib/python3.7/site-packages/spektral/layers/__init__.py", line 4, in <module>
        from .convolutional import *
      File "<path_to_virtualenv>/lib/python3.7/site-packages/spektral/layers/convolutional.py", line 5, in <module>
        from keras.backend import tf
    ImportError: cannot import name 'tf' from 'keras.backend' (<path_to_virtualenv>/lib/python3.7/site-packages/keras/backend/__init__.py)
    

    Suggested change

    In the short run, I think it would suffice to pin the Keras version in setup.py to 2.2.4 (this fixed the issue in my project for now).

    Thanks, Tudor.

    opened by a96tudor 10
  • How to run GATConv in batch mode?


    Hi, Thanks a lot for your great package.

    As it says in the docs, GATConv supports batch mode. However, the basic GAT algorithm uses the full dataset to compute attention scores, so I was wondering how I could provide batches to GATConv?
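
    For reference, batch mode means dense inputs with a leading batch dimension, so attention is computed within each graph rather than over the full dataset. A minimal sketch (all shapes are illustrative):
    
    import numpy as np
    import tensorflow as tf
    from spektral.layers import GATConv
    
    B, N, F = 8, 20, 4                                     # graphs, nodes, features (illustrative)
    x = np.random.rand(B, N, F).astype("float32")          # node features
    a = (np.random.rand(B, N, N) > 0.7).astype("float32")  # one dense adjacency per graph
    
    x_in = tf.keras.layers.Input(shape=(N, F))
    a_in = tf.keras.layers.Input(shape=(N, N))
    out = GATConv(channels=16, attn_heads=4)([x_in, a_in])
    model = tf.keras.Model([x_in, a_in], out)
    print(model([x, a]).shape)                             # (8, 20, 64): heads concatenated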

    opened by taherhekmatfar 9
  • Input with batch dimension for GCN


    Hi Daniele, thank you for this really useful package!

    I have a question about the input to my GCN. I am attempting to merge a GCN with a CNN. Construction of the model works fine; however, when specifying a batch dimension (because the merged model requires this), I am confused about the Input layer for the GCN, as it is throwing an error at the concatenation layer. The model is as follows:

    def graph_cnn(state_adjacency, cnn_input_shape):
        '''create merged NN with GCN representing environment state
            and CNN representing agent position'''
    
        #CNN branch
        cnn_branch_input = tf.keras.layers.Input(shape=(4,4,1))
        cnn_branch_two = tf.keras.layers.Conv2D(32, (2, 2), activation='relu', padding='same')(cnn_branch_input)
        cnn_branch_three = tf.keras.layers.MaxPooling2D(1, 1)(cnn_branch_two)
        cnn_branch_four = tf.keras.layers.Conv2D(32, (2, 2), activation='relu', padding='same')(cnn_branch_three)
        cnn_branch_five = tf.keras.layers.Flatten()(cnn_branch_four)
        cnn_branch_six = tf.keras.layers.Dense(32, activation='relu')(cnn_branch_five)
    
        #GCN branch: Spektral library
        #node_features = 
        #preprocess adjacency matrix -- self loops
    
        node_feat_input = tf.keras.layers.Input(shape=(4,), name='node_feature_inp_layer')
        graph_input_adj = tf.keras.layers.Input(len(adjacency), sparse=True, name='graph_adj_layer')
        gnn_branch = GraphConv(16, 'relu')([node_feat_input, graph_input_adj])
        gnn_branch = tf.keras.layers.Dropout(0.5)(gnn_branch)
        gnn_branch_two = GraphConv(1, 'linear')([gnn_branch, graph_input_adj])
        gnn_branch_two = tf.keras.layers.Dense(32, activation='relu')(gnn_branch_two)
    
        #merged layer
        merged = tf.keras.layers.Concatenate(axis=1)([gnn_branch_two, cnn_branch_six ])
    
        #output layer: action prediction
        output_layer = tf.keras.layers.Dense(7, activation = 'linear')(merged)
        #put model together
        merged_model = tf.keras.models.Model(inputs=[cnn_branch_input, node_feat_input, graph_input_adj],
                                            outputs=[output_layer])
        #compile model
        merged_model.compile(optimizer='adam', 
                             weighted_metrics=['acc'],
                             loss='mse')
    
        return merged_model
    
    

    The inputs (made smaller just to establish the pipeline) are: input = np.array((adj)) -- the adjacency matrix (4,4); node_features = input (just set to input for the purpose of running the pipeline).

    and now the reshaping (which is where the error occurs). For the CNN, the input shape will be the same size as the adjacency matrix, but it will be an array with different 0's and 1's to the graph (again, for the purpose of establishing the pipeline, I have just made it the adjacency matrix).

    cnn_input = np.expand_dims(input, 2)
    cnn_input = np.expand_dims(cnn_input, axis=0)
    shape = (1,4,4,1)
    
    gcn_input = input (adjacency matrix).
    gcn_input.shape = (1,4,4)
    node_feature.shape = (1,4,4)
    y.shape = (1,4,7)
    
    

    My confusion is about the Input layer for the graph network: the current shapes are shape=(4,) and len(adjacency), as you can see. When I run this model just to see if I can get it to start training, I receive the following error.

    model.fit([x, gcn_input, node_feature], y,
        #batch_size=4,
        shuffle=False)
    

    ValueError: A Concatenate layer requires inputs with matching shapes except for the concat axis. Got inputs shapes: [(None, 4, 32), (None, 32)]

    the shapes of my inputs are as follows (they need to have 1 as the initial batch dimension, but in real training they will be in batches of 16 or 32).

    I'm not sure how to fix the (None, 4, 32) dimension to be the required (None, 32) for the concatenation layer! Any help is much appreciated!

    Thanks, and sorry for the long post; I hope the code is informative in diagnosing the problem.

    opened by amjass12 9
  • Contributing more pooling and convolution layers


    Hey, I have implementations of the following few graph neural network components:

    Convolutions:

    Pooling:

    I would love to add these to your framework; however, I am a bit lost with all the requirements that my implementations need to fulfil in order to integrate seamlessly. Is there any guide for what tests, features, or properties layers need to have?

    Best, Levi

    opened by LeviBorodenko 9
  • predicting on a batch of sparse tensors


    Hi @danielegrattarola ,

    I posted the other day about the adjacency matrix input - sorry for a second post, but I am now trying to predict on a batch of sparse tensors with no success. I won't post all of the code; however, I have tried feeding in the adjacency matrix by creating a dataset and then using the BatchLoader, with no success, as well as the following:

    dummy network:

    nodefeature_input = tf.keras.layers.Input(shape=(node_feature_shape,), name='node_features_input')
    adjacency_input = tf.keras.layers.Input(shape=(None,), name='adjacency_input', sparse=True)
    
    conv_layer_one = GCNConv(64, activation='relu')([nodefeature_input, adjacency_input])
    conv_layer_one = tf.keras.layers.Dropout(0.2)(conv_layer_one)
    conv_layer_two = GCNConv(32, activation='relu')([conv_layer_one, adjacency_input])
    conv_layer_pool = GlobalAvgPool()(conv_layer_two)
    dense_layer_graph = tf.keras.layers.Dense(128, activation='relu')(conv_layer_pool)
    
    dummy_gnn = Model(inputs=[nodefeature_input ,adjacency_input], outputs=[dense_layer_graph])
    

    if i create a dummy batch of data:

    adj_matrix = nx.adjacency_matrix(nx_graph) 
    x = []
    for i in range(10):
        x.append(adj_matrix) #its the same graph for the batch, but this is just to try with a batch of data
    
    practice_batch = [GCNConv.preprocess(i) for i in x]
    practice_batch = [sp_matrix_to_sp_tensor(i) for i in practice_batch]
    
    y = np.zeros((10, 176, 1))  # targets: just the node features
    for i in range(10):
        y[i] = node_features
    

    If I predict on one sample (as discussed the other day), this works without issue:

    dummy_gnn([y[0], practice_batch[0]])

    however, if I now predict on a batch of sparse tensors, this fails:

    sp_batch = []
    for i in practice_batch:
        sp_batch.append(tf.sparse.expand_dims(i, 0))
    sp_batch = tf.sparse.concat(sp_inputs= sp_batch, axis=0)
    dummy_gnn([y.reshape(1, 10, 176, 1), sp_batch])
    

    this now produces an error: AssertionError: Expected a of rank 2 or 3, got 4

    If I create a dummy network with only an input for the node features (y), the batch is accepted without issue, so the problem seems to be with the batch of sparse tensors (the adjacency matrix input). I have tried so many different ways of feeding these tensors in; any help is appreciated!! thank you again :)

    opened by amjass12 8
  • Name Importerror: 'EdgeConditionedConv' and 'batch_iterator'


    ImportError: cannot import name 'EdgeConditionedConv' from 'spektral.layers' (/usr/local/lib/python3.8/dist-packages/spektral/layers/__init__.py)

    ImportError: cannot import name 'batch_iterator' from 'spektral.utils' (/usr/local/lib/python3.8/dist-packages/spektral/utils/__init__.py)

    I could not solve these two ImportErrors. I installed Spektral successfully with all its packages, but neither my local machine nor Google Colab has these two names, 'EdgeConditionedConv' and 'batch_iterator'. Please help me find a solution.

    opened by Akshay1010567 0
  • Update setup.py


    Hi!

    I updated setup.py for the package to reflect specific dependencies when running on Apple Silicon (it now installs tensorflow-macos instead of tensorflow when this is the case). It solved some problems I was having when integrating spektral into a package whose dependencies I'm managing using poetry. It may help others too :)

    I tested it on an M1 MacBook Pro, installing both via pip and poetry. I'm open to suggestions regarding other implementations of the same fix, if you're interested.

    Best! Lucas

    opened by lucasmiranda42 1
  • Errors when loading included datasets


    I'm running into two issues when trying to load a dataset.

    1. With the TUDataset, the clean URL doesn't exist anymore. I changed tudataset.py, line 55, to url_clean = ("https://www.chrsmrrs.com/graphkerneldatasets") and that seems to work.

    2. This error happened with any TUDataset and with OGB. When I load a dataset, I get an error from dataset.py:

    dataset = TUDataset(name='PROTEINS', clean=True)

    Here's the error: ----> 2 dataset = TUDataset(name='PROTEINS', clean=False)

    File ~/miniconda3/envs/GraphDLenv/lib/python3.8/site-packages/spektral/datasets/tudataset.py:66, in TUDataset.__init__(self, name, clean, **kwargs)
         64 self.name = name
         65 self.clean = clean
    ---> 66 super().__init__(**kwargs)
    
    File ~/miniconda3/envs/GraphDLenv/lib/python3.8/site-packages/spektral/data/dataset.py:119, in Dataset.__init__(self, transforms, datainputs, **kwargs)
        116 self.download()
        118 # Read graphs
    --> 119 self.graphs = self.read(datainputs)
    
    ... TypeError: read() takes 1 positional argument but 2 were given

    I'm not wise enough in the ways of Python to know why it's not loading. Any help is appreciated, thanks for this awesome library!

    opened by mgandaman 1
  • loading trained Spektral GeneralGNN model


    Hi,

    I use the GeneralGNN model from Spektral with my own dataset. Training and evaluation work fine, but when I try to load the trained model, I get different errors for different loading approaches, for example weights, SavedModel, model.to_json, etc. So my question is: how do I save and load a trained GeneralGNN model? Is there any way to do it? Note: I do not make any changes to this model.

    thanks.

    opened by senemaktas 0
  • Add new node(s) to the graph with trained model.


    Could someone suggest how to add new node(s) to a single graph that already has a trained model for node classification, in order to make predictions (classification) for these new nodes with the existing model?

    opened by cappelchi 0
  • Generate score based on a single node instead of aggregating the whole graph


    Hi! I currently have a model (implemented via subclassing) that has 2 ECCConv layers, then aggregates the node embeddings with a global sum and runs the result through an NN to get a score value. I also use DisjointLoader to batch graphs together.

    I'd like to try generating that score based only on one of the nodes' encodings, instead of aggregating the whole graph.

    I was hoping you could recommend the best way of doing this, since none of the implemented pooling layers seem to do it.

    Thanks!
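
    One possible direction, sketched under the assumption that the batch index vector from DisjointLoader is available, is to gather one node's embedding per graph instead of pooling:
    
    import tensorflow as tf
    
    x = tf.random.normal((7, 32))           # 7 node embeddings across 2 disjoint graphs (illustrative)
    i = tf.constant([0, 0, 0, 0, 1, 1, 1])  # graph membership of each node, from the loader
    # Index of the first node of each graph in the disjoint batch:
    first = tf.concat([tf.zeros(1, tf.int64), tf.where(i[1:] != i[:-1])[:, 0] + 1], axis=0)
    selected = tf.gather(x, first)          # (2, 32): one node embedding per graph
    # `selected` can then replace the global sum before the scoring network.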

    opened by gonzalo-menendez 5
Releases
  • v1.2 (Jul 22, 2022)


    This release brings some new features and improvements.

    New features

    • New convolutional layer CensNetConv
    • New batch-mode version of GINConv
    • New pooling layers: JustBalancePool and DMoNPool
    • New datasets: DBLP and Flickr

    Compatibility changes

    • Python 3.6 is no longer supported officially

    API changes

    • XENetDenseConv is now called XENetConvBatch

    Bugfixes

    • Fix crash when using Disjoint2Batch and improve the performance of the layer
    • Fix minor bug that would block kwargs forwarding in SRC layers (only affects custom layers, not the ones in the library)
    • Fix preprocess method in DiffusionConv
  • v1.1 (Apr 9, 2022)


    This release mostly introduces the new Select, Reduce, Connect API for pooling layers and a bunch of features, improvements, and bugfixes from previous patches.

    Most of the new features are backward compatible with two notable exceptions:

    • Pooling layers must be ported to the new SRC interface; see the documentation for more details.
    • Custom MessagePassing layers that used get_i and get_j must be updated to use get_targets and get_sources, as sketched below. This only affects you if you have a custom implementation based on the MessagePassing class; otherwise the change will be transparent.
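
    A hypothetical migration sketch (the layer itself is illustrative):
    
    from spektral.layers import MessagePassing
    
    class MyConv(MessagePassing):
        # A toy message function showing the renamed accessors.
        def message(self, x, **kwargs):
            x_i = self.get_targets(x)   # was: self.get_i(x)
            x_j = self.get_sources(x)   # was: self.get_j(x)
            return x_j - x_i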

    This version of Spektral supports Python 3.6 and up and TensorFlow 2.2 and up.

    New features

    • New general class for pooling methods based on the Select, Reduce, Connect framework (https://arxiv.org/abs/2110.05292)
    • Node-level label support in BatchLoader
    • New GCN model
    • GNNExplainer model
    • XENetConv convolutional layer
    • LaPool pooling layer
    • GATConv now supports weighted adjacency matrices

    Compatibility changes

    • Update minimum supported Python version to 3.6
    • Update minimum supported TensorFlow version to 2.2

    API changes

    • Remove channels argument from CrystalConv (output must be the same size as input)
    • All pooling layers are now based on SRC and have a unified interface. See docs for more details (migration from the old layers should be straightforward by changing relevant keyword arguments)
    • Rename "i" and "j" with "targets" and "sources" in the MessagePassing-based classes

    Bugfixes

    • Fix bug in GlobalAttnSumPool that caused the readout to apply attention to the full disjoint batch
    • Fix parsing of QM9 to return the full 19-dimensional labels

    Other

    • Minor fixes in examples
    • GCN/GAT examples are now more consistent with the original papers
  • v1.0 (Nov 30, 2020)

    The 1.0 release of Spektral is an important milestone for the library and brings many new features and improvements; see "New in Spektral 1.0" above for the full summary of changes.
Owner
Daniele Grattarola
PhD student @ Università della Svizzera italiana