Recommendation algorithms for large graphs


pygrank

Fast recommendation algorithms for large graphs based on link analysis.

License: Apache Software License
Author: Emmanouil (Manios) Krasanakis
Dependencies: networkx, numpy, scipy, sklearn, wget (required); tensorflow, torch (optional)


Roadmap for 0.2.X

The following roadmap overviews short-term development goals and will be updated appropriately.

✔️ Reach a stable architecture with comprehensive development management (achieved as of 0.2.3; no longer backwards compatible with 0.1.17, the most important change being that to_scipy was replaced by preprocessor)
✔️ Graph neural network support with dropout, renormalization and a tensorflow backend (achieved as of 0.2.3)
✔️ PyTorch backend (achieved as of 0.2.4)
PyTorch GNNs
100% code coverage
100% documentation completeness
Automatic download for all related publication datasets
Updated reference docs and automated citation discovery for algorithms
Enable Arnoldi and Lanczos optimizations in non-numpy backends
Transparent handling of float and double precisions (as of 0.2.4 everything is in float32, but this will change)

🛠️ Installation

pygrank is meant to work with Python 3.9 or later. It can be installed with pip via:

pip install pygrank

To use the machine learning backends tensorflow and pytorch (e.g. to integrate the package in machine learning projects), manually change the automatically created configuration file whose path is displayed in the error console. If you want others to run your code that depends on pygrank with specific backends, add the following recipe at your code's entry point to override other configurations:

import pygrank as pg
pg.load_backend("pytorch")

Quickstart

As a quick start, let us construct a networkx graph and a set of seed nodes.

>>> import networkx as nx
>>> graph = nx.Graph()
>>> graph.add_edge("A", "B")
>>> graph.add_edge("B", "C")
>>> graph.add_edge("C", "D")
>>> graph.add_edge("D", "E")
>>> graph.add_edge("A", "C")
>>> graph.add_edge("C", "E")
>>> graph.add_edge("B", "E")
>>> seeds = {"A", "B"}

We now run a personalized PageRank graph filter to score the structural relatedness of graph nodes to those of the given set. We start by importing the library:

>>> import pygrank as pg

For instructional purposes, we select the PageRank filter. This and more filters can be found in the module pygrank.algorithms.filters, but for ease of use they can also be accessed from the top-level import. We also set the values of some parameters: the graph diffusion rate alpha required by the algorithm, a numerical tolerance tol at the convergence point, and the "auto" normalization strategy for preprocessing the graph adjacency matrix, which selects symmetric normalization for undirected graphs (as in this example) and column-based normalization for directed ones.

>>> ranker = pg.PageRank(alpha=0.85, tol=1.E-6, normalization="auto")
>>> ranks = ranker(graph, {v: 1 for v in seeds})

Node ranking output is always organized into graph signals, which can be used like dictionaries. For example, we can print the scores of some nodes as follows:

>>> print(ranks["B"], ranks["D"], ranks["E"])
0.25865456609095644 0.12484722044728883 0.17079023174039495
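
The same call can also be made by first packing the personalization into a graph signal with pg.to_signal (the same helper appears in the comments further below); a minimal sketch under that assumption:

>>> signal = pg.to_signal(graph, {v: 1 for v in seeds})  # same personalization as above
>>> ranks = ranker(signal)  # rankers accept graph signals directly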

We can alter this outcome so that it reports node order instead, where higher node scores are assigned lower (better) order. This is achieved by wrapping a postprocessor around the algorithm. There are various postprocessors, including ones that make scores fairness-aware. Again, postprocessors can be found in pygrank.algorithms.postprocess, but as a shortcut they can be accessed from the top-level package import.

>>> ordinals = pg.Ordinals(ranker).rank(graph, {v: 1 for v in seeds})
>>> print(ordinals["B"], ordinals["D"], ordinals["E"])
1 5 4

How much time did it take for the base ranker to converge? (Depends on backend and device characteristics.)

>>> print(ranker.convergence)
19 iterations (0.001831000000009908 sec)

Since only the node order is important, we can use a different way to specify convergence:

>>> convergence = pg.RankOrderConvergenceManager(pagerank_alpha=0.85, confidence=0.98) 
>>> early_stop_ranker = pg.PageRank(alpha=0.85, convergence=convergence)
>>> ordinals = pg.Ordinals(early_stop_ranker).rank(graph, {v: 1 for v in seeds})
>>> print(early_stop_ranker.convergence)
2 iterations (0.0006313000000091051 sec)
>>> print(ordinals["B"], ordinals["D"], ordinals["E"])
1 5 4

Close to the previous results at a fraction of the time! Note that convergence time measurements do not take into account the time needed to preprocess graphs.

🧠 Overview

Analyzing graph edges (links) between nodes can help rank/score graph nodes based on their structural proximity to structural or attribute-based communities of nodes. With the introduction of graph signal processing and decoupled graph neural networks, the importance of node ranking has drastically increased, as its ability to perform inductive learning by quickly spreading node information through edges has been theoretically and experimentally corroborated. For example, node ranking can be used to make predictions based on a few known node attributes or to refine predictions produced by low-quality feature-based machine learning models.
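
As a rough sketch of this idea (reusing only constructs from the quickstart above, not a prescribed recipe), a binary attribute known for a few nodes can be diffused through a graph filter to estimate how strongly it applies to the remaining ones:

>>> import pygrank as pg
>>> known_attribute = {"A": 1, "B": 1}  # nodes where a binary attribute is known to hold
>>> scores = pg.PageRank(alpha=0.85)(graph, known_attribute)  # spread the attribute through edges
>>> best = max(["C", "D", "E"], key=lambda v: scores[v])  # unlabeled node most related to the attribute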

pygrank is a collection of node ranking algorithms and practices that support real-world conditions, such as large graphs and heterogeneous preprocessing and postprocessing requirements. Thus, it provides ready-to-use tools that simplify the deployment of theoretical advancements and the testing of new algorithms.

Some of the library's advantages are:

  1. Compatibility with networkx, tensorflow and pytorch.
  2. Data-centric interfaces that do not require transformations to identifiers.
  3. Large graph support with sparse representations and fast algorithms.
  4. Seamless pipelines, from graph preprocessing up to benchmarking and evaluation.
  5. Modular combination of components (see the sketch below).
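
For example, the modular combination of point 5 can be sketched by chaining components already shown on this page (pg.Normalize also appears in a bug report among the comments below):

>>> import pygrank as pg
>>> algorithm = pg.Ordinals(pg.Normalize(pg.PageRank(alpha=0.85)))  # a filter wrapped by two postprocessors
>>> ordinals = algorithm(graph, {v: 1 for v in seeds})  # runs the whole pipeline in one call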

🔗 Material

Tutorials & Documentation

Quick links
Measures
Graph Filters
Postprocessors
Tuners
Downloadable Datasets

🔥 Features

  • Graph filters
  • Community detection
  • Graph normalization
  • Convergence criteria
  • Postprocessing (e.g. fairness awareness)
  • Evaluation measures
  • Benchmarks
  • Autotuning

👍 Contributing

Feel free to contribute in any way, for example through the issue tracker or by participating in discussions. Please check out the contribution guidelines before modifying the code base, and make sure to follow the pull request checklist described there.

📓 Citation

If pygrank has been useful in your research and you would like to cite it in a scientific publication, please refer to the following paper:

TBD

To publish research that uses provided methods, please cite the appropriate publications.

Comments
  • Performant use of sparse matrices?

    I'm trying to use pygrank with larger graphs: 100k-1m nodes, hundreds of millions of edges. My graphs are in sparse matrix format. So far I've just converted to networkx and used those:

    import networkx as nx
    import pygrank as pg
    
    # A is a scipy sparse adjacency matrix and seeds an iterable of node indices
    g = nx.from_scipy_sparse_array(A, create_using=nx.DiGraph)
    
    signal_dict = {i: 1.0 for i in seeds}
    signal = pg.to_signal(g, signal_dict)
    
    # normalize the signal so that scores sum to 1
    signal.np /= signal.np.sum()
    
    # algorithm is any pygrank node ranking algorithm
    result = algorithm(signal).np
    
    

    Is there a more performant option available?

    opened by deklanw 5
  • Tune on non-seeds?

    Is it possible to run the tuners with non-seed nodes? For example if I have a seed_set and a target_set can I run the tuner diffusions with the signal from the former but optimize for metrics defined with respect to the latter? In this case I have a desired ranking of the nodes in the target_set.

    opened by deklanw 2
  • Seamless verbosity

    Automatically display verbose progress (e.g. for dataset downloading) that disappears once tasks are complete (e.g. to resume normal benchmarking prints).

    feature 
    opened by maniospas 2
  • Automatically switching between backend interpretations

    Currently, graph signal .np attributes need to be manually converted between backends. Switch to a @property getter interface that automatically performs needed conversions to remove the burden of checking for backend compliance from developers.
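
    A minimal sketch of the proposed pattern (class and helper names here are hypothetical, not the library's actual implementation):

    def _convert_to_current_backend(values):
        # stand-in for a real conversion between backend tensor types
        return values

    class GraphSignal:
        def __init__(self, values):
            self._values = values  # may hold numpy, tensorflow, or torch data

        @property
        def np(self):
            # the getter converts on access, so callers never check backend compliance
            return _convert_to_current_backend(self._values)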

    feature 
    opened by maniospas 1
  • Numeric graph signal operations

    Currently, there needs to be a clear distinction between graph signal objects and the extraction of their .np fields. Reframe the code so that, when signal operations are employed, their respective .np fields are used in their place. This can help write comprehensible high-level code.
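
    A minimal sketch of how such operations could unwrap .np fields through operator overloading (hypothetical code, for illustration only):

    class GraphSignal:
        def __init__(self, np):
            self.np = np  # backend array holding one value per node

        def __add__(self, other):
            # operate on .np fields so that signal1 + signal2 needs no manual extraction
            other = other.np if isinstance(other, GraphSignal) else other
            return GraphSignal(self.np + other)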

    feature 
    opened by maniospas 1
  • Torch GNN support

    Current helper methods for GNNs are centered on tensorflow and keras. Create backend operations that abstract them so that they can also be implemented through torch. This requires adding a new test to make sure everything works.

    Related tests: tests.test_gnn.test_appnp

    feature 
    opened by maniospas 1
  • Citation discovery for postprocessors

    Recommend how to cite graph filters and tuners through their cite() method.

    Related tests: test_autorefs.test_autorefs, test_autorefs.test_postprocessor_citations

    feature 
    opened by maniospas 1
  • Citation discovery for graph filters

    Recommend how to cite graph filters and tuners through their cite() method.

    Related tests: test_autorefs.test_autorefs, test_autorefs.test_filter_citations

    feature 
    opened by maniospas 1
  • Automatic citation discovery

    Since node ranking algorithms can comprise multiple components, some of which are implicitly determined (e.g. through default instantiation or specific argument parameters), create methods that can summarize used components and provide citations.

    Usefulness: This can help streamline citation practices.

    Related tests: None

    feature 
    opened by maniospas 1
  • optimization_dict not improving performance

    The optimization_dict argument of the ClosedFormGraphFilter class does not seem to produce an improvement in running time. This could indicate either a bug or bottlenecks in other parts of the pipeline, e.g. in graph signal instantiation.

    Version: run with version 0.2.3, adjusted to run experiments 50 times when measuring time

    Demonstration:

    >>> import pygrank as pg
    >>> optimization_dict = dict()
    >>> pg.benchmark_print(pg.benchmark({"HK": pg.HeatKernel(optimization_dict=optimization_dict)}, pg.load_datasets_all_communities(["bigraph"]), metric="time"))
                   	 HK 
    bigraph0       	 3.06
    bigraph1       	 3.36
    >>> pg.benchmark_print(pg.benchmark({"HK": pg.HeatKernel()}, pg.load_datasets_all_communities(["bigraph"]), metric="time"))
                   	 HK 
    bigraph0       	 2.98
    bigraph1       	 2.96
    

    Related tests: None

    bug fixed 
    opened by maniospas 1
  • Implementing PageRank as a GenericGraphFilter does not seem to work

    In the following code, the two algorithms should be close to equivalent, yet there is a significant Mabs error between them.

    >>> import pygrank as pg
    >>> graph = next(pg.load_datasets_graph(["graph9"]))
    >>> ranks1 = pg.Normalize(pg.PageRank(0.85, tol=1.E-12, max_iters=1000)).rank(graph, {"A": 1})
    >>> ranks2 = pg.Normalize(pg.GenericGraphFilter([0.85**i for i in range(20)], tol=1.E-12)).rank(graph, {"A": 1})
    >>> print(pg.Mabs(ranks1)(ranks2))
    0.025585056372903574
    

    Related tests: tests.test_filters.test_custom_runs

    bug invalid 
    opened by maniospas 1
  • Potential issue in the GNN demonstrator example with tensorflow backend

    During the review process of the library's paper, a reviewer pointed out that the following error occurs in their local system with TensorFlow 2.9.2 and Python 3.10.6.

    TypeError: Sequential.call() got multiple values for argument 'training'
    

    This occurs when running the code of the APPNP example. The issue lies fully with the example and not with any library functionality; it will not motivate a hotfix.

    Investigate whether this issue is unique to this version of TensorFlow or whether TensorFlow has once again updated something that will break the example in all future versions. At the very least, this error should not occur in GitHub Actions.

    If this is not the case, investigate whether this issue is platform-dependent.

    bug 
    opened by maniospas 0
  • Possible sparse_dot_mkl integration?

    I saw that you have your own sparse matrix library called matvec which parallelizes sparse-dense multiplication. There is an existing Python library called sparse_dot which does the same but with scipy csr/csc matrices: https://github.com/flatironinstitute/sparse_dot.

    I benchmarked the two with a matrix of size

    <6357204x6357204 sparse matrix of type '<class 'numpy.float32'>'
    	with 3614017927 stored elements in Compressed Sparse Column format>
    

    With the 32-core/64-thread server CPU I'm testing on, the times for 10 matrix-vector multiplications on the right and left are:

    matvec
    right-vec  25.15
    left-vec  19.47
    
    sparse_dot csc
    right-vec  40.17
    left-vec  14.91
    
    sparse_dot csr
    right-vec  10.38
    left-vec  28.53
    

    The times look competitive. I'm not sure if matvec has some other advantages I'm not considering here, but sparse_dot working with the existing scipy types would be a huge benefit (for my use case, at least). sparse_dot does require installing the MKL library, and for giant matrices like the one above it requires the environment variable MKL_INTERFACE_LAYER=ILP64.

    opened by deklanw 4
  • Convergence management tracking

    Implement a high-level way of summarizing convergence analysis, for example to help measure running time and iterations when algorithms are wrapped by postprocessors (including iterative schemes). For example, a list of all convergence manager run outcomes could be obtained. Perhaps this could be achieved with some combination of dependent algorithm discovery and keeping convergence manager history on restart.
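
    Until then, a minimal workaround sketch is to record the base ranker's outcome after each run, using only the convergence attribute shown in the quickstart (the iterable of personalizations is hypothetical):

    history = []  # one convergence summary per algorithm run
    for personalization in personalizations:  # hypothetical iterable of seed dictionaries
        ranker.rank(graph, personalization)
        history.append(str(ranker.convergence))  # e.g. "19 iterations (0.0018 sec)"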

    Related tests: tests.test_filters.test_convergence_string_conversion

    feature 
    opened by maniospas 0
  • Check FairWalk correctness

    FairWalk does not achieve the same level of fairness (as high a pRule) as other fairness-aware heuristics during tests. This could arise from an erroneous implementation. If the implementation is found to be correct, separate its tests from those of the other heuristics to account for the lower expected improvement.

    Related tests: tests.test_fairness.test_fair_heuristics

    invalid 
    opened by maniospas 0