Simple reference implementation of GraphSAGE.

Overview

Reference PyTorch GraphSAGE Implementation

Author: William L. Hamilton

Basic reference PyTorch implementation of GraphSAGE. This reference implementation is not as fast as the TensorFlow version for large graphs, but the code is easier to read and it is faster on small-graph benchmarks. The code is also intended to be simpler, more extensible, and easier to work with than the TensorFlow version.

Currently, only supervised versions of GraphSAGE-mean and GraphSAGE-GCN are implemented.

Requirements

PyTorch >0.2 is required.

Running examples

Execute python -m graphsage.model to run the Cora example. It assumes that CUDA is not being used; to enable CUDA, modify the run functions in model.py accordingly. There is also a PubMed example (called via the run_pubmed function in model.py).

Comments
  • Cannot run the sample without a small fix in encoders.py

    Hi William,

    I cloned the code and tried run_cora. At first the interpreter raised a dimension error, until I changed line 39 of encoders.py from:

    neigh_feats = self.aggregator.forward(nodes, [self.adj_lists[node] for node in nodes],

    to:

    neigh_feats = self.aggregator.forward(nodes, [self.adj_lists[int(node)] for node in nodes],

    Then it works fine. I think it is a type issue. Hope this is helpful for others.

    opened by fs302 1
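    The int(node) fix above can be illustrated with a minimal stand-alone sketch. A 0-d torch.Tensor hashes by object identity rather than by value, so using it directly as a key into a dict keyed by plain Python ints misses every entry; FakeTensor below is a hypothetical stand-in that mimics that hashing behavior without requiring PyTorch.

    ```python
    # Illustrative sketch (not the repo's code): identity-hashed keys miss
    # dict entries keyed by plain ints, which is why int(node) is needed.
    class FakeTensor:
        __hash__ = object.__hash__  # identity-based hash, like torch.Tensor
        def __init__(self, value):
            self.value = value
        def __int__(self):
            return self.value

    adj_lists = {0: {1, 2}, 1: {0}}
    node = FakeTensor(0)

    assert node not in adj_lists    # raw tensor-like key: lookup misses
    assert int(node) in adj_lists   # the int(node) cast fixes the lookup
    ```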
  • Question about calling agg1 and enc1 twice

    I wonder why we need to call enc2 in self.features in both the encoder and the aggregator. Why not just use the raw features, since agg2 already aggregates features from neighbors?

    opened by sakhar 0
  • A question about the learning rate

    Hello!

    I have a question about the learning rate. As stated in the appendix of the GraphSAGE paper, and as adopted by many other works, the learning rate is usually set to around 1e-2, and the input features are usually normalized.

    However, in your work, the learning rate is set to 0.7, which is surprisingly high, and the input features are not normalized. When I reset the learning rate to a common value and train on normalized features, the model converges to extremely bad performance.

    This issue confuses me a lot. Could you help explain it a bit?

    opened by johannwyh 0
  • Potential typo - enc.num_sample[s]

    In model.py, when setting the number of samples for the encoders, the attribute should be enc1.num_sample rather than enc1.num_samples.

    opened by kinkunchan 0
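    A minimal sketch of why such a typo goes unnoticed: Python silently sets an unknown attribute, so assigning to a misspelled name creates a brand-new attribute while the one the sampling code actually reads keeps its default. (The Encoder class and default value here are illustrative, not the repo's actual code.)

    ```python
    # Hypothetical minimal Encoder: only num_sample is read during sampling.
    class Encoder:
        def __init__(self):
            self.num_sample = 10   # attribute actually used during sampling

    enc1 = Encoder()
    enc1.num_samples = 5           # typo: silently creates a NEW attribute
    assert enc1.num_sample == 10   # the real setting is unchanged
    ```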
  • Can the code for inductive learning on the PPI dataset be added to the PyTorch implementation?

    I'm following the PyTorch version of the GraphSAGE code, and it looks like the implementation is for a transductive setting (the Cora and PubMed datasets). It would be really helpful if an implementation for the PPI dataset (inductive setting, graph-level learning) could be added to the PyTorch version. That way, one could train on a set of graphs and then test on a completely new set of graphs of different sizes, just like the PPI dataset.

    opened by Sowmya-R-Krishnan 0
  • Is this for inductive learning?

    Hello. It seems that GraphSAGE incorporates feature information from neighbors even when a neighbor belongs to the test data while building the model. I think this is not allowed in inductive learning. Could you let me know whether this code is for inductive learning?

    opened by smayru 3
  • Does this PyTorch version support batch-level learning?

    Hi Dear Author,

    It seems the run functions currently target transductive (node-level) classification. To implement inductive (graph-level) learning with a batched dataloader, we may need a batch dimension for the adjacency and feature matrices: if their original shapes are (N, N) and (N, F), with batching they become (B, N, N) and (B, N, F), where B is the batch size, F the feature dimension, and N the number of nodes.

    So, to support batch-level computation, do any parts of Encoder and Aggregator need to change?

    opened by HarrialX 0
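    The batched shapes described in the comment above can be sketched with plain NumPy, independent of the repo's Encoder/Aggregator classes. This is one batched mean-aggregation step under the assumed (B, N, N) / (B, N, F) layout, not the repo's actual API:

    ```python
    import numpy as np

    B, N, F = 4, 5, 3                   # batch size, num nodes, feature dim
    rng = np.random.default_rng(0)
    adj = rng.random((B, N, N))         # batched adjacency, shape (B, N, N)
    feats = rng.random((B, N, F))       # batched features, shape (B, N, F)

    # Row-normalize each adjacency so every node averages over its neighbors.
    deg = adj.sum(axis=-1, keepdims=True)
    adj_norm = adj / np.clip(deg, 1e-12, None)

    # One batched mean-aggregation step: (B, N, N) x (B, N, F) -> (B, N, F)
    out = np.einsum('bij,bjf->bif', adj_norm, feats)
    assert out.shape == (B, N, F)
    ```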
Owner
William L Hamilton
Assistant Professor at McGill University and Mila, working on machine learning, NLP, and network analysis.