GRACE

The official PyTorch implementation of deep GRAph Contrastive rEpresentation learning (GRACE).

For a comprehensive collection of resources on self-supervised learning for graphs, you may refer to this awesome list.

Dependencies

  • torch 1.4.0
  • torch-geometric 1.5.0
  • scikit-learn 0.21.3
  • numpy 1.18.1
  • pyyaml 5.3.1

Install all dependencies using

pip install -r requirements.txt

If you encounter problems installing torch-geometric, please refer to the installation manual on its official website.
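
If the companion packages (torch-scatter, torch-sparse, torch-cluster, torch-spline-conv) fail to import, they usually need to be installed from the wheel index matching your exact torch version. For example, for torch 1.4.0 (the wheel URL below follows the PyG installation docs for that release; adjust it for your CUDA version):

pip install torch-scatter torch-sparse torch-cluster torch-spline-conv -f https://pytorch-geometric.com/whl/torch-1.4.0.html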

Usage

Train and evaluate the model by executing

python train.py --dataset Cora

The --dataset argument should be one of [Cora, CiteSeer, PubMed, DBLP].
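
For reference, the core GRACE training step contrasts two stochastically augmented views of the same graph. Below is a minimal, self-contained sketch of that step, not the repository's exact code: the helper names drop_edges and mask_features are illustrative, the probabilities are placeholders (the per-dataset values typically live in the YAML config, hence the pyyaml dependency), and the loss is a simplified InfoNCE that omits the intra-view negatives used in the paper.

import torch
import torch.nn.functional as F

def drop_edges(edge_index: torch.Tensor, p: float) -> torch.Tensor:
    # Removing Edges (RE): randomly discard a fraction p of the edges.
    keep = torch.rand(edge_index.size(1), device=edge_index.device) >= p
    return edge_index[:, keep]

def mask_features(x: torch.Tensor, p: float) -> torch.Tensor:
    # Masking Features (MF): zero a random subset of feature dimensions,
    # shared across all nodes.
    mask = (torch.rand(x.size(1), device=x.device) >= p).float()
    return x * mask

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    # Simplified contrastive loss: the same node in the two views forms the
    # positive pair; every other node in the other view is a negative.
    z1, z2 = F.normalize(z1), F.normalize(z2)
    sim = torch.exp(z1 @ z2.t() / tau)  # [N, N] cross-view similarities
    return -torch.log(sim.diag() / sim.sum(dim=1)).mean()

def train_step(encoder, optimizer, x, edge_index):
    optimizer.zero_grad()
    # Two corrupted views of the same graph, encoded by a shared encoder.
    z1 = encoder(mask_features(x, 0.3), drop_edges(edge_index, 0.2))
    z2 = encoder(mask_features(x, 0.4), drop_edges(edge_index, 0.4))
    loss = 0.5 * (nt_xent(z1, z2) + nt_xent(z2, z1))
    loss.backward()
    optimizer.step()
    return loss.item()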

Citation

If you use our code in your own research, please cite the following article:

@inproceedings{Zhu:2020vf,
  author = {Zhu, Yanqiao and Xu, Yichen and Yu, Feng and Liu, Qiang and Wu, Shu and Wang, Liang},
  title = {{Deep Graph Contrastive Representation Learning}},
  booktitle = {ICML Workshop on Graph Representation Learning and Beyond},
  year = {2020},
  url = {http://arxiv.org/abs/2006.04131}
}
Comments
  • Scaling to larger datasets

    Thanks for your great work! I am trying to apply GRACE to larger datasets, but the training in your code is full-batch, which hinders scalability. Your paper mentions that eight GPUs were used; could you kindly share how you implemented that? As far as I know, PyG only supports multi-graph distributed computation. Any other suggestions would also be much appreciated. Looking forward to your reply!

    opened by sunisfighting · 3 comments
  • Question about dataset split

    Hi, is there a reason you use a random split instead of the public split? With the same hyperparameter settings, I cannot match the random-split accuracy on the Cora public split (80.5 vs. 83.3).

    opened by neolifer · 1 comment
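
    A possible way to run the comparison asked about above: PyG's Planetoid datasets ship the standard public train/val/test masks, so the frozen embeddings can be evaluated on them directly. A minimal sketch, not the repository's eval.py (the trained GRACE encoder is assumed to exist):

    import torch
    import torch.nn.functional as F
    from torch_geometric.datasets import Planetoid

    dataset = Planetoid(root='./datasets/Cora', name='Cora')
    data = dataset[0]

    # data.train_mask / data.test_mask hold the public Planetoid split,
    # as opposed to a random 90%/10% split.
    z = encoder(data.x, data.edge_index).detach()  # encoder: trained GRACE encoder (assumed)

    clf = torch.nn.Linear(z.size(1), dataset.num_classes)
    opt = torch.optim.Adam(clf.parameters(), lr=0.01, weight_decay=1e-5)
    for _ in range(300):
        opt.zero_grad()
        F.cross_entropy(clf(z[data.train_mask]), data.y[data.train_mask]).backward()
        opt.step()

    pred = clf(z[data.test_mask]).argmax(dim=1)
    acc = (pred == data.y[data.test_mask]).float().mean().item()
    print(f'public-split test accuracy: {acc:.4f}')
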
  • Your requirements are not fine

    I think you should state all the versions required to run your code. There is no version information for the packages below, so I cannot get your code to run: torch-scatter, torch-sparse, torch-cluster, torch-spline-conv.

    As you can see from the screenshot, torch-sparse is already installed, but there is still an import error. I think this is a version issue; kindly state the correct versions. I want to use your work in my literature review and paper.

    [screenshot: import error for torch-sparse]

    opened by mhadnanali · 1 comment
  • How to process the dataset?

    Hi. I am confused about how the dataset is processed; I could not find or understand this part of your project. Could you please push your dataset-processing code? I would appreciate it.

    opened by MrsYaoH · 1 comment
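
    One likely answer to the question above (an assumption, not confirmed by the authors): there is no hand-written processing step, because torch-geometric's dataset classes download and preprocess the citation graphs automatically on first use. A minimal sketch:

    import torch_geometric.transforms as T
    from torch_geometric.datasets import Planetoid

    # Planetoid covers Cora / CiteSeer / PubMed; the CitationFull class
    # provides DBLP. Raw files are downloaded and processed on first access.
    dataset = Planetoid(root='./datasets/Cora', name='Cora',
                        transform=T.NormalizeFeatures())
    print(dataset[0])  # Data(x=[2708, 1433], edge_index=[2, 10556], y=[2708], ...)
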
  • Unfair comparison with other models.

    In eval.py, the train/test split follows a 90%/10% scheme instead of the public split, while the baseline models (e.g., DGI) use the public split for evaluation.

    opened by hengruizhang98 · 1 comment
  • question about hidden dim

    In your implementation settings, e.g., for Cora, the hidden dimension is 128, but in your code you double it to 2 * out_channels, so the intermediate dimension is actually 256. Is this reasonable?
    import torch
    from torch import nn
    from torch_geometric.nn import GCNConv

    class Encoder(torch.nn.Module):
        def __init__(self, in_channels: int, out_channels: int, activation,
                     base_model=GCNConv, k: int = 2):
            super(Encoder, self).__init__()
            self.base_model = base_model

            assert k >= 2
            self.k = k
            # Intermediate layers are 2 * out_channels wide; only the final
            # layer projects back down to out_channels.
            self.conv = [base_model(in_channels, 2 * out_channels)]
            for _ in range(1, k - 1):
                self.conv.append(base_model(2 * out_channels, 2 * out_channels))
            self.conv.append(base_model(2 * out_channels, out_channels))
            self.conv = nn.ModuleList(self.conv)

            self.activation = activation

        def forward(self, x: torch.Tensor, edge_index: torch.Tensor):
            for i in range(self.k):
                x = self.activation(self.conv[i](x, edge_index))
            return x

    opened by downeykking · 0 comments
  • Question about PubMed performance

    Hi, in your paper GRACE achieves 86.7% on PubMed and DGI achieves 86.0%, yet the DGI paper itself reports only 76.8% on PubMed. I also notice that you follow the DGI setting in your experiments. How did you obtain an improvement of almost 10% for DGI?

    opened by ltz0120 · 1 comment
  • How to use graph augmentation when learning on large graphs

    Hello! Thanks for the code! I have a question about how to apply the paper's augmentations, RE (removing edges) and MF (masking features), on a large graph. Currently I randomly select a minibatch of nodes and generate a subgraph by sampling 15, 10, and 5 neighbors at the first, second, and third hops. However, I am not sure how to perform graph augmentation to obtain two views: should I generate two views of the full graph first and then sample subgraphs, or should I produce two views of each sampled subgraph? Could you please upload code for large-graph training? Thank you very much!

    opened by o0vv0o · 0 comments
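
    One plausible answer to the question above (an assumption, not the authors' confirmed approach): augment each sampled subgraph independently, so that both views cover the same node set and the contrastive positives align by index. A sketch reusing the illustrative drop_edges/mask_features helpers from the Usage section:

    def two_views_of_subgraph(x_sub, edge_index_sub):
        # Augment the sampled subgraph rather than the full graph, so both
        # views share the same nodes and positives line up by index.
        view1 = (mask_features(x_sub, 0.3), drop_edges(edge_index_sub, 0.2))
        view2 = (mask_features(x_sub, 0.4), drop_edges(edge_index_sub, 0.4))
        return view1, view2

    Augmenting the full graph first and then sampling is also possible, but the two samples must then be drawn over the same seed nodes so that the positive pairs stay consistent.
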
  • about citeseer result

    Hello, thank you very much for providing the source code, but I am having trouble reproducing the CiteSeer results: no matter how many times I run it, the best F1 score is only about 68%, which does not reach the results in the paper.

    opened by LBShinChan · 0 comments
Owner
Big Data and Multi-modal Computing Group, Center for Research on Intelligent Perception and Computing (CRIPAC)