Implementation of Self-supervised Graph-level Representation Learning with Local and Global Structure (ICML 2021).

Overview

Self-supervised Graph-level Representation Learning with Local and Global Structure

Introduction

This project is a PyTorch implementation of "Self-supervised Graph-level Representation Learning with Local and Global Structure", accepted as a Short Talk at ICML 2021. This repository provides the pre-training and fine-tuning code as well as the model pre-trained on the chemistry domain; a more complete version covering the biology domain will be released on the TorchDrug platform developed by the MilaGraph group. We would also like to acknowledge the excellent work of Pretrain-GNNs, which lays a solid foundation for ours.

More details of this work can be found in our paper: [Paper (arXiv)].

Prerequisites

We developed this project with Python 3.6 and the following Python packages:

PyTorch                   1.1.0
torch-cluster             1.4.5
torch-geometric           1.0.3
torch-scatter             1.4.0
torch-sparse              0.4.4
torch-spline-conv         1.0.6
rdkit                     2019.03.1

Note: in our setup, these packages install and work together under CUDA 9.0 and cuDNN 7.0.5.
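For reference, a minimal installation sketch (assuming pip wheels matching your CUDA version are available; rdkit is installed from its conda channel):

pip install torch==1.1.0
pip install torch-scatter==1.4.0 torch-sparse==0.4.4 torch-cluster==1.4.5 torch-spline-conv==1.0.6
pip install torch-geometric==1.0.3
conda install -c rdkit rdkit=2019.03.1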

Dataset Preparation

In the root directory of this project, create a folder for storing datasets:

mkdir dataset

The pre-training and fine-tuning datasets for the chemistry domain can be downloaded from the project page of Pretrain-GNNs.
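At the time of writing, Pretrain-GNNs distributes the chemistry data as a zip archive; a sketch of the download step (check the Pretrain-GNNs README for the current link, and make sure the extracted contents end up under dataset/):

wget http://snap.stanford.edu/gnn-pretrain/data/chem_dataset.zip
unzip chem_dataset.zip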

Pre-training

To pre-train with the proposed GraphLoG method, simply run:

python pretrain_graphlog.py --output_model_file $pre-trained_model$
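For example, to write the pre-trained weights to models/graphlog (the file name here is only illustrative):

python pretrain_graphlog.py --output_model_file models/graphlog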

Fine-tuning

To fine-tune on a downstream dataset, simply run (five independent runs will be performed):

python finetune.py --input_model_file $pre-trained_model$ \
                   --dataset $downstream_dataset$
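For example, a possible invocation for the BBBP dataset from MoleculeNet, one of the downstream benchmarks in the paper (the lowercase identifier follows the Pretrain-GNNs convention):

python finetune.py --input_model_file models/graphlog \
                   --dataset bbbp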

Pretrained Model

We provide the GIN model pre-trained with GraphLoG under ./models/.
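A minimal loading sketch, assuming the checkpoint is a plain state_dict and the repository follows the Pretrain-GNNs GNN encoder interface (the file name graphlog.pth is hypothetical; use the actual file under ./models/):

import torch
from model import GNN  # GIN encoder; assumes a Pretrain-GNNs-style model.py

# Hypothetical checkpoint name; replace with the actual file under ./models/.
state_dict = torch.load('./models/graphlog.pth', map_location='cpu')

# 5 layers and 300-dim embeddings are the standard Pretrain-GNNs configuration;
# adjust if the released checkpoint differs.
model = GNN(num_layer=5, emb_dim=300, gnn_type='gin')
model.load_state_dict(state_dict)
model.eval()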

Citation

If this work helps your research, please kindly cite the following paper (to be updated once the ICML version is published).

@article{xu2021self-supervised,
  title={Self-supervised Graph-level Representation Learning with Local and Global Structure},
  author={Xu, Minghao and Wang, Hang and Ni, Bingbing and Guo, Hongyu and Tang, Jian},
  journal={arXiv preprint arXiv:2106.04113},
  year={2021}
}
