Source code for PairNorm (ICLR 2020)

Overview

PairNorm

Official PyTorch source code for the PairNorm paper (ICLR 2020).
This code requires pytorch_geometric >= 1.3.2.

Usage

For SGC, we use the original PairNorm. Note that norm_scale is data-dependent; choose it from {0.1, 1, 10, 50}.

python main.py --data cora --model SGC --nlayer 40 --missing_rate 100 --norm_mode PN --norm_scale 10
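
For reference, the core PN operation is simple enough to sketch in a few lines of PyTorch. This is a minimal illustration of the centering-and-rescaling step described in the paper, not this repo's exact implementation; the function and variable names below are ours, and scale plays the role of norm_scale.

import torch

def pairnorm(x, scale=1.0, eps=1e-6):
    # x: node representations, shape (num_nodes, hidden_dim)
    x = x - x.mean(dim=0)  # center each feature column across all nodes
    # rescale so the mean squared row norm equals scale**2
    mean_rownorm = (eps + x.pow(2).sum(dim=1).mean()).sqrt()
    return scale * x / mean_rownorm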

For GCN or GAT, we use PairNorm-SI or PairNorm-SCS; a sketch of both variants follows the commands below.

python main.py --data cora --model DeepGCN --nlayer 10 --missing_rate 100 --norm_mode PN-SI --residual 0
python main.py --data cora --model DeepGAT --nlayer 10 --missing_rate 100 --norm_mode PN-SCS --residual 0 
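
A rough sketch of how the two variants differ, under the same assumptions as above (illustrative names, not this repo's exact code): PN-SI centers and then gives every node the same norm, while PN-SCS (scale-and-center) scales each node individually and subtracts the column mean computed on the unscaled input.

import torch

def pairnorm_si(x, scale=1.0, eps=1e-6):
    # scale individually: center, then give every row the same L2 norm
    x = x - x.mean(dim=0)
    rownorm = (eps + x.pow(2).sum(dim=1, keepdim=True)).sqrt()
    return scale * x / rownorm

def pairnorm_scs(x, scale=1.0, eps=1e-6):
    # scale-and-center: per-row scaling first, then subtract the column
    # mean computed on the input *before* scaling
    col_mean = x.mean(dim=0)
    rownorm = (eps + x.pow(2).sum(dim=1, keepdim=True)).sqrt()
    return scale * x / rownorm - col_mean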

Update: normalization and PN

We have found that PN works poorly with the symmetrically normalized adjacency matrix; the original experiments that align with the paper used the row-normalized adjacency matrix. We also found a small bug in the old experiments that used PN with GCN and GAT. The current version of PN should now also work well for GCN and GAT (not yet fully tested). Please start with PN before trying PN-SI and PN-SCS.

For GCN or GAT, now start with PN:

python main.py --data cora --model DeepGCN --nlayer 10 --missing_rate 100 --norm_mode PN --residual 0
python main.py --data cora --model DeepGAT --nlayer 10 --missing_rate 100 --norm_mode PN --residual 0 
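
For context on the normalization note above: row normalization computes D^-1 A, while the symmetric variant computes D^-1/2 A D^-1/2. A dense-tensor sketch of the difference (the repo itself may work with sparse matrices):

import torch

def row_normalize(adj):
    # D^{-1} A: each row of the adjacency sums to 1
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
    return adj / deg

def sym_normalize(adj):
    # D^{-1/2} A D^{-1/2}: the standard GCN normalization
    d_inv_sqrt = adj.sum(dim=1).clamp(min=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)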

Cite

If you use our code, please cite:

@inproceedings{
  zhao2020pairnorm,
  title={PairNorm: Tackling Oversmoothing in {GNN}s},
  author={Lingxiao Zhao and Leman Akoglu},
  booktitle={International Conference on Learning Representations},
  year={2020},
  url={https://openreview.net/forum?id=rkecl1rtwB}
}
Comments
  • Questions on getting the results shown in the table

    Hi Lingxiao,

    Thank you for the great code. I am new to this area. So I would like to apologize first, considering that my questions might be trivial.

    I wonder how to get the results shown in Table 2 of the paper. For example, for GCN-PN with 10 layers and a 100% missing rate on Cora, I ran the following command:

    python main.py --data cora --model DeepGCN --nlayer 10 --missing_rate 100 --norm_mode PN-SI --residual 0

    Instead of the accuracy of 0.731 reported in the paper, I obtained: Test set results: loss 1.084, acc 0.637. I found the same issue for other entries in the table.

    There might be something wrong in my experimental settings, and I would greatly appreciate it if you could help me. Thank you in advance.

    Best, Yongcheng

    opened by ycjing 5
  • What's the split of CoauthorCS?

    Dear authors, I'm confused about the split of CoauthorCS: "we randomly split all nodes into train/val/test as 3%/10%/87%". Does it mean that we sample nodes considering the label distribution, like the Cora setting of 20 nodes per class, or just pick nodes uniformly at random from the whole set? I'm looking forward to your reply. It would help me a lot!

    opened by StrayLu 0
  • The difference between PairNorm and BatchNorm

    Hi, really nice work! After reading your paper, I have a question about the difference between PairNorm and BatchNorm, especially under the inductive setting. Could you please provide some insights?

    Thank you!

    opened by guyguygang 1
  • What's the insight for scale-and-center mode?

    I found there is another mode, scale-and-center, which hasn't been mentioned in the paper.

    It looks a little bit weird to me that SCS mode first scales and then centers the representations, where the mean is computed based on the statistics before scaling. Could you explain the insight behind that?

    opened by KiddoZhu 1