[WWW 2021] Source code for "Graph Contrastive Learning with Adaptive Augmentation"

Overview

GCA: source code for the WWW 2021 paper "Graph Contrastive Learning with Adaptive Augmentation".

For example, to run GCA-Degree on WikiCS, execute:

python train.py --device cuda:0 --dataset WikiCS --param local:wikics.json --drop_scheme degree
Comments
  • Questions about the evaluation.

    What is the purpose of testing and recording accuracy every 100 epochs during training? Isn't the pre-training process unsupervised? According to the DGI paper and code, DGI only runs gradient descent during training until the loss stops decreasing (early stopping), and then training ends.

    I don't think the InfoNCE loss value is necessarily inversely proportional to the linear-evaluation result, but controlling the training of graph contrastive learning by the linear-evaluation result amounts to fitting the dataset. So when should the training of graph contrastive learning end, and how can different graph contrastive learning methods be compared fairly?

    Also, the final accuracy reported by the code only evaluates a single random split of the dataset; it does not evaluate multiple splits (or multiple runs of logistic regression, as DGI does) and report the average accuracy. A sketch of such a protocol is given below.
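    A minimal sketch of such an averaged protocol (not the repository's code; the function name, the 20-run default, and the use of scikit-learn are assumptions): fit a logistic-regression probe on frozen embeddings over several random splits and report the mean and standard deviation of the accuracy.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    def evaluate_embeddings(z: np.ndarray, y: np.ndarray,
                            runs: int = 20, test_size: float = 0.9):
        """Average linear-probe accuracy over `runs` random train/test splits."""
        accs = []
        for seed in range(runs):
            z_tr, z_te, y_tr, y_te = train_test_split(
                z, y, test_size=test_size, random_state=seed)
            clf = LogisticRegression(max_iter=1000).fit(z_tr, y_tr)
            accs.append(clf.score(z_te, y_te))
        return float(np.mean(accs)), float(np.std(accs))
    ```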

    opened by nnnnnzy 0
  • Cannot reproduce the results of WikiCS dataset

    Hi, thanks for reading this!

    When I run the provided code and hyperparameters on WikiCS (nothing changed), I cannot reproduce the results reported in the paper, i.e., 78%+; I only reach about 32% accuracy.

    Do you have any hints? Could this be caused by the hyperparameter settings?

    Thanks!

    opened by PatPathead 6
  • PageRank centrality computed by this code may be meaningless

    The drop weights are computed by the following code in pGRACE/functional.py:

    ```python
    def pr_drop_weights(edge_index, aggr: str = 'sink', k: int = 10):
        pv = compute_pr(edge_index, k=k)
        pv_row = pv[edge_index[0]].to(torch.float32)
        pv_col = pv[edge_index[1]].to(torch.float32)
        s_row = torch.log(pv_row)
        s_col = torch.log(pv_col)
        if aggr == 'sink':
            s = s_col
        elif aggr == 'source':
            s = s_row
        elif aggr == 'mean':
            s = (s_col + s_row) * 0.5
        else:
            s = s_col
        weights = (s.max() - s) / (s.max() - s.mean())

        return weights
    ```

    However, while debugging the code I found that some elements of the array pv can be 0 (when training on WikiCS). After torch.log() is applied, the minimum of s_row and s_col becomes -inf, which drives all of the weights to 0. At that point the weights are meaningless. A possible guard is sketched below.
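    A minimal workaround sketch (not from the repository; the eps value is an assumption): clamp the PageRank scores away from zero before taking the logarithm, so no entry of s_row or s_col becomes -inf. This would replace the corresponding lines inside pr_drop_weights:

    ```python
    eps = 1e-8  # assumed small constant to keep log() finite
    pv_row = pv[edge_index[0]].to(torch.float32).clamp(min=eps)
    pv_col = pv[edge_index[1]].to(torch.float32).clamp(min=eps)
    s_row = torch.log(pv_row)
    s_col = torch.log(pv_col)
    ```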

    opened by c747903646 1
  • Question about code

    Sorry to bother you; I am confused about why edge_weights is divided by edge_weights.mean().

    ```python
    def drop_edge_weighted(edge_index, edge_weights, p: float, threshold: float = 1.):
        edge_weights = edge_weights / edge_weights.mean() * p
        edge_weights = edge_weights.where(edge_weights < threshold,
                                          torch.ones_like(edge_weights) * threshold)
        sel_mask = torch.bernoulli(1. - edge_weights).to(torch.bool)

        return edge_index[:, sel_mask]
    ```
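    One likely reading (an interpretation, not an answer from the authors): dividing by edge_weights.mean() and multiplying by p rescales the weights so that their average equals p, so p acts as the expected per-edge drop probability before the threshold cap. A tiny illustration with made-up numbers:

    ```python
    import torch

    # Made-up weights for illustration only.
    edge_weights = torch.tensor([0.5, 1.0, 1.5, 2.0])
    p = 0.3
    scaled = edge_weights / edge_weights.mean() * p
    print(scaled)         # tensor([0.1200, 0.2400, 0.3600, 0.4800])
    print(scaled.mean())  # tensor(0.3000)
    ```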

    opened by GeniusYx 0
Owner
Big Data and Multi-modal Computing Group, Center for Research on Intelligent Perception and Computing (CRIPAC)