GCA
Source code for Graph Contrastive Learning with Adaptive Augmentation (WWW 2021).
For example, to run GCA-Degree under WikiCS, execute:
```
python train.py --device cuda:0 --dataset WikiCS --param local:wikics.json --drop_scheme degree
```
What is the purpose of testing and recording accuracy every 100 epochs during training? Isn't pre-training supposed to be unsupervised? According to the DGI paper and its code implementation, DGI only runs gradient descent until the loss stops decreasing (early stopping), and then training ends.
I don't think the InfoNCE loss is necessarily inversely proportional to linear-evaluation accuracy, but steering the training of graph contrastive learning by the linear-evaluation result amounts to fitting the dataset. So when should training of graph contrastive learning stop? And how can different graph contrastive learning methods be compared fairly?
Also, the final accuracy in the code implementation is computed on a single random split of the dataset; it does not average over multiple splits (or multiple runs of logistic regression, as DGI does), as sketched after this paragraph.
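For reference, a minimal sketch of what averaging over multiple random splits could look like. The embedding matrix `z` and label vector `y` are placeholders, and the split ratio and number of splits are illustrative assumptions, not the repository's actual protocol:
```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import normalize

def evaluate_multi_split(z, y, n_splits=20, test_size=0.9, seed=0):
    """Average linear-evaluation accuracy over several random splits.

    z: (num_nodes, dim) frozen node embeddings; y: (num_nodes,) labels.
    The 10/90 train/test ratio is an assumption for illustration.
    """
    z = normalize(z)  # L2-normalize embeddings, common in linear evaluation
    accs = []
    for i in range(n_splits):
        z_tr, z_te, y_tr, y_te = train_test_split(
            z, y, test_size=test_size, random_state=seed + i, stratify=y)
        clf = LogisticRegression(max_iter=1000).fit(z_tr, y_tr)
        accs.append(clf.score(z_te, y_te))
    return float(np.mean(accs)), float(np.std(accs))
```
Reporting mean and standard deviation over such splits would make comparisons across methods less sensitive to one lucky split.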
Hi, thanks for reading this!
When I run the provided code with the provided hyperparameters on WikiCS (nothing changed), I cannot reproduce the results reported in the paper, i.e., 78%+; I only reach about 32% accuracy.
Do you have any hints? Could this be caused by the hyperparameter settings?
Thanks!
The drop weights are computed by the following code in pGRACE/functional.py:
```python
def pr_drop_weights(edge_index, aggr: str = 'sink', k: int = 10):
    pv = compute_pr(edge_index, k=k)
    pv_row = pv[edge_index[0]].to(torch.float32)
    pv_col = pv[edge_index[1]].to(torch.float32)
    s_row = torch.log(pv_row)
    s_col = torch.log(pv_col)
    if aggr == 'sink':
        s = s_col
    elif aggr == 'source':
        s = s_row
    elif aggr == 'mean':
        s = (s_col + s_row) * 0.5
    else:
        s = s_col
    weights = (s.max() - s) / (s.max() - s.mean())

    return weights
```
However, while debugging I found that some elements of the array pv can be 0 (when training on WikiCS). After torch.log() is applied, the minimum values of s_row and s_col become -inf, which makes all of the weights 0. At that point the weights are meaningless.
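One possible workaround (my own assumption, not a fix from the authors) is to clamp the PageRank values away from zero before taking the logarithm, so that zero entries no longer produce -inf:
```python
import torch

def safe_log(pv: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Clamp values away from zero before the log so that zero PageRank
    scores no longer produce -inf. eps is an illustrative choice."""
    return torch.log(pv.clamp(min=eps))

# Inside pr_drop_weights, s_row and s_col could then be computed as:
# s_row = safe_log(pv_row)
# s_col = safe_log(pv_col)
```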
Sorry to bother you; I am confused about why edge_weights is divided by edge_weights.mean():
```python
def drop_edge_weighted(edge_index, edge_weights, p: float, threshold: float = 1.):
    edge_weights = edge_weights / edge_weights.mean() * p
    edge_weights = edge_weights.where(edge_weights < threshold, torch.ones_like(edge_weights) * threshold)
    sel_mask = torch.bernoulli(1. - edge_weights).to(torch.bool)
    return edge_index[:, sel_mask]
```
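As far as I can tell (my own reading, not an authoritative answer), dividing by the mean rescales the weights so that their average equals p, which makes p the expected fraction of dropped edges while keeping the relative importance of edges. A small numeric check under that assumption:
```python
import torch

# Illustrative check: after w / w.mean() * p, the mean drop probability is p
# (before threshold clipping). The weights here are made-up values.
w = torch.tensor([0.2, 0.4, 0.6, 0.8])
p = 0.3
scaled = w / w.mean() * p
print(scaled)         # tensor([0.1200, 0.2400, 0.3600, 0.4800])
print(scaled.mean())  # tensor(0.3000): on average, 30% of edges are dropped
```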