Graph Contrastive Learning with Augmentations

PyTorch implementation for Graph Contrastive Learning with Augmentations [poster] [appendix]

Yuning You*, Tianlong Chen*, Yongduo Sui, Ting Chen, Zhangyang Wang, Yang Shen

In NeurIPS 2020.

Overview

In this repository, we develop contrastive learning with augmentations for GNN pre-training (GraphCL, Figure 1) to address the challenge of data heterogeneity in graphs. A systematic study is performed, as shown in Figure 2, to assess the performance of contrasting different augmentations on various types of datasets.

Experiments

Citation

If you use this code for your research, please cite our paper.

@inproceedings{You2020GraphCL,
 author = {You, Yuning and Chen, Tianlong and Sui, Yongduo and Chen, Ting and Wang, Zhangyang and Shen, Yang},
 booktitle = {Advances in Neural Information Processing Systems},
 editor = {H. Larochelle and M. Ranzato and R. Hadsell and M. F. Balcan and H. Lin},
 pages = {5812--5823},
 publisher = {Curran Associates, Inc.},
 title = {Graph Contrastive Learning with Augmentations},
 url = {https://proceedings.neurips.cc/paper/2020/file/3fe230348e9a12c13120749e3f9fa4cd-Paper.pdf},
 volume = {33},
 year = {2020}
}
Comments
  • Code


    Hi, in GraphCL/semisupervised_TU/pre-training/ I take NCI1 as an example, as you stated, and run CUDA_VISIBLE_DEVICES=$GPU_ID python main.py --dataset NCI1 --aug1 random2 --aug2 random2 --lr 0.001 --suffix 0. I get the error "No such file or directory: data/NCI1/NCI1/processed/data_deg+odeg100.pt". How can I get the file data_deg+odeg100.pt? Should I run some other code first to generate the .pt file?

    opened by wangzeyu135798 12
  • Questions about the transferlearning experiments


    Hi @yyou1996,

    I tried pretraining for the transfer learning experiments but found: 1) on the Bio benchmark, the loss and acc are always 0.0 and nan; 2) on the Chem dataset, the training process stopped at the 2nd epoch. Could you please check the code? Btw, the finetuning command worked well. Thanks a lot!


    P.S. My environment is cuda=10.0, and PyG is the same version, just built for cu100.

    opened by ha-lins 11
  • A bug in aug_random_edge


    Hi, Shen. Thanks for your efforts on this project. I noticed there might be a logical bug in the aug_random_edge function:

    for i in index_list:
        single_index_list.append(i)
        index_list.remove((i[1], i[0]))

    Removing items from a list while iterating over it can cause problems, because the list changes while the iteration index does not.

    This blog post illustrates the issue well: https://thispointer.com/python-remove-elements-from-a-list-while-iterating/

    Thanks for your help.
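    A minimal sketch of the safer pattern (iterating without mutating the list being iterated); the variable contents below are hypothetical:

    # Hypothetical example: keep each undirected edge once without calling
    # list.remove() inside the loop over the same list.
    index_list = [(0, 1), (1, 0), (1, 2), (2, 1)]   # both directions present

    single_index_list = []
    seen = set()
    for u, v in index_list:             # iterate the original list, never mutate it
        if (v, u) not in seen:          # keep only one direction of each edge
            single_index_list.append((u, v))
            seen.add((u, v))

    print(single_index_list)            # [(0, 1), (1, 2)]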

    opened by skydetailme 10
  • Results about transferLearning_PPI experiments


    Hi @yyou1996,

    Thanks for the amazing work and released code. They are really interesting.

    However, I find that it's hard to reproduce the results of Table 5 on the PPI dataset, where GraphCL gets 67.88 ± 0.85 ROC-AUC. I followed the instructions in the README to run the script cd ./bio; ./finetune.sh with two versions of PyG.

    The results and details from result.log of my reproduction experiments are as follows:

    ROC-AUC = 63.95 ± 1.05

    In this experiment, I had torch_geometric == 1.0.3 and torch == 1.0.1, and all the code is unchanged from the original.

    0 0.6488228587091137 0.6437029455712953
    1 0.6453625892746733 0.637906301310777
    2 0.6653588896536515 0.6491496412192561
    3 0.6712478235166237 0.6609652991518822
    4 0.6551347790238357 0.648166151211857
    5 0.6496970788807328 0.6285455615711776
    6 0.6377575006466477 0.6259403911657462
    7 0.6356370096455746 0.629560479239631
    8 0.6442826009871101 0.6378061247726416
    9 0.6421840662455275 0.6328729467542645
    

    ROC-AUC = 63.43 ± 0.86

    In this experiment, I had torch_geometric == 1.6.3 and torch == 1.7.1, and I changed the code a little following #14.

    0 0.6442808014337911 0.6321503395158243
    1 0.644493673207877 0.6267153949242326
    2 0.6453095524067848 0.6349091275301133
    3 0.6438850211437335 0.6420234098635493
    4 0.6411963663200242 0.6328391671224951
    5 0.66046061449877 0.6512654813973853
    6 0.6473159630814896 0.6400854279154624
    7 0.6308149754938717 0.6191641371576463
    8 0.6486396922222195 0.6367294139925784
    9 0.6357155854879776 0.6267330172938124
    

    The results of my experiments are calculated from the rightmost column of result.log, which holds the test_acc_hard_list values following Hu et al., ICLR 2020 [1]. I just want to make sure I haven't missed any important details needed to reproduce the results reported in the paper. Looking forward to your reply, thanks!

    [1] Strategies for Pre-training Graph Neural Networks. Weihua Hu, Bowen Liu et al. ICLR 2020. arXiv.

    opened by hyp1231 7
  • Questions about the Unsupervised_TU experiments


    Hi @yyou1996,

    Thanks for your efforts in this project. I have some questions as follows:

    1. I notice that the training & evaluation process of GraphCL is slightly different from InfoGraph, which evaluates every epoch; GraphCL doesn't save the model during training and evaluates every 10 epochs. I wonder if I could evaluate every epoch and choose the test acc. corresponding to the highest val acc. as the final test result. Or could I choose the highest test acc. directly, though testing so many times during training seems questionable?

    2. I tried evaluating every epoch on IMDB-B and the training process looked a bit odd to me. The validation acc. didn't improve with more training epochs, which probably means the training didn't benefit representation learning much, in my opinion. Could you please give some analysis or explanation?

    Epoch 1, Loss 420.73120760917664
    acc_val, acc = 0.72 0.709
    Epoch 2, Loss 407.65890550613403
    acc_val, acc = 0.701 0.696
    Epoch 3, Loss 395.5375609397888
    acc_val, acc = 0.673 0.679
    Epoch 4, Loss 383.75150847435
    acc_val, acc = 0.707 0.7020000000000001
    Epoch 5, Loss 371.76993465423584
    acc_val, acc = 0.6759999999999999 0.688
    Epoch 6, Loss 361.50709533691406
    acc_val, acc = 0.7020000000000001 0.701
    Epoch 7, Loss 352.6328740119934
    acc_val, acc = 0.707 0.705
    Epoch 8, Loss 340.92082047462463
    acc_val, acc = 0.72 0.711
    Epoch 9, Loss 334.9960980415344
    acc_val, acc = 0.694 0.688
    Epoch 10, Loss 325.21264362335205
    acc_val, acc = 0.7 0.7070000000000001
    Epoch 11, Loss 318.04585337638855
    acc_val, acc = 0.7089999999999999 0.7230000000000001
    Epoch 12, Loss 310.82839179039
    acc_val, acc = 0.7070000000000001 0.722
    Epoch 13, Loss 304.04966139793396
    acc_val, acc = 0.72 0.7250000000000001
    Epoch 14, Loss 299.41626381874084
    acc_val, acc = 0.744 0.715
    Epoch 15, Loss 293.7600963115692
    acc_val, acc = 0.6910000000000001 0.7220000000000001
    Epoch 16, Loss 288.6614990234375
    acc_val, acc = 0.703 0.7270000000000001
    Epoch 17, Loss 285.12760519981384
    acc_val, acc = 0.717 0.726
    Epoch 18, Loss 280.9172787666321
    acc_val, acc = 0.736 0.7289999999999999
    Epoch 19, Loss 277.5624203681946
    acc_val, acc = 0.71 0.715
    Epoch 20, Loss 274.18093502521515
    acc_val, acc = 0.7150000000000001 0.7289999999999999
    

    Btw, the final result for this seed should be 0.7289 (epoch 18) due to the highest val_acc (0.736), right? Thanks in advance!
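    For reference, a tiny illustrative sketch of the "test accuracy at the best validation epoch" selection rule discussed above; the numbers are hypothetical, not taken from the log:

    # (acc_val, acc_test) per evaluation epoch -- hypothetical values
    history = [(0.70, 0.69), (0.73, 0.71), (0.72, 0.72)]
    best_val, test_at_best_val = max(history, key=lambda p: p[0])
    print(best_val, test_at_best_val)   # 0.73 0.71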

    opened by ha-lins 7
  • bugs report


    Hi Yuning,

    There are some errors in the code.

    When I run the mask and edge augmentations in unsupervised_Cora_Citeseer using python -u execute.py --dataset citeseer --aug_type mask --drop_percent 0.20 --seed 39 --save_name cite_best_dgi.pkl --gpu 0, there is an unbound-variable error as follows:

    Traceback (most recent call last):
      File "execute.py", line 189, in <module>
        sparse, None, None, None, aug_type=aug_type)
      File "/anaconda3/envs/graph/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
        result = self.forward(*input, **kwargs)
      File "/GraphCL/unsupervised_Cora_Citeseer/models/dgi.py", line 51, in forward
        ret1 = self.disc(c_1, h_0, h_2, samp_bias1, samp_bias2)
    UnboundLocalError: local variable 'c_1' referenced before assignment

    Here is my environment: python==3.7, torch==1.5.0, torch-geometric==1.5.0. I hope you can look into this and fix it.

    opened by flyingtango 7
  • Question about the unsupervised_TU gsimclr.py


    Hi GraphCL team, thanks for your excellent work.

    I have some questions about the loss function in unsupervised_TU/gsimclr.py: in Eq. (3) of your paper, you state that the positive and negative pairs are composed of augmentations, but in the code I find that you use the original sample and one augmentation to form a positive pair. I don't understand the difference between these two choices and hope you could give me some suggestions or an explanation. Thank you in advance.
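    For context, a minimal self-contained sketch of an NT-Xent-style objective over two views z1 and z2 (illustrative only; whether each view is an augmentation or the original graph is exactly the choice being asked about):

    import torch
    import torch.nn.functional as F

    # Illustrative NT-Xent loss over two batches of graph embeddings z1, z2,
    # where the i-th rows of z1 and z2 form the positive pair.
    def nt_xent(z1, z2, temperature=0.5):
        n = z1.size(0)
        z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)    # (2n, d)
        sim = z @ z.t() / temperature                          # pairwise similarities
        sim.fill_diagonal_(float('-inf'))                      # drop self-similarities
        targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
        return F.cross_entropy(sim, targets)

    # usage: z1 = encoder(view1); z2 = encoder(view2); loss = nt_xent(z1, z2)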

    opened by scottshufe 6
  • Confused about unsupervised node classification on cora/citeseer


    Dear authors, it seems that you still follow a local-global contrastive manner when applying the model to node classification tasks (similar to 'mvgrl' but with different data augmentations). Is there any reason for not directly computing the similarities of node representations within a batch and applying the NT-XENT loss?

    opened by hengruizhang98 5
  • Question about Unsupervised_TU


    @yyou1996 Hi Yuning, may I ask about the experiment details? In the README you say $GPU_ID is the launched GPU ID and $AUGMENTATION can be random2, random3, or random4, which sample from {NodeDrop, Subgraph}, {NodeDrop, Subgraph, EdgePert}, and {NodeDrop, Subgraph, EdgePert, AttrMask}, respectively. So do the results in the paper use random2 to random4 repeatedly, with multiple runs and the mean & std reported?

    opened by junkangwu 4
  • How to speed up the pretraining on chem in Transferlearning experiment?


    Hi @yyou1996,

    I wonder how to speed up the pretraining on the chem data. How long did one pretraining epoch take for you? Which GPU did you use? I think the speed bottleneck is the CPU or I/O. I tried increasing num_workers but it seems to have no effect.

    opened by ha-lins 4
  • Bugs of subgraph augmentation


    Hi, when browsing the code, I found there might be bugs in the subgraph augmentation.

    The set union() method is not an in-place operation (it returns a new set), so idx_neigh will not be updated properly. In this situation, only the 1-hop subgraph of a random center node is generated; see the sketch after the examples below.

    Some examples:

    https://github.com/Shen-Lab/GraphCL/blob/e9e598d478d4a4bff94a3e95a078569c028f1d88/semisupervised_TU/pre-training/tu_dataset.py#L251

    https://github.com/Shen-Lab/GraphCL/blob/e9e598d478d4a4bff94a3e95a078569c028f1d88/unsupervised_TU/aug.py#L371

    https://github.com/Shen-Lab/GraphCL/blob/e9e598d478d4a4bff94a3e95a078569c028f1d88/transferLearning_MoleculeNet_PPI/bio/loader.py#L310

    https://github.com/Shen-Lab/GraphCL/blob/e9e598d478d4a4bff94a3e95a078569c028f1d88/transferLearning_MoleculeNet_PPI/chem/loader.py#L844
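    A minimal illustrative sketch of the fix (not the repository's exact code): reassign the result (or use |=) so the union actually grows idx_neigh. Here edge_index is assumed to be a 2 x E LongTensor of directed edges:

    import torch

    # Illustrative one-hop expansion of a node set; names follow the discussion above.
    edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                               [1, 0, 2, 1, 3, 2]])
    idx_sub = {0}

    idx_neigh = set()
    for node in idx_sub:
        neighbors = set(edge_index[1, edge_index[0] == node].tolist())
        # set.union() returns a new set; reassigning (or using |=) is what
        # actually updates idx_neigh, which is the bug described above.
        idx_neigh = idx_neigh.union(neighbors)

    print(idx_neigh)   # {1}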

    opened by hyp1231 4
  • ‘Early stopping!’ before the run has finished


    Traceback (most recent call last):
      File "execute.py", line 222, in <module>
        tot = tot.cuda()
      File "/HOME/scz1727/run/ENTER/envs/pytorchzq/lib/python3.8/site-packages/torch/cuda/__init__.py", line 216, in _lazy_init
        torch._C._cuda_init()
    RuntimeError: No CUDA GPUs are available

    Thank you, looking forward to your reply.

    opened by xiangxinchufa 1
  • A question about Semisupervised_TU in pre_training


    main.py 320  run_exp_benchmark()
    main.py 278  run_exp_benchmark  run_exp_lib(create_n_filter_triples(datasets, feat_strs, nets,
    main.py 183  run_exp_lib  cross_validation_with_val_set(
    train_eval.py 115  cross_validation_with_val_set  train_loss, _ = train(
    train_eval.py 234  train  out1 = model.forward_cl(data1)
    res_gcn.py 173  forward_cl  return self.forward_BNConvReLU_cl(x, edge_index, batch, xg)
    res_gcn.py 180  forward_BNConvReLU_cl  x_ = F.relu(conv(x_, edge_index))
    module.py 1186  _call_impl  return forward_call(*input, **kwargs)
    gcn_conv.py 103  forward  edge_index, norm = GCNConv.norm(
    gcn_conv.py 90  norm  deg = scatter_add(edge_weight, row, dim=0, dim_size=num_nodes)
    scatter.py 27  scatter_add  return scatter_sum(src, index, dim, out, dim_size)
    scatter.py 9  scatter_sum  index = broadcast(index, src, dim)
    utils.py 12  broadcast  src = src.expand(other.size())

    RuntimeError: expand(torch.cuda.LongTensor{[2, 12108]}, size=[12108]): the number of sizes provided (1) must be greater or equal to the number of dimensions in the tensor (2)

    There is an error in the scatter_add() function; it says the size of the src tensor does not match the size of the index tensor. I have tried many times but it still does not work. Could you suggest a solution to this problem? Thanks.
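    For context, a small illustrative sketch of the shapes torch_scatter's scatter_add expects here: both src (the edge weights) and index (the row indices) should be 1-D of length E, whereas the traceback above shows a 2 x E LongTensor being broadcast as the index:

    import torch
    from torch_scatter import scatter_add

    # Illustrative degree computation with the shapes scatter_add expects.
    edge_index = torch.tensor([[0, 1, 2],
                               [1, 2, 0]])
    row = edge_index[0]                      # shape [E], not the full [2, E] tensor
    edge_weight = torch.ones(row.size(0))    # shape [E]
    deg = scatter_add(edge_weight, row, dim=0, dim_size=3)
    print(deg)   # tensor([1., 1., 1.])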

    opened by ytpjh 1
  • graph classification


    Thank you for your work, but I have some questions that need your reply. What I see in the paper is graph classification, but what the code does is node classification. May I ask where the code for graph classification is?

    opened by renhl717445 1
  • Unsupervised learning with self created dataset


    I have tried my own dataset with your unsupervised learning framework; its number of edges exceeds 10^6. When I load the data, there is an assertion error.


    loading GCC 7.3.1 based on SCL Developer Toolset 7


    loading CUDA 10.1 with cuDNN / NCCL based on cntr cuda:10.1-cudnn7-devel-centos7

    /pytorch/aten/src/THC/THCTensorIndex.cu:361: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 1, SrcDim = 1, IdxDim = -2, IndexIsMajor = true]: block: [21,0,0], thread: [6,0,0] Assertion srcIndex < srcSelectDimSize failed.
    (the same assertion is repeated for several other blocks and threads)
    Processing... Done! 5264 1

    lr: 0.01 num_features: 1 hidden_dim: 32 num_gc_layers: 4

    dataset_num_classes: 7

    Traceback (most recent call last):
      File "gsimclr.py", line 189, in <module>
        emb, y = model.encoder.get_embeddings(dataloader_eval)
      File "/home/u8411596/GraphCL-master/unsupervised_TU/gin.py", line 83, in get_embeddings
        x, _ = self.forward(x, edge_index, batch)
      File "/home/u8411596/GraphCL-master/unsupervised_TU/gin.py", line 56, in forward
        x = F.relu(self.convs[i](x, edge_index))
      File "/home/u8411596/.conda/envs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/u8411596/.conda/envs/py36/lib/python3.6/site-packages/torch_geometric/nn/conv/gin_conv.py", line 67, in forward
        out += (1 + self.eps) * x_r
    RuntimeError: CUDA error: device-side assert triggered

    I am wondering whether the learning framework has a limit on the data size, and I would appreciate some suggestions on how to solve this problem. Thank you!
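    A small CPU-side sanity check (illustrative, not part of the repository) that often localizes this kind of device-side assert, which typically means an index in edge_index points past the number of node feature rows:

    import torch

    # Hypothetical helper: verify that every edge endpoint is a valid row of x.
    def check_graph(x, edge_index):
        num_nodes = x.size(0)
        assert edge_index.min() >= 0, "negative node index in edge_index"
        assert edge_index.max() < num_nodes, (
            f"edge_index refers to node {int(edge_index.max())}, "
            f"but x has only {num_nodes} rows"
        )

    # usage: run check_graph(data.x, data.edge_index) on the CPU for every graph
    # before moving batches to the GPU.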

    opened by LA11131110128 1
  • Question about data augmentation


    Question about data augmentation: when the subgraph augmentation is performed, essentially only the edge_index is modified, which means that message passing only aggregates over the subgraph. What puzzles me is that when computing the graph-level features, the pooling function involves the features of all nodes of the graph (selected subgraph nodes and unselected nodes). Does this make sense? Shouldn't the pooling at this point consider only the subgraph?
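    A tiny illustrative example of the point above: global pooling in PyG aggregates every row of x that shares a batch index, including nodes that the augmentation left edge-less (names and values below are hypothetical):

    import torch
    from torch_geometric.nn import global_mean_pool

    x = torch.tensor([[1.0], [2.0], [3.0], [4.0]])   # 4 nodes, 1 feature each
    batch = torch.zeros(4, dtype=torch.long)          # all nodes belong to graph 0
    # The pooled feature averages all four nodes, regardless of which edges
    # survived the subgraph augmentation.
    print(global_mean_pool(x, batch))                 # tensor([[2.5000]])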

    opened by Struggle-Forever 1
Owner

Shen Lab at Texas A&M University