Code for Subgraph Federated Learning with Missing Neighbor Generation (NeurIPS 2021)

Overview

To run the code

  1. Unzip the package to your local directory;
  2. Run 'pip install -r requirements.txt' to install the required packages;
  3. Open file ~/nips_code/src/utils/config.py;
  4. Replace "change_to_your_current_path" on line 2 of config.py (root_path = "change_to_your_current_path") with your current path (see the sketch after this list);
    • You can change hyper-parameters in config.py according to different testing scenarios;
  5. Run the whole pipeline with 'python ~/nips_code/src/system_test.py'.
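
For reference, the edit in step 4 amounts to pointing root_path at your local copy of the repository. A minimal sketch (the path below is only an example; use your own):

    # src/utils/config.py, line 2
    root_path = "/home/<your_user>/nips_code/"  # was: root_path = "change_to_your_current_path"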

If you find this work useful for your research, please cite:

@inproceedings{zhang2021subgraph,
  title={Subgraph federated learning with missing neighbor generation},
  author={Zhang, Ke and Yang, Carl and Li, Xiaoxiao and Sun, Lichao and Yiu, Siu Ming},
  booktitle={Thirty-Fifth Conference on Neural Information Processing Systems},
  year={2021}
}

Comments
  • Could you share a more detailed hyperparameter setting?

    Thanks for your great work! When I run the code with the hyperparameters provided in your paper, e.g.:

    # config.py

    dataset = "cora"
    num_owners = 3
    delta = 20

    num_samples = [5, 5]
    batch_size = 64
    latent_dim = 128
    steps = 10
    epochs_local = 1
    lr = 0.001
    weight_decay = 1e-4
    hidden = 32
    dropout = 0.5

    gen_epochs = 20
    num_pred = 5
    hidden_portion = 0.278

    epoch_classifier = 50
    classifier_layer_sizes = [64, 32]

    I get the results:

    FedSage+ end!
    
    1/9 [==>...........................] - ETA: 3s - loss: 0.7960 - acc: 0.8281
    3/9 [=========>....................] - ETA: 0s - loss: 0.7997 - acc: 0.8316
    7/9 [======================>.......] - ETA: 0s - loss: 0.8128 - acc: 0.8331
    9/9 [==============================] - 1s 18ms/step - loss: 0.8080 - acc: 0.8358
    
    1/9 [==>...........................] - ETA: 3s - loss: 0.1631 - acc: 0.9688
    4/9 [============>.................] - ETA: 0s - loss: 0.2067 - acc: 0.9469
    7/9 [======================>.......] - ETA: 0s - loss: 0.2316 - acc: 0.9389
    9/9 [==============================] - 1s 18ms/step - loss: 0.2372 - acc: 0.9366
    
    Global model
    
    Global Test Set Metrics:
    	loss: 0.2478
    	acc: 0.9317
    

    I think the test acc for FedSage+ is 0.8358 and the test acc for GlobSage is 0.9317 (correct me if I misunderstand), which leaves a gap to the results reported in your paper. Therefore, could you share a more detailed hyperparameter setting to reproduce the reported results (e.g., epochs_local, classifier_layer_sizes, gen_epochs, etc.)?

    opened by rayrayraykk 3
  • Question about the classifiers

    I have some questions about the use of classifiers:

    The classifier used to train the neighbor generator in models.py is self.classifier=GNN(nfeat=feat_shape, nhid=config.hidden, nclass=n_classes, dropout=config.dropout), but the classifier used for the task in classifiers.py is graphsage_model=GraphSAGE(layer_sizes=config.classifier_layer_sizes, generator=fillG_node_gen, n_samples=config.num_samples).

    I would like to know why you do not use the same classifier (e.g., GraphSAGE) in both places, rather than training a new classifier on the expanded graph.
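
    For clarity, the two instantiations quoted above, laid out side by side (identifiers copied verbatim from the issue; surrounding code omitted):

    # models.py: classifier used while training the neighbor generator
    self.classifier = GNN(nfeat=feat_shape, nhid=config.hidden,
                          nclass=n_classes, dropout=config.dropout)

    # classifiers.py: classifier used for the downstream node classification task
    graphsage_model = GraphSAGE(layer_sizes=config.classifier_layer_sizes,
                                generator=fillG_node_gen,
                                n_samples=config.num_samples)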

    opened by AnonymousGeeek 0
  • Packages for CUDA 10.1 environment

    Since the requirements need CUDA >= 11.0, I tried the code on CUDA 10.1 with the package versions below:

    pip install torch==1.7.1
    pip install tensorflow==2.3.2
    pip install dill==0.3.3
    pip install louvain==0.7.0
    pip install pandas==1.2.1
    pip install scikit_learn==1.0
    pip install stellargraph==1.2.1
    pip install python-louvain
    pip install chardet
    pip install Keras==2.4.3
    

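    To sanity-check such a CUDA 10.1 setup, a minimal snippet using standard PyTorch/TensorFlow calls (assuming a GPU machine):

    # verify that the installed builds detect the GPU
    import torch
    import tensorflow as tf

    print("torch", torch.__version__, "CUDA available:", torch.cuda.is_available())
    print("tensorflow", tf.__version__, "GPUs:", tf.config.list_physical_devices("GPU"))
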
    Hope it helps.

    opened by youngfish42 1