An unofficial PyTorch implementation of a federated learning algorithm, FedAvg.

Overview

Federated Averaging (FedAvg) in PyTorch

An unofficial PyTorch implementation of the Federated Averaging (FedAvg) algorithm proposed in the paper Communication-Efficient Learning of Deep Networks from Decentralized Data. (Implemented in Python 3.9.2.)
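
At its core, each round of FedAvg has the server broadcast the current global model to a randomly sampled fraction C of the K clients; every sampled client then runs E local epochs of SGD on its own data, and the server replaces the global weights with the data-size-weighted average of the returned client weights. Below is a minimal sketch of that aggregation step; fedavg_aggregate is a hypothetical helper for illustration, not this repository's actual server.py API.

```python
from collections import OrderedDict

import torch


def fedavg_aggregate(client_states, client_sizes):
    """Weighted average of client weights: w <- sum_k (n_k / n) * w_k."""
    total = float(sum(client_sizes))
    averaged = OrderedDict()
    for key in client_states[0]:
        averaged[key] = torch.stack(
            [(n / total) * state[key].float()
             for state, n in zip(client_states, client_sizes)]
        ).sum(dim=0)
    return averaged
```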

Implementation points

  • Implements the models ('2NN' and 'CNN' mentioned in the paper) with exactly the parameter counts reported in the paper (see the sketch after this list).
    • 2NN: TwoNN class in models.py; 199,210 parameters
    • CNN: CNN class in models.py; 1,663,370 parameters
  • Implements the paper's pathological non-IID data split (also sketched below).
    • When the MNIST dataset is used, each client holds samples of at least two digits.
  • Runs client updates and client evaluation in parallel via multiprocessing.
  • Supports TensorBoard for log tracking (see the Run section below).
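
As a sanity check on the first point, the paper's 2NN is a 784-200-200-10 MLP with ReLU activations, giving (784·200 + 200) + (200·200 + 200) + (200·10 + 10) = 199,210 parameters. The sketch below verifies that count and illustrates the paper's pathological non-IID split for MNIST (sort by digit label, cut into 200 shards of 300 examples, hand each of 100 clients two shards). It is a standalone illustration under those assumptions and may differ in detail from the TwoNN class and the split code in this repository.

```python
import numpy as np
import torch.nn as nn

# The paper's 2NN: two hidden layers of 200 units each, ReLU activations.
two_nn = nn.Sequential(
    nn.Flatten(),
    nn.Linear(784, 200), nn.ReLU(),
    nn.Linear(200, 200), nn.ReLU(),
    nn.Linear(200, 10),
)
assert sum(p.numel() for p in two_nn.parameters()) == 199_210


def pathological_non_iid_split(labels, num_clients=100, shards_per_client=2):
    """Sort by label, cut into shards, and give each client a few shards,
    so most clients hold examples of only two digits."""
    num_shards = num_clients * shards_per_client   # 200 shards for MNIST
    shard_size = len(labels) // num_shards         # 300 examples per shard
    sorted_indices = np.argsort(labels)
    shard_order = np.random.permutation(num_shards)
    return [
        np.concatenate([
            sorted_indices[s * shard_size:(s + 1) * shard_size]
            for s in shard_order[c * shards_per_client:(c + 1) * shards_per_client]
        ])
        for c in range(num_clients)
    ]
```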

Requirements

  • See requirements.txt

Configurations

  • See config.yaml

Run

  • python3 main.py
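  • If TensorBoard logging is enabled, the resulting curves can be inspected with tensorboard --logdir <log directory>, substituting whatever log path config.yaml specifies.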

Results

MNIST

  • Number of clients: 100 (K = 100)
  • Fraction of sampled clients: 0.1 (C = 0.1)
  • Number of rounds: 500 (R = 500)
  • Number of local epochs: 10 (E = 10)
  • Batch size: 10 (B = 10)
  • Optimizer: torch.optim.SGD
  • Criterion: torch.nn.CrossEntropyLoss
  • Learning rate: 0.01
  • Momentum: 0.9
  • Initialization: Xavier
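
For concreteness, one client's local update under the settings above (E = 10 epochs over batches of size B = 10, SGD with learning rate 0.01 and momentum 0.9, cross-entropy loss) can be sketched as below. This is an illustrative loop, not the repository's actual client code, and whether Xavier initialization here means the uniform or normal variant is an assumption.

```python
import torch
import torch.nn as nn


def xavier_init(module):
    # Xavier initialization for weight tensors (normal variant assumed);
    # the global model would be initialized once via model.apply(xavier_init).
    if isinstance(module, (nn.Linear, nn.Conv2d)):
        nn.init.xavier_normal_(module.weight)


def local_update(model, dataloader, epochs=10, lr=0.01, momentum=0.9):
    """Run E local epochs of SGD on one client's data; return the weights."""
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=momentum)
    model.train()
    for _ in range(epochs):
        for inputs, targets in dataloader:  # DataLoader built with batch_size=10
            optimizer.zero_grad()
            loss = criterion(model(inputs), targets)
            loss.backward()
            optimizer.step()
    return model.state_dict()
```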

Table 1. Final accuracy and the best accuracy

| Model | Final Accuracy (IID) (Round) | Best Accuracy (IID) (Round) | Final Accuracy (non-IID) (Round) | Best Accuracy (non-IID) (Round) |
|-------|------------------------------|-----------------------------|----------------------------------|---------------------------------|
| 2NN   | 98.38% (500)                 | 98.45% (483)                | 97.50% (500)                     | 97.65% (475)                    |
| CNN   | 99.31% (500)                 | 99.34% (197)                | 98.73% (500)                     | 99.28% (493)                    |

Table 2. Final loss and the least loss

| Model | Final Loss (IID) (Round) | Least Loss (IID) (Round) | Final Loss (non-IID) (Round) | Least Loss (non-IID) (Round) |
|-------|--------------------------|--------------------------|------------------------------|------------------------------|
| 2NN   | 0.09296 (500)            | 0.06956 (107)            | 0.09075 (500)                | 0.08257 (475)                |
| CNN   | 0.04781 (500)            | 0.02497 (86)             | 0.04533 (500)                | 0.02413 (366)                |

Figure 1. MNIST 2NN model accuracy (IID: top / non-IID: bottom)

Figure 2. MNIST CNN model accuracy (IID: top / non-IID: bottom)

TODO

  • Run the CIFAR experiment (CIFAR-10 dataset) and the large-scale LSTM experiment (Shakespeare dataset)
  • Add learning rate scheduling (see the sketch after this list)
  • Run more experiments with other hyperparameter settings (e.g., different combinations of B, E, K, and C)
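
For the scheduling item, one plausible starting point is a built-in PyTorch scheduler stepped once per communication round; the wiring below is purely hypothetical, not code from this repository.

```python
import torch

model = torch.nn.Linear(784, 10)  # stand-in for the actual global model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=100, gamma=0.5)

for communication_round in range(500):
    # ... client updates and server aggregation would happen here ...
    scheduler.step()  # halve the learning rate every 100 rounds
```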

Comments
  • The accuracy on CIFAR-10 may be low.

    I ran the FedAvg code with the CNN2 model given in models.py on the CIFAR-10 dataset. I also removed the model initialization in server.py, and all of the clients (only 10 in total) were set to update and upload their models to the server. However, over about 100 rounds the accuracy only rises to around 70% and does not improve afterwards. I wonder if there is anything I've missed or gotten wrong. Could anyone please offer me some advice?

    opened by ADAM0064 3
  • maximum recursion depth exceeded while calling a Python object error

    Good evening, sir.

    I've come across this error:

    "RecursionError: maximum recursion depth exceeded while calling a Python object"

    whenever I run main.py, even after installing all the dependencies. Is there anything I can do to make the code work?

    opened by brainie 3
  • AssertionError: Invalid device id

    While setting up my CUDA GPU and installing PyTorch to run main.py, I came across this error:

      File "main.py", line 57, in <module>
        central_server.setup()
      File "/content/Federated-Averaging-PyTorch/src/server.py", line 85, in setup
        init_net(self.model, **self.init_config)
      File "/content/Federated-Averaging-PyTorch/src/utils.py", line 77, in init_net
        model = nn.DataParallel(model, gpu_ids)
      File "/usr/local/lib/python3.7/dist-packages/torch/nn/parallel/data_parallel.py", line 142, in __init__
        _check_balance(self.device_ids)
      File "/usr/local/lib/python3.7/dist-packages/torch/nn/parallel/data_parallel.py", line 23, in _check_balance
        dev_props = _get_devices_properties(device_ids)
      File "/usr/local/lib/python3.7/dist-packages/torch/_utils.py", line 487, in _get_devices_properties
        return [_get_device_attr(lambda m: m.get_device_properties(i)) for i in device_ids]
      File "/usr/local/lib/python3.7/dist-packages/torch/_utils.py", line 487, in <listcomp>
        return [_get_device_attr(lambda m: m.get_device_properties(i)) for i in device_ids]
      File "/usr/local/lib/python3.7/dist-packages/torch/_utils.py", line 470, in _get_device_attr
        return get_member(torch.cuda)
      File "/usr/local/lib/python3.7/dist-packages/torch/_utils.py", line 487, in <lambda>
        return [_get_device_attr(lambda m: m.get_device_properties(i)) for i in device_ids]
      File "/usr/local/lib/python3.7/dist-packages/torch/cuda/__init__.py", line 361, in get_device_properties
        raise AssertionError("Invalid device id")
    AssertionError: Invalid device id 
    

    How can I fix the device ID issue?

    opened by brainie 2
  • undefined variable error

    In "utils.py", modules of Torch Vision are imported by from torchvision import datasets, transforms.

    However, in the line 106 of the same file, these modules are referred to by torchvision.datasets, but there is no variable named by torchvision.

    I fixed it by rewrite the import statement by just import torchvision. Now it works.
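
    The fix described above amounts to something like the following at the top of utils.py (a paraphrase of the comment, not a verbatim diff):

    ```python
    # Original import: binds only `datasets` and `transforms`, so the bare
    # name `torchvision` is never defined ...
    from torchvision import datasets, transforms

    # ... while line 106 references torchvision.datasets. Importing the
    # package itself makes that reference resolve:
    import torchvision
    ```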

    opened by henry174Ajou 1