FLSim is a flexible, standalone library written in PyTorch that simulates FL settings with a minimal, easy-to-use API

Overview

Federated Learning Simulator (FLSim)

Federated Learning Simulator (FLSim) is a flexible, standalone library written in PyTorch that simulates FL settings with a minimal, easy-to-use API. FLSim is domain-agnostic and accommodates many use cases, such as computer vision and natural language text. Currently, FLSim supports cross-device FL, where millions of client devices (e.g. phones) collaboratively train a model.

FLSim is scalable and fast. It supports differential privacy (DP), secure aggregation (secAgg), and a variety of compression techniques.

In FL, a model is trained collaboratively by multiple clients that each have their own local data, and a central server moderates training, e.g. by aggregating model updates from multiple clients.

In FLSim, developers only need to define a dataset, model, and metrics reporter. All other aspects of FL training are handled internally by the FLSim core library.


Library Structure

FLSim's core components follow the same semantics as FedAvg. At a high level, the server comprises three main components: a selector, an aggregator, and an optimizer. The selector selects clients for training, and the aggregator accumulates client updates until a round is complete. Then, the optimizer updates the server model based on the aggregated gradients. The server communicates with the clients via a channel, which can compress the messages exchanged between the server and the clients. Locally, each client consists of a dataset and a local optimizer. This local optimizer can be SGD, FedProx, or a custom PyTorch optimizer.
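
To make this concrete, the following is a conceptual sketch of one such round in plain PyTorch. It is illustration only, not FLSim's actual API: the client objects and their train_locally method are hypothetical stand-ins for FLSim's client abstraction.

import copy
import random

import torch

def run_round(server_model, clients, server_optimizer, clients_per_round):
    # Selector: sample a cohort of clients for this round.
    cohort = random.sample(clients, clients_per_round)
    deltas = [torch.zeros_like(p) for p in server_model.parameters()]
    for client in cohort:
        # Channel: send a copy of the server model to the client
        # (the channel may also compress these messages).
        local_model = copy.deepcopy(server_model)
        # Client: run the local optimizer (e.g. SGD) on the client's own data.
        client.train_locally(local_model)  # hypothetical client method
        # Aggregator: accumulate this client's model delta.
        with torch.no_grad():
            for acc, srv, loc in zip(deltas, server_model.parameters(),
                                     local_model.parameters()):
                acc += srv - loc
    # Server optimizer: treat the averaged delta as a gradient and step.
    server_optimizer.zero_grad()
    for p, d in zip(server_model.parameters(), deltas):
        p.grad = d / clients_per_round
    server_optimizer.step()

With torch.optim.SGD at a learning rate of 1.0 as the server optimizer, this scheme reduces to vanilla FedAvg; swapping in Adam gives a FedAdam-style server update.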

Installation

The latest release of FLSim can be installed via pip:

pip install flsim

You can also install directly from source for the latest features (along with their quirks and occasional bugs):

git clone https://github.com/facebookresearch/FLSim.git
cd FLSim
pip install -e .

Getting started

To implement a central training loop in the FL setting using FLSim, a developer simply performs the following steps (see the sketch under Usage Example below):

  1. Build their own data pipeline to assign individual rows of training data to client devices (to simulate how data is distributed across client devices)
  2. Create a corresponding nn.Module model and wrap it in an FL model.
  3. Define a custom metrics reporter that computes and collects metrics of interest (e.g., accuracy) throughout training.
  4. Set the desired hyperparameters in a config.

Usage Example
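
The snippet below sketches how these pieces fit together. import flsim.configs and fl_config_from_json appear elsewhere on this page; the My*-prefixed names are hypothetical placeholders for the components you define yourself in steps 1-3, so treat this as an outline rather than copy-paste code.

import flsim.configs  # registers FLSim's config classes; required before building a config
from flsim.utils.config_utils import fl_config_from_json

# Step 4: set the desired hyperparameters in a config.
json_config = {
    "trainer": {
        # trainer hyperparameters go here
    }
}
cfg = fl_config_from_json(json_config)

# Steps 1-3: hypothetical user-defined components.
data_provider = MyDataProvider()        # assigns rows of training data to simulated clients
global_model = MyFLModel()              # an nn.Module wrapped in an FL model
metrics_reporter = MyMetricsReporter()  # computes metrics of interest, e.g. accuracy

The config, model, data provider, and metrics reporter are then handed to an FL trainer that runs the simulation; see the tutorials for the exact trainer API.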

Tutorials

For details, please refer to the tutorials we have prepared.

Examples

We have prepared runnable examples for two of the tutorials above:

Contributing

See CONTRIBUTING for how to contribute to this library.

License

This code is released under Apache 2.0, as found in the LICENSE file.

Comments
  • Bug Fix#36: fix imports in tests.

    Types of changes

    • [x] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)
    • [ ] Docs change / refactoring / dependency upgrade

    Motivation and Context / Related issue

    Bug Fix#36: fix imports in tests.

    How Has This Been Tested (if it applies)

    pytest -ra is able to discover all tests now.

    Checklist

    • [x] The documentation is up-to-date with the changes I made.
    • [x] I have read the CONTRIBUTING document and completed the CLA (see CONTRIBUTING).
    • [x] All tests passed, and additional code has been covered with new tests.
    CLA Signed 
    opened by ghaccount 8
  • Vr

    CLA Signed 
    opened by JohnlNguyen 6
  • Move optimizer_test_utils to optimizers directory

    Summary: It is currently located in the top-level tests directory. However, the top-level tests directory does not really make sense, since each component is organized into its own dedicated directory; in that sense, optimizer_test_utils.py belongs in the optimizers directory. In this diff, we move the file to the optimizers directory and fix the reference.

    Differential Revision: D32241821

    CLA Signed fb-exported Merged 
    opened by jessemin 3
  • Does the backend handle Federated learning asynchronously?

    I found this repo from this blog: https://ai.facebook.com/blog/asynchronous-federated-learning/. However, I do not find any mention of it in this repo, and I cannot tell from the code examples whether this is the synchronous or the asynchronous version of federated learning. Can you please clarify this for me? Also, if this is the asynchronous version, how can I dive deeper into the library and look at the code implementing the async handling mechanism?

    Thank you

    opened by 111Kaushal 2
  • Remove test_pytorch_local_dataset_factory

    Summary: This test had been very flaky for over a year and has never been revived since then. Deleting it from the codebase.

    Differential Revision: D32415979

    CLA Signed fb-exported Merged 
    opened by jessemin 2
  • FedSGD with virtual batching

    🚀 Feature

    Motivation

    Create a memory-efficient client to run FedSGD. If a client has many examples, running FedSGD (taking the gradient of the model over all of the client's data at once) can lead to OOM. In this PR, we fix this problem by accumulating gradients over smaller batches while still calling optimizer.step once at the end of local training to simulate the effect of FedSGD, as sketched below.
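
    For context, the memory-saving trick is standard gradient accumulation. The sketch below illustrates the idea in plain PyTorch (it is not the PR's actual code) and assumes loss_fn averages over the batch:

    import torch

    def fedsgd_local_training(model, optimizer, loss_fn, dataloader):
        # Accumulate gradients over small "virtual" batches instead of a
        # single full-batch forward pass, which can OOM on large clients.
        optimizer.zero_grad()
        total_examples = len(dataloader.dataset)
        for x, y in dataloader:
            loss = loss_fn(model(x), y)
            # Scale each batch so the accumulated gradient equals the
            # full-batch gradient (assumes mean reduction in loss_fn).
            (loss * len(y) / total_examples).backward()
        # A single optimizer step at the end of local training, as in FedSGD.
        optimizer.step()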

    opened by JohnlNguyen 0
  • Add Fednova as a benchmark

    Summary:

    What?

    Adding FedNova as a benchmark

    Why?

    FedNova is a well-known paper that fixes the objective inconsistency problem.

    Differential Revision: D34668291

    CLA Signed fb-exported 
    opened by JohnlNguyen 1
  • Having to `import flsim.configs`  before creating config from json is unintuitive

    🚀 Feature

    This code works

    import flsim.configs  # <-- this import is required
    from flsim.utils.config_utils import fl_config_from_json
    
    json_config = {
        "trainer": {
        }
    }
    cfg = fl_config_from_json(json_config)
    

    This code doesn't work

    from flsim.utils.config_utils import fl_config_from_json
    
    json_config = {
        "trainer": {
        }
    }
    cfg = fl_config_from_json(json_config)
    

    Motivation

    Having to import flsim.configs is unintuitive and not clear from the user perspective

    Pitch

    Alternatives

    Additional context

    opened by JohnlNguyen 0
  • Fix sent140 example

    Summary:

    What?

    Fix the tutorial to use word embeddings, to resolve the poor accuracy problem

    Why?

    https://github.com/facebookresearch/FLSim/issues/34

    Differential Revision: D34149392

    CLA Signed fb-exported 
    opened by JohnlNguyen 1
  • low test accuracy in Sentiment classification with LEAF's Sent140 tutorial?

    ❓ Questions and Help

    Until we move the questions to another medium, feel free to use this as your question:

    Question

    I tried this tutorial: https://github.com/facebookresearch/FLSim/blob/main/tutorials/sent140_tutorial.ipynb and the accuracy is less than a random guess (50%)!

    Any suggestions or approaches to improve accuracy for this tutorial?

    From the tutorial:

    Running (epoch = 1, round = 1, global round = 1) for Test
    (epoch = 1, round = 1, global round = 1), Loss/Test: 0.8683878255035598
    (epoch = 1, round = 1, global round = 1), Accuracy/Test: 49.61439588688946
    {'Accuracy': 49.61439588688946}

    opened by ghaccount 0
Releases(v0.1.0)
  • v0.0.1(Dec 9, 2021)

    We are excited to announce the release of FLSim 0.0.1.

    Introduction

    How does one train a machine learning model without access to user data? Federated Learning (FL) is the technology that answers this question. In a nutshell, FL is a way for many users to collaboratively learn a machine learning model without sharing data. There are two scenarios for FL: cross-silo and cross-device. Cross-silo FL enables collaborative learning between a few large organizations with massive siloed datasets. Cross-device FL enables collaborative learning between many small user devices with small local datasets. Cross-device FL, where millions or even billions of users cooperate on learning a model, is a much more complex problem and has attracted less attention from the research community. We designed FLSim to address the cross-device FL use case.

    Federated Learning at Scale

    Large-scale cross-device Federated Learning (FL) is a federated learning paradigm with several challenges that differentiate it from cross-silo FL: millions of clients coordinating with a central server, and training instability due to large cohorts. With these challenges in mind, we built FLSim to be scalable while remaining easy to use; FLSim can scale to thousands of clients per round using only one GPU. We hope FLSim will equip researchers to tackle problems in federated learning at scale.


    Library Structure

    FLSim's core components follow the same semantics as FedAvg. At a high level, the server comprises three main components: a selector, an aggregator, and an optimizer. The selector selects clients for training, and the aggregator accumulates client updates until a round is complete. Then, the optimizer updates the server model based on the aggregated gradients. The server communicates with the clients via a channel, which can compress the messages exchanged between the server and the clients. Locally, each client consists of a dataset and a local optimizer. This local optimizer can be SGD, FedProx, or a custom PyTorch optimizer.

    Included Datasets

    Currently, FLSim supports all datasets from LEAF, including FEMNIST, Shakespeare, Sent140, CelebA, Synthetic, and Reddit. Additionally, we support MNIST and CIFAR-10.

    Included Algorithms

    FLSim supports standard FedAvg, as well as other federated learning methods such as FedAdam, FedProx, FedAvgM, FedBuff, FedLARS, and FedLAMB.

    What’s next?

    We hope FLSim will foster large-scale cross-device FL research. We plan to add support for personalization in early 2022. Throughout 2022, we plan to gather feedback, improve usability, and continue to grow our collection of algorithms, datasets, and models.

    Source code(tar.gz)
    Source code(zip)
Owner
Meta Research