An easy-to-use federated learning platform

Overview


Website | Playground | Contributing

FederatedScope is a comprehensive federated learning platform that provides convenient usage and flexible customization for various federated learning tasks in both academia and industry. Based on an event-driven architecture, FederatedScope integrates rich collections of functionalities to satisfy the burgeoning demands of federated learning, and aims to build an easy-to-use platform to promote federated learning safely and effectively.

A detailed tutorial is provided on our website.

News

  • [06-17-2022] We release pFL-Bench, a comprehensive benchmark for personalized Federated Learning (pFL), containing 10+ datasets and 20+ baselines. [code, pdf]
  • [06-17-2022] We release FedHPO-B, a benchmark suite for studying federated hyperparameter optimization. [code, pdf]
  • [06-17-2022] We release B-FHTL, a benchmark suite for studying federated hetero-task learning. [code, pdf]
  • [06-13-2022] Our project was subjected to an attack, which has been resolved. More details.
  • [05-25-2022] Our paper FederatedScope-GNN has been accepted by KDD'2022!
  • [05-06-2022] We release FederatedScope v0.1.0!

Quick Start

We provide an end-to-end example for users to start running a standard FL course with FederatedScope.

Step 1. Installation

First of all, users need to clone the source code and install the required packages (we suggest Python version >= 3.9).

git clone https://github.com/alibaba/FederatedScope.git
cd FederatedScope

You can install the dependencies from the requirements file:

# For minimal version
conda install --file enviroment/requirements-torch1.10.txt -c pytorch -c conda-forge -c nvidia

# For application version
conda install --file enviroment/requirements-torch1.10-application.txt -c pytorch -c conda-forge -c nvidia -c pyg

or build the Docker image and run within a Docker environment (CUDA 11 and torch 1.10):

docker build -f enviroment/docker_files/federatedscope-torch1.10.Dockerfile -t alibaba/federatedscope:base-env-torch1.10 .
docker run --gpus device=all --rm -it --name "fedscope" -w $(pwd) alibaba/federatedscope:base-env-torch1.10 /bin/bash

If you need to run downstream tasks such as graph FL, replace the requirements/Dockerfile name with the corresponding application version when executing the above commands:

# enviroment/requirements-torch1.10.txt -> 
enviroment/requirements-torch1.10-application.txt

# enviroment/docker_files/federatedscope-torch1.10.Dockerfile ->
enviroment/docker_files/federatedscope-torch1.10-application.Dockerfile

Note: You can use CUDA 10 and torch 1.8 by changing torch1.10 to torch1.8. The Docker images are based on nvidia-docker. Please pre-install the NVIDIA drivers and nvidia-docker2 on the host machine. See more details here.

Finally, after all the dependencies are installed, run:

python setup.py install

# Or (for dev mode)
pip install -e .
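
As a quick sanity check (not part of the official instructions), you can verify that the package is importable; the snippet below assumes the installed package exposes a __version__ attribute, which may differ across releases:

# Verify that FederatedScope is importable after installation
import federatedscope
print(getattr(federatedscope, '__version__', 'unknown version'))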

Step 2. Prepare datasets

To run an FL task, users should prepare a dataset. The DataZoo provided in FederatedScope can help to automatically download and preprocess widely-used public datasets for various FL applications, including CV, NLP, graph learning, recommendation, etc. Users can directly specify cfg.data.type = DATASET_NAME in the configuration. For example,

cfg.data.type = 'femnist'

To use customized datasets, you need to prepare the dataset in the required format and register it. Please refer to Customized Datasets for more details.
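
For illustration only, a registration might look like the following sketch; it assumes the contrib-style register_data entry point described in the tutorial, and mydata, load_my_data, and call_my_data are hypothetical names:

# Sketch of registering a customized dataset (hypothetical names).
# The concrete data format expected by the runner is described in the
# "Customized Datasets" tutorial; the None placeholders below stand for
# real datasets/dataloaders.
from federatedscope.register import register_data

def load_my_data(config):
    # Build a dict keyed by client id whose values hold the per-client splits
    data = {1: {'train': None, 'val': None, 'test': None}}
    return data, config

def call_my_data(config):
    # Respond only when the configured dataset name matches this loader
    if config.data.type == 'mydata':
        return load_my_data(config)

register_data('mydata', call_my_data)

Setting cfg.data.type = 'mydata' would then route data loading to this function.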

Step 3. Prepare models

Then, users should specify the model architecture that will be trained in the FL course. FederatedScope provides a ModelZoo that contains the implementation of widely adopted model architectures for various FL applications. Users can set up cfg.model.type = MODEL_NAME to apply a specific model architecture in FL tasks. For example,

cfg.model.type = 'convnet2'

FederatedScope also allows users to use customized models via registration. Please refer to Customized Models for more details about how to customize a model architecture.
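
Similarly, a model registration sketch (MyNet and 'mynet' are hypothetical names; the exact builder signature follows the contrib convention documented in the tutorial) might look like:

# Sketch of registering a customized model (hypothetical names).
import torch
from federatedscope.register import register_model

class MyNet(torch.nn.Module):
    def __init__(self, in_features, n_classes):
        super().__init__()
        self.linear = torch.nn.Linear(in_features, n_classes)

    def forward(self, x):
        return self.linear(x)

def call_my_net(model_config, local_data):
    # Respond only when the configured model name matches this builder.
    # Fixed sizes are placeholders; in practice they would be derived
    # from model_config and local_data.
    if model_config.type == 'mynet':
        return MyNet(in_features=128, n_classes=10)

register_model('mynet', call_my_net)

Setting cfg.model.type = 'mynet' would then select this builder.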

Step 4. Start running an FL task

Note that FederatedScope provides a unified interface for both standalone mode and distributed mode, and allows users to switch between them via configuration.

Standalone mode

The standalone mode in FederatedScope simulates multiple participants (servers and clients) on a single device, while participants' data are isolated from each other and their models may be shared via message passing.

Here we demonstrate how to run a standard FL task with FederatedScope, setting cfg.data.type = 'FEMNIST' and cfg.model.type = 'ConvNet2' to run vanilla FedAvg for an image classification task. Users can customize training configurations, such as cfg.federate.total_round_num, cfg.data.batch_size, and cfg.optimizer.lr, in the configuration (a .yaml file), and run a standard FL task as:

# Run with default configurations
python federatedscope/main.py --cfg federatedscope/example_configs/femnist.yaml
# Or with custom configurations
python federatedscope/main.py --cfg federatedscope/example_configs/femnist.yaml federate.total_round_num 50 data.batch_size 128
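
The same overrides can also be applied programmatically; the following is a minimal sketch, assuming the v0.1.0 layout where the global yacs configuration object is exposed as federatedscope.core.configs.config.global_cfg (the import path may differ in other versions):

# Minimal sketch: build a configuration equivalent to the command above
from federatedscope.core.configs.config import global_cfg

cfg = global_cfg.clone()
cfg.merge_from_file('federatedscope/example_configs/femnist.yaml')
# Mirror the command-line overrides shown above
cfg.merge_from_list(['federate.total_round_num', 50, 'data.batch_size', 128])
print(cfg.federate.total_round_num, cfg.data.batch_size)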

Then you can observe some monitored metrics during the training process as:

INFO: Server #0 has been set up ...
INFO: Model meta-info: <class 'federatedscope.cv.model.cnn.ConvNet2'>.
... ...
INFO: Client has been set up ...
INFO: Model meta-info: <class 'federatedscope.cv.model.cnn.ConvNet2'>.
... ...
INFO: {'Role': 'Client #5', 'Round': 0, 'Results_raw': {'train_loss': 207.6341676712036, 'train_acc': 0.02, 'train_total': 50, 'train_loss_regular': 0.0, 'train_avg_loss': 4.152683353424072}}
INFO: {'Role': 'Client #1', 'Round': 0, 'Results_raw': {'train_loss': 209.0940284729004, 'train_acc': 0.02, 'train_total': 50, 'train_loss_regular': 0.0, 'train_avg_loss': 4.1818805694580075}}
INFO: {'Role': 'Client #8', 'Round': 0, 'Results_raw': {'train_loss': 202.24929332733154, 'train_acc': 0.04, 'train_total': 50, 'train_loss_regular': 0.0, 'train_avg_loss': 4.0449858665466305}}
INFO: {'Role': 'Client #6', 'Round': 0, 'Results_raw': {'train_loss': 209.43883895874023, 'train_acc': 0.06, 'train_total': 50, 'train_loss_regular': 0.0, 'train_avg_loss': 4.1887767791748045}}
INFO: {'Role': 'Client #9', 'Round': 0, 'Results_raw': {'train_loss': 208.83140087127686, 'train_acc': 0.0, 'train_total': 50, 'train_loss_regular': 0.0, 'train_avg_loss': 4.1766280174255375}}
INFO: ----------- Starting a new training round (Round #1) -------------
... ...
INFO: Server #0: Training is finished! Starting evaluation.
INFO: Client #1: (Evaluation (test set) at Round #20) test_loss is 163.029045
... ...
INFO: Server #0: Final evaluation is finished! Starting merging results.
... ...

Distributed mode

The distributed mode in FederatedScope denotes running multiple processes to build up an FL course, where each process acts as a participant (server or client) that instantiates its model and loads its data. The communication between participants is provided by the communication module of FederatedScope.

To run with distributed mode, you only need to:

  • Prepare an isolated data file and set cfg.distribute.data_file = PATH/TO/DATA for each participant;
  • Change cfg.federate.mode = 'distributed', and specify the role of each participant by cfg.distribute.role = 'server'/'client';
  • Set up a valid address by cfg.distribute.host = x.x.x.x and cfg.distribute.port = xxxx. (Note that for a server, you need to set up server_host/server_port for listening for messages, while for a client, you need to set up client_host/client_port for listening and server_host/server_port for sending join-in applications when building up an FL course.)

We prepare a synthetic example for running with distributed mode:

# For server
python main.py --cfg federatedscope/example_configs/distributed_server.yaml distribute.data_file 'PATH/TO/DATA' distribute.server_host x.x.x.x distribute.server_port xxxx

# For clients
python main.py --cfg federatedscope/example_configs/distributed_client_1.yaml distribute.data_file 'PATH/TO/DATA' distribute.server_host x.x.x.x distribute.server_port xxxx distribute.client_host x.x.x.x distribute.client_port xxxx
python main.py --cfg federatedscope/example_configs/distributed_client_2.yaml distribute.data_file 'PATH/TO/DATA' distribute.server_host x.x.x.x distribute.server_port xxxx distribute.client_host x.x.x.x distribute.client_port xxxx
python main.py --cfg federatedscope/example_configs/distributed_client_3.yaml distribute.data_file 'PATH/TO/DATA' distribute.server_host x.x.x.x distribute.server_port xxxx distribute.client_host x.x.x.x distribute.client_port xxxx

An executable example with generated toy data can be run with:

# Generate the toy data
python scripts/gen_data.py

# Firstly start the server that is waiting for clients to join in
python federatedscope/main.py --cfg federatedscope/example_configs/distributed_server.yaml distribute.data_file toy_data/server_data distribute.server_host 127.0.0.1 distribute.server_port 50051

# Start the client #1 (with another process)
python federatedscope/main.py --cfg federatedscope/example_configs/distributed_client_1.yaml distribute.data_file toy_data/client_1_data distribute.server_host 127.0.0.1 distribute.server_port 50051 distribute.client_host 127.0.0.1 distribute.client_port 50052
# Start the client #2 (with another process)
python federatedscope/main.py --cfg federatedscope/example_configs/distributed_client_2.yaml distribute.data_file toy_data/client_2_data distribute.server_host 127.0.0.1 distribute.server_port 50051 distribute.client_host 127.0.0.1 distribute.client_port 50053
# Start the client #3 (with another process)
python federatedscope/main.py --cfg federatedscope/example_configs/distributed_client_3.yaml distribute.data_file toy_data/client_3_data distribute.server_host 127.0.0.1 distribute.server_port 50051 distribute.client_host 127.0.0.1 distribute.client_port 50054

Then you can observe the results as follows (the IP addresses are anonymized as 'x.x.x.x'):

INFO: Server #0: Listen to x.x.x.x:xxxx...
INFO: Server #0 has been set up ...
Model meta-info: <class 'federatedscope.core.lr.LogisticRegression'>.
... ...
INFO: Client: Listen to x.x.x.x:xxxx...
INFO: Client (address x.x.x.x:xxxx) has been set up ...
Client (address x.x.x.x:xxxx) is assigned with #1.
INFO: Model meta-info: <class 'federatedscope.core.lr.LogisticRegression'>.
... ...
{'Role': 'Client #2', 'Round': 0, 'Results_raw': {'train_avg_loss': 5.215108394622803, 'train_loss': 333.7669372558594, 'train_total': 64}}
{'Role': 'Client #1', 'Round': 0, 'Results_raw': {'train_total': 64, 'train_loss': 290.9668884277344, 'train_avg_loss': 4.54635763168335}}
----------- Starting a new training round (Round #1) -------------
... ...
INFO: Server #0: Training is finished! Starting evaluation.
INFO: Client #1: (Evaluation (test set) at Round #20) test_loss is 30.387419
... ...
INFO: Server #0: Final evaluation is finished! Starting merging results.
... ...

Advanced

As a comprehensive FL platform, FederatedScope provides fundamental implementations to support the requirements of various FL applications and frontier studies, towards both convenient usage and flexible extension, including:

  • Personalized Federated Learning: Client-specific model architectures and training configurations are applied to handle the non-IID issues caused by the diverse data distributions and heterogeneous system resources.
  • Federated Hyperparameter Optimization: When hyperparameter optimization (HPO) comes to Federated Learning, each attempt is extremely costly due to multiple rounds of communication across participants. It is worth noting that HPO under the FL setting is unique, and more techniques, such as low-fidelity HPO, should be promoted.
  • Privacy Attacker: Privacy attack algorithms are important and convenient tools to verify the privacy protection strength of the designed FL systems and algorithms, and this area is growing along with Federated Learning.
  • Graph Federated Learning: Working on the ubiquitous graph data, Graph Federated Learning aims to exploit isolated sub-graph data to learn a global model, and has attracted increasing attention.
  • Recommendation: As a number of laws and regulations go into effect all over the world, more and more people are aware of the importance of privacy protection, which urges recommender systems to learn from user data in a privacy-preserving manner.
  • Differential Privacy: Different from encryption algorithms that require a large amount of computational resources, differential privacy is an economical yet flexible technique to protect privacy, which has achieved great success in databases and is ever-growing in federated learning.
  • ...

More features are coming soon! We have prepared a tutorial that provides more details about how to utilize FederatedScope and enjoy your journey of Federated Learning!

Materials on related topics are constantly being updated; please refer to FL-Recommendation, Federated-HPO, Personalized FL, Federated Graph Learning, FL-NLP, FL-privacy-attacker, and so on.

Documentation

The classes and methods of FederatedScope have been well documented so that users can generate the API references by:

pip install -r requirements-doc.txt
make html

We put the API references on our website.

License

FederatedScope is released under Apache License 2.0.

Publications

If you find FederatedScope useful for your research or development, please cite the following paper:

@article{federatedscope,
  title = {FederatedScope: A Flexible Federated Learning Platform for Heterogeneity},
  author = {Xie, Yuexiang and Wang, Zhen and Chen, Daoyuan and Gao, Dawei and Yao, Liuyi and Kuang, Weirui and Li, Yaliang and Ding, Bolin and Zhou, Jingren},
  journal={arXiv preprint arXiv:2204.05011},
  year = {2022},
}

More publications can be found on the Publications page.

Contributing

We greatly appreciate any contribution to FederatedScope! You can refer to Contributing to FederatedScope for more details.

You are welcome to join our Slack channel or DingDing group for discussion.


Issues
  • Support optimizers with different parameters

    • This PR is to solve the issue #91
    • Solution
      • Specify the parameters of the local optimizer by adding new parameters under the configs cfg.optimizer and cfg.fedopt.optimizer:
      • get_optimizer is then called as follows:
        optimizer = get_optimizer(model=model, **cfg.optimizer)
    
    • Example:
      • Taking cfg.optimizer as an example, the original config file is as follows:
        # ------------------------------------------------------------------------ #
        # Optimizer related options
        # ------------------------------------------------------------------------ #
        cfg.optimizer = CN(new_allowed=True)
    
        cfg.optimizer.type = 'SGD'
        cfg.optimizer.lr = 0.1
    
    • By setting new_allowed=True in cfg.optimizer, we allow users to add new parameters according to the type of their optimizers. For example, if I want to use an optimizer registered as myoptimizer, together with its new parameters mylr and mynorm, I just need to write the yaml file as follows, and the new parameters will be added automatically.
    optimizer:
        type: myoptimizer
        mylr: 0.1
        mynorm: 1
    
    bug 
    opened by DavdGao 7
  • Redundancy in the log files

    A FedAvg trial on 5% of FEMNIST produces a ~500 KB log each round, with about 80% eval logs like 2022-04-13 16:33:24,901 (client:264) INFO: Client #1: (Evaluation (test set) at Round #26) test_loss is 79.352451, 10% server results, and 10% training information.

    If the round is 500, 1000, or much larger, the log files will take up too much space with a lot of redundancy. @yxdyc

    enhancement 
    opened by rayrayraykk 6
  • report cuda error when trying to launch up the demo case

    Hi, when I am trying to launch the demo case, a CUDA-related error is reported as below:

    I am using conda to manage the environment. In another env, PyTorch works on CUDA without any problem, so I think this could be an installation issue; I did not install anything by myself and totally followed your guidance. My CUDA version: NVIDIA-SMI 510.47.03, Driver Version: 510.47.03, CUDA Version: 11.6; my torch version: 1.10.1.

    (fedscope) ~/prj/FederatedScope$ python federatedscope/main.py --cfg federatedscope/example_configs/femnist.yaml
    
    ...
    2022-05-13 22:06:09,249 (server:520) INFO: ----------- Starting training (Round #0) -------------
    Traceback (most recent call last):
     File "/home/liangma/prj/FederatedScope/federatedscope/main.py", line 41, in <module>
       _ = runner.run()
     File "/home/liangma/miniconda3/envs/fedscope/lib/python3.9/site-packages/federatedscope-0.1.0-py3.9.egg/federatedscope/core/fed_runner.py", line 136, in run
       self._handle_msg(msg)
     File "/home/liangma/miniconda3/envs/fedscope/lib/python3.9/site-packages/federatedscope-0.1.0-py3.9.egg/federatedscope/core/fed_runner.py", line 254, in _handle_msg
       self.client[each_receiver].msg_handlers[msg.msg_type](msg)
     File "/home/liangma/miniconda3/envs/fedscope/lib/python3.9/site-packages/federatedscope-0.1.0-py3.9.egg/federatedscope/core/worker/client.py", line 202, in callback_funcs_for_model_para
       sample_size, model_para_all, results = self.trainer.train()
     File "/home/liangma/miniconda3/envs/fedscope/lib/python3.9/site-packages/federatedscope-0.1.0-py3.9.egg/federatedscope/core/trainers/trainer.py", line 374, in train
       self._run_routine("train", hooks_set, target_data_split_name)
     File "/home/liangma/miniconda3/envs/fedscope/lib/python3.9/site-packages/federatedscope-0.1.0-py3.9.egg/federatedscope/core/trainers/trainer.py", line 208, in _run_routine
       hook(self.ctx)
     File "/home/liangma/miniconda3/envs/fedscope/lib/python3.9/site-packages/federatedscope-0.1.0-py3.9.egg/federatedscope/core/trainers/trainer.py", line 474, in _hook_on_fit_start_init
       ctx.model.to(ctx.device)
     File "/home/liangma/miniconda3/envs/fedscope/lib/python3.9/site-packages/torch/nn/modules/module.py", line 899, in to
       return self._apply(convert)
     File "/home/liangma/miniconda3/envs/fedscope/lib/python3.9/site-packages/torch/nn/modules/module.py", line 570, in _apply
       module._apply(fn)
     File "/home/liangma/miniconda3/envs/fedscope/lib/python3.9/site-packages/torch/nn/modules/module.py", line 593, in _apply
       param_applied = fn(param)
     File "/home/liangma/miniconda3/envs/fedscope/lib/python3.9/site-packages/torch/nn/modules/module.py", line 897, in convert
       return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
     File "/home/liangma/miniconda3/envs/fedscope/lib/python3.9/site-packages/torch/cuda/__init__.py", line 208, in _lazy_init
       raise AssertionError("Torch not compiled with CUDA enabled")
    AssertionError: Torch not compiled with CUDA enabled
    
    
    opened by lmaxeniro 5
  • Compared with FATE, does FederatedScope provide a federated algorithm library like FATE's FederatedML?

    A few questions: 1. Does FederatedScope provide a federated algorithm library like FATE's FederatedML? FATE's FederatedML offers ready-to-use federated algorithms for horizontal/vertical LR, neural networks, decision trees, and so on. 2. Does FederatedScope's architecture have any advantages over FATE's? 3. Are there any production deployments or commercial use cases of FederatedScope?

    opened by MiKKiYang 4
  • [bug_fix] fix call_link_level_trainer() and call_node_level_trainer()

    This PR fixes call_link_level_trainer() and call_node_level_trainer() raising ValueError when trainer_type doesn't match. When a custom trainer is registered, the above functions should return None instead of raising the error, so that trainer_builder can properly get the custom trainer.

    opened by ahn1340 3
  • CIKMCUP fedavg baseline doesn't save predictions

    Hello,

    first of all, thank you for creating a valuable library and organizing this competition!

    I tried to run the fedavg baseline, as stated in the cikmcup documentation, without any modification. I see that the code runs correctly, but contrary to what is written in the documentation, it saves neither the models nor the submission file. Afterwards, I created the prediction folder manually and added prediction_path: prediction under eval in federatedscope/gfl/baseline/fedavg_gin_minibatch_on_cikmcup.yaml, but it still does not save anything.

    Could you please let me know how to create the models and the submission file properly, and maybe refer me to the part of the code where the saving happens? Thank you very much.

    bug 
    opened by ahn1340 3
  • Cannot run gfl with link prediction dataset RecSys.

    For the link-level recommendation system dataset, e.g. RecSys(name='ciao'), the processed data.x is None, but it is necessary for model building, so we cannot run it successfully. Please check it. Located at federatedscope/gfl/model/model_builder.py.

    Besides, there is a 404 URL bug when downloading the dataset for RecSys if self.FL is True, which is caused by the modified self.name. Located at federatedscope/gfl/dataset/recsys.py. You can keep the same self.name for downloading, and modify the raw_dir() and processed_dir() methods to distinguish the FL dataset from the client dataset.

    opened by Starry-Hu 3
  • [Feature]Modification of the finetune mechanism

    This PR is mainly for the modification of the finetune mechanism (#148), but we also make small changes to other functions as follows.

    Finetune

    • Move some parameters from cfg.federate into cfg.train, as they are more relevant to training, including
      • local_update_step
      • batch_or_epoch
    • Create cfg.finetune and cfg.train in the config to support different parameters for finetuning and training (e.g. optimizer.lr)
    • Implement finetune function in the basic trainer
    • Modify most existing shells and yaml files to fit the new setting (except the files under the directory benchmark)

    Enums and Decorators

    • Create enums.py to avoid using raw strings and the resulting inconsistency issues
    • Create decorators.py to keep the code clean

    Optimizer

    • Initialize ctx.optimizer at the beginning of the routine function rather than within the context, to solve #136

    To be discussed

    @joneswong please check if the following modifications are appropriate

    • In this PR, use_diff is implemented by a decorator use_diff.
    • Some hpo configs are modified to fit the new configuration.
    enhancement 
    opened by DavdGao 3
  • Support help and required argument for the configs

    As the title says, we can now set an argument as:

    cfg.fedopt.optimizer.type = Argument(
            'SGD', description="optimizer type for FedOPT")
    

    And main.py will print the corresponding help info.

    enhancement 
    opened by yxdyc 2
  • Support run FL courses when the server doesn't have data

    In the current version, we need the server_data for building a model and getting a trainer, but in some cases, the server doesn't have the data or only has test data. Furthermore, does the server really need a trainer? I find that the server's trainer is only used for evaluation.

    enhancement 
    opened by xieyxclack 2
  • AttributeError in distributed mode -- (Avoid type conversion outside worker)

    Describe the bug: When the model contains a BN layer, the param bn.num_batches_tracked is converted to int by gRPC, but trainer.update can't handle this situation well.

    A dummy solution:

      def update(self, model_parameters):
          '''
              Called by the FL client to update the model parameters
          Arguments:
              model_parameters (dict): PyTorch Module object's state_dict.
          '''
          for key in model_parameters:
              if isinstance(model_parameters[key], list):
                  model_parameters[key] = torch.FloatTensor(
                      model_parameters[key])
              elif isinstance(model_parameters[key], int):
                  model_parameters[key] = torch.tensor(model_parameters[key], dtype=torch.long)
                  print(key, model_parameters[key])
              elif isinstance(model_parameters[key], float):
                  model_parameters[key] = torch.tensor(model_parameters[key], dtype=torch.float)
          self.ctx.model.load_state_dict(self._param_filter(model_parameters),
                                         strict=False)
    

    Or can we solve it before sending the model_param?

    bug 
    opened by rayrayraykk 2
  • The combination of different mode and split leads to wrong calculation for number of batches and number of epochs

    Describe the bug: As the title says, the number of batches and epochs is currently calculated for each split as follows:

            ...
            # Process training data
            if self.train_data is not None or self.train_loader is not None:
                # Calculate the number of update steps during training given the
                # local_update_steps
                num_train_batch, num_train_batch_last_epoch, num_train_epoch, \
                    num_total_train_batch = self.pre_calculate_batch_epoch_num(
                        self.cfg.train.local_update_steps)
    
                self.num_train_epoch = num_train_epoch
                self.num_train_batch = num_train_batch
                self.num_train_batch_last_epoch = num_train_batch_last_epoch
                self.num_total_train_batch = num_total_train_batch
    
            # Process evaluation data
            for mode in ["val", "test"]:
                setattr(self, "num_{}_epoch".format(mode), 1)
                if self.get("{}_data".format(mode)) is not None or self.get(
                        "{}_loader".format(mode)) is not None:
                    setattr(
                        self, "num_{}_batch".format(mode),
                        getattr(self, "num_{}_data".format(mode)) //
                        self.cfg.data.batch_size +
                        int(not self.cfg.data.drop_last and bool(
                            getattr(self, "num_{}_data".format(mode)) %
                            self.cfg.data.batch_size)))
                ...
    

    and the finetune and training routines stop at:

        def _run_routine(self, ...):
                ...
                # Break in the final epoch
                if self.ctx.cur_mode == 'train' and epoch_i == \
                        self.ctx.num_train_epoch - 1:
                    if batch_i >= self.ctx.num_train_batch_last_epoch - 1:
                        break
                ...
    

    The problems are

    • If we choose the test or validation split for the training routine, num_train_batch_last_epoch and num_train_epoch are both wrong (since they are calculated for the training split).
    • If we set different parameters (say local update steps) for finetune and training, they should have different num_train_batch_last_epoch and num_train_epoch.

    Expected behavior: The number of batches and epochs should follow the combination of mode and split.

    bug 
    opened by DavdGao 0
  • Encapsulation of Trainer class

    In our design, the Trainer class is responsible for encapsulating many training and testing details. Thus, we'd better make clear which interfaces the Client class needs to interact with a trainer. IMO, directly accessing a trainer's property (e.g., context) should be forbidden: https://github.com/alibaba/FederatedScope/blob/bc6eb8b6c590af75891dee7645563cffbd3c25dd/federatedscope/core/worker/client.py#L413

    Only in this way can our users develop a trainer in their own ways, respecting these interfaces while totally ignoring our design of the base trainer.

    enhancement 
    opened by joneswong 0
Releases (v0.2.0)
  • v0.2.0 (Jul 30, 2022)

    Summarization

    The improvements included in this release (FederatedScope v0.2.0) are summarized as follows:

    • FederatedScope allows users to apply asynchronous training strategies in federated learning with event-driven architecture, including different aggregation conditions, staleness toleration, broadcasting manners, etc. And we support an efficient standalone simulation for cross-device FL with a large number of participants.
    • We add three benchmarks for Federated HPO, Personalized FL, and Hetero-Task FL to promote the application of federated learning in a wide range of scenarios.
    • We ease the installation, setup, and continuous integration (CI), and make them more friendly for users to get started and customize. And useful visualization functionalities are added into FederatedScope for users to monitor the training process and evaluation results.
    • We add paper lists of related topics, including FL-Recommendation, Federated-HPO, Personalized FL, Federated Graph Learning, FL-NLP, FL-Attacker, FL-Incentive-Mechanism, and so on. These materials are constantly being updated.
    • Several novel features are also included in this release, such as performance attacks, organizer, unseen clients generalization, splitter, client sampler, and so on, which enhance FederatedScope's robustness and comprehensiveness.

    Commits

    🚀 Enhancements & Features

    • Add backdoor attack @Alan-Qin (#267)
    • Add organizer to FederatedScope @rayrayraykk (#265, #257)
    • Monitoring the client-wise and global wandb info @yxdyc (#260, #226, #206, #176, #90)
    • More friendly guidance of installation, setup and contribution @rayrayraykk (#255, #192)
    • Add learning rate scheduler in FS @DavdGao (#248)
    • Support different types of keys when communicating via grpc @xieyxclack (#239)
    • Support constructing FL course when server does not have data @xieyxclack (#236)
    • Enabled unseen clients case to check the participation generalization gap @yxdyc (#238, #100)
    • Support more robust type conversion in yaml file @yxdyc (#229)
    • Asynchronous Federated Learning @xieyxclack (#225)
    • Support both pre- and post-merging data for the "global" baseline @yxdyc (#220)
    • Format the code by flake8 @rayrayraykk (#211, #207)
    • Add paper list of FL-Attacker and FL-Incentive-Mechanism @Osier-Yi (#203, #202, #201)
    • Add client samplers @xieyxclack (#200)
    • Modify the log for hooks_in_train/test @DavdGao (#181)
    • Modification of the finetune mechanism @DavdGao (#177)
    • Add FedHPO-B, a benchmark suite for federated hyperparameter optimization @rayrayraykk @joneswong (#173, #146, #127)
    • Add pFL-Bench, a comprehensive benchmark for personalized Federated Learning @yxdyc (#169, #149)
    • Add B-FHTL, a benchmark suite for studying federated hetero-task learning @DavdGao (#167, #150)
    • Update splitter for consistent label distribution @xieyxclack (#154)
    • Improve SHA wrapper @joneswong (#145)
    • Add slack & DingDing group @xieyxclack (#142)
    • Add FedEx @joneswong @rayrayraykk (#141, #137, #120)
    • Enable single thread HPO @joneswong (#140)
    • Refactor autotune module @joneswong (#133)
    • Add paper list of federated database @DavdGao (#129)
    • A quadratic objective function-based experiment @joneswong (#111)
    • Support optimizers with different parameters @DavdGao (#96)
    • Demo how to use SMAC for FedHPO @joneswong (#88)
    • FLIT for federated graph classification/regression @wanghh7 (#87)
    • Add momentum for the optimizer in server @DavdGao (#86)
    • Add an example for distributed mode @xieyxclack (#85)
    • Add readme for vFL @xieyxclack (#83)
    • Add paper list of FL-NLP @cheneydon (#81)
    • Add more models and datasets from external packages. @rayrayraykk (#79, #42)
    • Add pFL paper list @yxdyc (#73, #72)
    • Add paper list of FedRec @xieyxclack (#68)
    • Add paper list of FedHPO @joneswong (#67)
    • Add paper list of federated graph learning. @rayrayraykk (#65)

    🐛 Bug Fixes

    • Fix ditto trainer @yxdyc (#271)
    • Fix personalization when module has lazy load hooks @rayrayraykk (#269)
    • Fix the wrongly early_stopper.track_and_check calling in client @yxdyc (#237)
    • Fix type conversion error and invalid logging in distributed mode @rayrayraykk (#232, #223)
    • Fix the cpu and memory wastage problems caused by multiprocess @yxdyc (#212)
    • Fix for invalid sample_client_num in some situation @yxdyc (#210)
    • Fix the url of GFL dataset @rayrayraykk (#196)
    • Fix twitter dataset @rayrayraykk (#187)
    • BugFix for monitor and logger @rayrayraykk (#188, #175, #109)
    • Fix download url @Osier-Yi @rayrayraykk @xieyxclack (#101, #95, #92, #76)
  • v0.1.0 (May 6, 2022)

Owner
Alibaba (Alibaba Open Source)