Personalized Federated Learning using PyTorch (pFedMe)

Overview

Personalized Federated Learning with Moreau Envelopes (NeurIPS 2020)

This repository implements all experiments in the paper Personalized Federated Learning with Moreau Envelopes.

Authors: Canh T. Dinh, Nguyen H. Tran, Tuan Dung Nguyen

Full paper: https://arxiv.org/pdf/2006.08848.pdf (arXiv) and https://proceedings.neurips.cc/paper/2020/file/f4f1f13c8289ac1b1ee0ff176b56fc60-Paper.pdf (NeurIPS proceedings)

The paper was accepted at NeurIPS 2020.

This repository implements not only pFedMe but also the FedAvg and Per-FedAvg algorithms. (Federated Learning using PyTorch)

Software requirements:

  • numpy, scipy, torch, Pillow, matplotlib.

  • To download the dependencies: pip3 install -r requirements.txt

Datasets: We use two datasets, MNIST and Synthetic.

  • To generate non-iid MNIST data:

    • Access data/Mnist and run: "python3 generate_niid_20users.py"
    • The number of users and the number of labels per user can be changed with the two variables NUM_USERS = 20 and NUM_LABELS = 2 (see the snippet after this list).
  • To generate iid MNIST data (we do not use iid data in the paper):

    • Access data/Mnist and run: "python3 generate_iid_20users.py"
  • To generate non-iid Synthetic data:

    • Access data/Synthetic and run: "python3 generate_synthetic_05_05.py". Similar to the MNIST data, the Synthetic data is configurable with the number of users and the number of labels for each user.
  • The datasets are also available for download at: https://drive.google.com/drive/folders/1-Z3FCZYoisqnIoLLxOljMPmP70t2TGwB?usp=sharing
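
  For reference, the partition is controlled by two variables near the top of generate_niid_20users.py; the values below are the defaults described above (a sketch of the relevant lines only, not the full script):

      # data/Mnist/generate_niid_20users.py -- partition settings
      NUM_USERS = 20    # number of users the MNIST data is split across
      NUM_LABELS = 2    # number of labels assigned to each user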

Produce experiments and figures

  • A single main file, "main.py", runs all experiments.

Using the same parameters

  • To produce the comparison experiments for pFedMe on the MNIST dataset:

    • Strongly Convex Case, run the commands below:
      
      python3 main.py --dataset Mnist --model mclr --batch_size 20 --learning_rate 0.005 --personal_learning_rate 0.1 --beta 1 --lamda 15 --num_global_iters 800 --local_epochs 20 --algorithm pFedMe --numusers 5 --times 10
      python3 main.py --dataset Mnist --model mclr --batch_size 20 --learning_rate 0.005 --num_global_iters 800 --local_epochs 20 --algorithm FedAvg --numusers 5  --times 10
      python3 main.py --dataset Mnist --model mclr --batch_size 20 --learning_rate 0.005 --beta 0.001  --num_global_iters 800 --local_epochs 20 --algorithm PerAvg --numusers 5  --times 10
      
  • Note that each algorithm should be run at least 10 times and the results averaged.

  • All training loss, test accuracy, and training accuracy values are stored as h5py files in the folder "results". Note that the personalized model and the global model of pFedMe are stored in two separate files with the formats DATASET_pFedMe_p_x_x_xu_xb_x_avg.h5 and DATASET_pFedMe_x_x_xu_xb_x_avg.h5 respectively (pFedMe denotes the global model, pFedMe_p the personalized model of pFedMe, and PerAvg_p the personalized model of PerAvg).
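
    For example, a stored result file can be inspected with h5py; the file name below is illustrative (use the actual name produced by your run):

      import h5py

      # List every dataset stored in one result file under "results".
      with h5py.File("./results/Mnist_pFedMe_p_0.005_15_5u_20b_20_avg.h5", "r") as f:
          for key in f.keys():
              print(key, f[key][:])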

  • To plot the figure for the convex case, set the parameters in main_plot.py to match those used in the previous experiments. Note that experiments with different parameters produce different results, so the configuration in the plot function must be adjusted for each specific case. For example, to plot the comparison in the convex case for the above experiments, set the following in main_plot.py:

    
      numusers = 5
      num_glob_iters = 800
      dataset = "Mnist"
      local_ep = [20,20,20,20]
      lamda = [15,15,15,15]
      learning_rate = [0.005, 0.005, 0.005, 0.005]
      beta =  [1.0, 1.0, 0.001, 1.0]
      batch_size = [20,20,20,20]
      K = [5,5,5,5]
      personal_learning_rate = [0.1,0.1,0.1,0.1]
      algorithms = [ "pFedMe_p","pFedMe","PerAvg_p","FedAvg"]
      plot_summary_one_figure_mnist_Compare(num_users=numusers, loc_ep1=local_ep, Numb_Glob_Iters=num_glob_iters, lamb=lamda,
                                 learning_rate=learning_rate, beta = beta, algorithms_list=algorithms, batch_size=batch_size, dataset=dataset, k = K, personal_learning_rate = personal_learning_rate)
      
    • NonConvex case:
      
      python3 main.py --dataset Mnist --model dnn --batch_size 20 --learning_rate 0.005 --personal_learning_rate 0.09 --beta 1 --lamda 15 --num_global_iters 800 --local_epochs 20 --algorithm pFedMe --numusers 5 --times 10
      python3 main.py --dataset Mnist --model dnn --batch_size 20 --learning_rate 0.005 --num_global_iters 800 --local_epochs 20 --algorithm FedAvg --numusers 5 --times 10
      python3 main.py --dataset Mnist --model dnn --batch_size 20 --learning_rate 0.005 --beta 0.001  --num_global_iters 800 --local_epochs 20 --algorithm PerAvg --numusers 5 --times 10
      
      To plot the figure for the non-convex case, proceed as in the convex case, changing the parameters in main_plot.py accordingly.
  • To produce the comparison experiments for pFedMe on the Synthetic dataset:

    • Strongly Convex Case:

      
      python3 main.py --dataset Synthetic --model mclr --batch_size 20 --learning_rate 0.005 --personal_learning_rate 0.01 --beta 1 --lamda 20 --num_global_iters 600 --local_epochs 20 --algorithm pFedMe --numusers 10 --times 10
      python3 main.py --dataset Synthetic --model mclr --batch_size 20 --learning_rate 0.005 --num_global_iters 600 --local_epochs 20 --algorithm FedAvg --numusers 10 --times 10
      python3 main.py --dataset Synthetic --model mclr --batch_size 20 --learning_rate 0.005 --beta 0.001  --num_global_iters 600 --local_epochs 20 --algorithm PerAvg --numusers 10 --times 10
      
    • NonConvex case:

      
      python3 main.py --dataset Synthetic --model dnn --batch_size 20 --learning_rate 0.005 --personal_learning_rate 0.01 --beta 1 --lamda 20 --num_global_iters 600 --local_epochs 20 --algorithm pFedMe --numusers 10 --times 10
      python3 main.py --dataset Synthetic --model dnn --batch_size 20 --learning_rate 0.005 --num_global_iters 600 --local_epochs 20 --algorithm FedAvg --numusers 10 --times 10
      python3 main.py --dataset Synthetic --model dnn --batch_size 20 --learning_rate 0.005 --beta 0.001  --num_global_iters 600 --local_epochs 20 --algorithm PerAvg --numusers 10 --times 10
      

Fine-tuned Parameters:

To produce the results in the table of fine-tuned parameters:

  • MNIST:

    • Strongly Convex Case:

      
      python3 main.py --dataset Mnist --model mclr --batch_size 20 --learning_rate 0.01 --personal_learning_rate 0.1 --beta 2 --lamda 15 --num_global_iters 800 --local_epochs 20 --algorithm pFedMe --numusers 5 --times 10
      python3 main.py --dataset Mnist --model mclr --batch_size 20 --learning_rate 0.02 --num_global_iters 800 --local_epochs 20 --algorithm FedAvg --numusers 5 --times 10
      python3 main.py --dataset Mnist --model mclr --batch_size 20 --learning_rate 0.03 --beta 0.003  --num_global_iters 800 --local_epochs 20 --algorithm PerAvg --numusers 5 --times 10
      
    • NonConvex Case:

      
      python3 main.py --dataset Mnist --model dnn --batch_size 20 --learning_rate 0.01 --personal_learning_rate 0.05 --beta 2 --lamda 30 --num_global_iters 800 --local_epochs 20 --algorithm pFedMe --numusers 5 --times 10
      python3 main.py --dataset Mnist --model dnn --batch_size 20 --learning_rate 0.02 --num_global_iters 800 --local_epochs 20 --algorithm FedAvg --numusers 5 --times 10
      python3 main.py --dataset Mnist --model dnn --batch_size 20 --learning_rate 0.02 --beta 0.001  --num_global_iters 800 --local_epochs 20 --algorithm PerAvg --numusers 5 --times 10
      
  • Synthetic:

    • Strongly Convex Case:

      
      python3 main.py --dataset Synthetic --model mclr --batch_size 20 --learning_rate 0.01 --personal_learning_rate 0.01 --beta 2 --lamda 20 --num_global_iters 600 --local_epochs 20 --algorithm pFedMe --numusers 10 --times 10
      python3 main.py --dataset Synthetic --model mclr --batch_size 20 --learning_rate 0.02 --num_global_iters 600 --local_epochs 20 --algorithm FedAvg --numusers 10 --times 10
      python3 main.py --dataset Synthetic --model mclr --batch_size 20 --learning_rate 0.02 --beta 0.002  --num_global_iters 600 --local_epochs 20 --algorithm PerAvg --numusers 10 --times 10
      
    • NonConvex Case:

      
      python3 main.py --dataset Synthetic --model dnn --batch_size 20 --learning_rate 0.01 --personal_learning_rate 0.01 --beta 2 --lamda 30 --num_global_iters 600 --local_epochs 20 --algorithm pFedMe --numusers 10 --times 10
      python3 main.py --dataset Synthetic --model dnn --batch_size 20 --learning_rate 0.03 --num_global_iters 600 --local_epochs 20 --algorithm FedAvg --numusers 10 --times 10
      python3 main.py --dataset Synthetic --model dnn --batch_size 20 --learning_rate 0.01 --beta 0.001  --num_global_iters 600 --local_epochs 20 --algorithm PerAvg --numusers 10 --times 10
      

Effect of hyper-parameters:

For all figures on the effect of hyper-parameters, we use the MNIST dataset and fix learning_rate == 0.005 and personal_learning_rate == 0.09 for all experiments. Other parameters are varied according to the experiment. Only in the experiments on the effect of $\beta$, for $\beta = 4$, do we use learning_rate == 0.003 to stabilize the algorithm.
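
For instance, a single run studying the effect of $\beta$ with $\beta = 4$ could look like the command below (illustrative: --lamda 15 and the remaining flags follow the MNIST non-convex settings above, with the learning rate lowered to 0.003 as noted):

python3 main.py --dataset Mnist --model dnn --batch_size 20 --learning_rate 0.003 --personal_learning_rate 0.09 --beta 4 --lamda 15 --num_global_iters 800 --local_epochs 20 --algorithm pFedMe --numusers 5 --times 10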

CIFAR-10 dataset:

The implementation for CIFAR-10 is finished; however, we have not fine-tuned the parameters for all algorithms on CIFAR-10. Below is the command to run pFedMe on CIFAR-10.


python3 main.py --dataset Cifar10 --model cnn --batch_size 20 --learning_rate 0.01 --personal_learning_rate 0.01 --beta 1 --lamda 15 --num_global_iters 800 --local_epochs 20 --algorithm pFedMe --numusers 5 

Comments
  • Is Per-FedAvg implemented properly?

    In the code, when training Per-FedAvg, there are two steps, and each step sample a batch of data and perform parameter update. But in the MAML framework, I think the first step is to obtain a fast weight, and the second step is to update the parameters based on the fast weight of the first step. So why do you update the parameters two times? Are the fundamental differences between Per-FedAvg and FedAvg lie in that the former performs two steps update and the latter performs a one-step update? Is this fair for FedAvg?
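
    For readers following this discussion, a minimal sketch of the two-step, first-order MAML-style update being described; per_fedavg_step, alpha, beta and the batch arguments are illustrative placeholders, not the repository's API:

        import copy
        import torch

        def per_fedavg_step(model, loss_fn, batch1, batch2, alpha, beta):
            # Step 1: temporary ("fast weight") gradient step on the first batch.
            X1, y1 = batch1
            fast = copy.deepcopy(model)
            grads = torch.autograd.grad(loss_fn(fast(X1), y1), list(fast.parameters()))
            with torch.no_grad():
                for p, g in zip(fast.parameters(), grads):
                    p -= alpha * g

            # Step 2: gradient on a second batch at the fast weights,
            # applied to the original parameters (first-order approximation).
            X2, y2 = batch2
            grads = torch.autograd.grad(loss_fn(fast(X2), y2), list(fast.parameters()))
            with torch.no_grad():
                for p, g in zip(model.parameters(), grads):
                    p -= beta * g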

    opened by chuanting 11
  • pFedMe Optimizer Problem

    Hello, I want to ask:
    Why do the code and the algorithm differ? In fedoptimizer.py line 64 the update is p.data = p.data - group['lr'] * (p.grad.data + group['lamda'] * (p.data - localweight.data) + group['mu']*p.data), whereas Algorithm 1 line 8 in the paper is different.
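
    For context, the inner loop of pFedMe approximately solves the Moreau-envelope subproblem from the paper, whose gradient step is shown below; the group['mu']*p.data term in the code is an extra regularization term that does not appear in this formula:

        \hat{\theta}_i(w) = \arg\min_{\theta} \Big\{ f_i(\theta) + \frac{\lambda}{2}\,\lVert \theta - w \rVert^2 \Big\},
        \qquad
        \theta \leftarrow \theta - \eta\,\big( \nabla f_i(\theta) + \lambda\,(\theta - w) \big)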

    opened by adam4096 6
  • In the Per-FedAvg experiments, there is always an unignorable gap between the accuracy of the actual experimental results and the expected accuracy provided under the same conditions.

    For example:

        ACC (Per-FedAvg)   MNIST     Synthetic
        MLR                92.96%    81.04%
        DNN                93.01%    76.79%

    Does this algorithm require special settings in actual experiments? Finally, sincerely thank you for your work.

    opened by Drizzlingg 5
  • Unable to generate non-iid MNIST Data

    Describe the bug: while generating the non-iid MNIST data, generate_niid_20users.py runs into an error.

    To Reproduce Steps to reproduce the behavior:

    1. Go to 'data/Mnist'
    2. Run python generate_niid_20users.py

    Trace

    100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:00<00:00, 38.74it/s]
    
    Numb samples of each label:
     [6903, 7877, 6990, 7141, 6824, 6313, 6876, 7293, 6825, 6958]
    idx 0        False
    1        False
    2        False
    3        False
    4         True
             ...  
    69995    False
    69996    False
    69997    False
    69998    False
    69999    False
    Name: class, Length: 70000, dtype: bool
    100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:00<00:00, 135300.13it/s]
    --------------
    [0 1 2 3 4 5 6 7 8 9] [4 4 4 4 4 4 4 4 4 4]
    6903
    [2441, 1127, 1575, 1760]
    7877
    [2671, 1946, 1367, 1893]
    6990
    [2358, 1070, 841, 2721]
    7141
    [2630, 2202, 715, 1594]
    6824
    [1721, 1934, 1101, 2068]
    6313
    [2169, 1080, 1102, 1962]
    6876
    [2043, 1364, 1255, 2214]
    7293
    [2211, 2518, 598, 1966]
    6825
    [1506, 2480, 574, 2265]
    6958
    [1710, 1878, 1208, 2162]
    --------------
    [[2441, 1127, 1575, 1760], [2671, 1946, 1367, 1893], [2358, 1070, 841, 2721], [2630, 2202, 715, 1594], [1721, 1934, 1101, 2068], [2169, 1080, 1102, 1962], [2043, 1364, 1255, 2214], [2211, 2518, 598, 1966], [1506, 2480, 574, 2265], [1710, 1878, 1208, 2162]]
    [2441, 1127, 1575, 1760]
    [2671, 1946, 1367, 1893]
    [2358, 1070, 841, 2721]
    [2630, 2202, 715, 1594]
    [1721, 1934, 1101, 2068]
    [2169, 1080, 1102, 1962]
    [2043, 1364, 1255, 2214]
    [2211, 2518, 598, 1966]
    [1506, 2480, 574, 2265]
    [1710, 1878, 1208, 2162]
    [2441, 1127, 1575, 1760]
    [2671, 1946, 1367, 1893]
    [2358, 1070, 841, 2721]
    [2630, 2202, 715, 1594]
    [1721, 1934, 1101, 2068]
    [2169, 1080, 1102, 1962]
    [2043, 1364, 1255, 2214]
    [2211, 2518, 598, 1966]
    [1506, 2480, 574, 2265]
    [1710, 1878, 1208, 2162]
    [2441, 1127, 1575, 1760]
    [2671, 1946, 1367, 1893]
    [2358, 1070, 841, 2721]
    [2630, 2202, 715, 1594]
    [1721, 1934, 1101, 2068]
    [2169, 1080, 1102, 1962]
    [2043, 1364, 1255, 2214]
    [2211, 2518, 598, 1966]
    [1506, 2480, 574, 2265]
    [1710, 1878, 1208, 2162]
    [2441, 1127, 1575, 1760]
    [2671, 1946, 1367, 1893]
    [2358, 1070, 841, 2721]
    [2630, 2202, 715, 1594]
    [1721, 1934, 1101, 2068]
    [2169, 1080, 1102, 1962]
    [2043, 1364, 1255, 2214]
    [2211, 2518, 598, 1966]
    [1506, 2480, 574, 2265]
    [1710, 1878, 1208, 2162]
    --------------
    [2441, 2671, 2358, 2630, 1721, 2169, 2043, 2211, 1506, 1710, 1127, 1946, 1070, 2202, 1934, 1080, 1364, 2518, 2480, 1878, 1575, 1367, 841, 715, 1101, 1102, 1255, 598, 574, 1208, 1760, 1893, 2721, 1594, 2068, 1962, 2214, 1966, 2265, 2162]
      0%|                                                                                                                                           | 0/20 [00:00<?, ?it/s]value of L 0
    value of count 0
      0%|                                                                                                                                           | 0/20 [00:00<?, ?it/s]
    Traceback (most recent call last):
      File "generate_niid_20users.py", line 86, in <module>
        X[user] += mnist_data[l][idx[l]:num_samples].tolist()
      File "/Users/sharadchitlangia/miniconda3/envs/FL/lib/python3.6/site-packages/pandas/core/frame.py", line 2881, in __getitem__
        indexer = convert_to_index_sliceable(self, key)
      File "/Users/sharadchitlangia/miniconda3/envs/FL/lib/python3.6/site-packages/pandas/core/indexing.py", line 2132, in convert_to_index_sliceable
        return idx._convert_slice_indexer(key, kind="getitem")
      File "/Users/sharadchitlangia/miniconda3/envs/FL/lib/python3.6/site-packages/pandas/core/indexes/base.py", line 3159, in _convert_slice_indexer
        self._validate_indexer("slice", key.start, "getitem")
      File "/Users/sharadchitlangia/miniconda3/envs/FL/lib/python3.6/site-packages/pandas/core/indexes/base.py", line 5000, in _validate_indexer
        self._invalid_indexer(form, key)
      File "/Users/sharadchitlangia/miniconda3/envs/FL/lib/python3.6/site-packages/pandas/core/indexes/base.py", line 3271, in _invalid_indexer
        f"cannot do {form} indexing on {type(self).__name__} with these "
    TypeError: cannot do slice indexing on Int64Index with these indexers [False] of type bool_
    
    opened by Sharad24 4
  • UserpFedMe class

    Hi, thanks for sharing your code. I have a few questions.

    1. The first is about the model update inside the UserpFedMe class. Specifically, I don't quite understand this part: https://github.com/CharlieDinh/pFedMe/blob/96863e05e799fb6ad23248c63824b0381d2bec11/FLAlgorithms/users/userpFedMe.py#L58. What this line does is update the personalized parameters (self.model.parameters()) to the final updated parameters of self.local_model. Can you please explain this further? Maybe I have missed something, but self.model.parameters() is already updated inside the inner optimization, as done here: https://github.com/CharlieDinh/pFedMe/blob/96863e05e799fb6ad23248c63824b0381d2bec11/FLAlgorithms/optimizers/fedoptimizer.py#L64, and I don't understand why we need to set it back to the final local weight.
    2. I don't understand this: https://github.com/CharlieDinh/pFedMe/blob/96863e05e799fb6ad23248c63824b0381d2bec11/FLAlgorithms/users/userbase.py#L39. It makes sense to update local_param, but I don't quite get why we need to update old_param. Please correct me if I am wrong, but the only reason I can think of is that we are evaluating on the final aggregated model.
    3. Here https://github.com/CharlieDinh/pFedMe/blob/96863e05e799fb6ad23248c63824b0381d2bec11/FLAlgorithms/servers/serverpFedMe.py#L54 we train on all users, not only the selected users. Why is that? Thanks very much.

    opened by mehdimashayekhi 4
  • A question about train_one_step() method.

    https://github.com/CharlieDinh/pFedMe/blob/96863e05e799fb6ad23248c63824b0381d2bec11/FLAlgorithms/users/userperavg.py#L66 Hi CharlieDinh, I'm sorry to take up your time. Can you tell me why self.get_next_test_batch() is used for the update in the first step and self.get_next_train_batch() is used for the update in the second step? Can I do both updates with self.get_next_train_batch()?

    opened by sshpark 3
  • Client's train method

    Hi, I have a question about the train method in the clients. Normally, in each epoch the complete dataset is trained using batches, but I have seen in your code that in each epoch, only a single batch is trained. Is this correct? Best regards.

    opened by josebummer 2
  • Why does FedAvg only train on 1 batch in each local epoch?

    https://github.com/CharlieDinh/pFedMe/blob/5060b3415b57a0d1f556cb119ffacfa37f3fc4ba/FLAlgorithms/users/useravg.py#L32

    This disagrees with the original FederatedAveraging algorithm in (https://arxiv.org/pdf/1602.05629.pdf), where the local model should be trained on all batches in each local epoch.

    pFedMe and peravg also have the same behavior. Is there any reason for training with only 1 batch of data in each local epoch? Thanks!
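
    For comparison, a minimal sketch of the standard FederatedAveraging local update, which makes a full pass over all batches in every local epoch (model, loader, loss_fn and lr are illustrative names, not this repository's API):

        import torch

        def local_update(model, loader, loss_fn, lr, local_epochs):
            # One SGD step per batch, with a full pass over the client's
            # data in each local epoch, as in the original FedAvg paper.
            optimizer = torch.optim.SGD(model.parameters(), lr=lr)
            for _ in range(local_epochs):
                for X, y in loader:
                    optimizer.zero_grad()
                    loss_fn(model(X), y).backward()
                    optimizer.step()
            return model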

    opened by mengcz13 2
  • Some questions about your paper and your code

    Hi, I am very interested in your work. I have a few questions.

    1. Does the model's expressive power have a great influence on pFedMe? You propose a personalized FL method so that clients with different data statistics can train personalized models. pFedMe sends the same parameters to each selected client at the beginning of each glob_iteration, and each client starts training its local model from this w. If the model has enough expressive power to fit most of the training data across many clients, will these clients still train personalized models? Will pFedMe still outperform FedAvg?
    2. About the code: you create the variables local_model, persionalized_model, and persionalized_model_bar for personalized FL, but it seems that persionalized_model is never used and persionalized_model_bar is just a copy of local_model. Is there anything I missed?
    opened by ty4b112 2
  • Question about pFedMeOptimizer.

    opened by siddharthdivi 2
  • modify niid

    Hello, Charlie! Thanks for the fantastic paper and code. I am trying to generate some niid data with different properties. When running data/Mnist/generate_niid_20users.py, I ran into several errors. There seem to be some issues with the name "idx". I redefined it and modified the index of mnist_data[l]. Also, the implementation of the distribution of labels to users seems to be slightly incorrect, leading to an error later.

    opened by DarlingHang 1
  • About the Hessian Approximation

    Dear authors, I have read the two implementations of pFedMe and Per-FedAvg. One problem for me is that both implementations are missing the Hessian approximation used in the Per-FedAvg paper's meta-update phase. Is this critical in the Per-FedAvg and pFedMe settings?

    opened by skydvn 0
  • A question in PerAvg algorithm

    Thank you for your code. I have a question about the code of the PerAvg algorithm. When using the evaluate_one_step function (in serverperavg.py) to evaluate the performance of PerAvg, the function first executes for c in self.users: c.train_one_step() to train the personalized models for one step. However, in the function train_one_step, it seems that testing data is used to update the personalized model. Is that right? Source code:

    ```
    def train_one_step(self):
        self.model.train()
        # step 1
        X, y = self.get_next_test_batch()
        self.optimizer.zero_grad()
        output = self.model(X)
        loss = self.loss(output, y)
        loss.backward()
        self.optimizer.step()
        # step 2
        X, y = self.get_next_test_batch()
        self.optimizer.zero_grad()
        output = self.model(X)
        loss = self.loss(output, y)
        loss.backward()
        self.optimizer.step(beta=self.beta)
    ```

    Looking forward to your reply! Thank you!
    opened by BrightHaozi 1
  • Some mistakes in generating niid mnist data

    Thanks to the author for modifying some old errors in the file two months ago, but there are still some errors that need attention.

    1. Line 39: "l = (user * NUM_USERS + j) % 10" should be changed to "l = (user * NUM_LABELS + j) % 10". The former causes data allocation errors, with all users assigned data of the same labels.
    2. Line 81: the code that computes "l" should be the same on Line 39 and Line 81.
    3. Line 86: in "if idx[l] + num_samples < len(mnist_data[l]):" the "<" should be changed to "<=". The former causes the last part of each label's data to not be assigned to any user. (This problem appears because the author fixed an older error on line 87, changing "mnist_data[l][idx[l]:num_samples]" to "mnist_data[l][idx[l]:idx[l]+num_samples]".) See the sketch below.
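
    A minimal sketch of the corrected logic described in the three points above (variable names follow the issue; the surrounding script may differ):

        # Points 1-2: advance the label by NUM_LABELS per user, not NUM_USERS.
        l = (user * NUM_LABELS + j) % 10

        # Point 3: "<=" keeps the last chunk of each label assignable,
        # and the slice ends at idx[l] + num_samples.
        if idx[l] + num_samples <= len(mnist_data[l]):
            X[user] += mnist_data[l][idx[l]:idx[l] + num_samples].tolist()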
    opened by tjuxiaofeng 5