Federated Learning Based on Dynamic Regularization

Overview

This is an implementation of the paper Federated Learning Based on Dynamic Regularization (FedDyn), published at ICLR 2021.

Requirements

Please install the required packages. The code uses Python 3.7; install the dependencies in a virtual environment via

pip install -r requirements.txt

Instructions

Example code for running FedDyn and the baseline methods (FedAvg, FedProx, and SCAFFOLD) on the synthetic dataset and CIFAR-10 is given in example_code_synthetic.py and example_code_cifar10.py.

Generate IID and Dirichlet distributions on various datasets:

CIFAR-10 IID, 100 partitions, balanced data

data_obj = DatasetObject(dataset='CIFAR10', n_client=100, rule='iid', unbalanced_sgm=0)

CIFAR-10 Dirichlet (0.6), 100 partitions, balanced data

data_obj = DatasetObject(dataset='CIFAR10', n_client=100, unbalanced_sgm=0, rule='Dirichlet', rule_arg=0.6)
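
An unbalanced split can presumably be obtained by setting unbalanced_sgm > 0 (judging by the parameter name, the standard deviation of the lognormal distribution from which per-client sample counts are drawn), for example:

CIFAR-10 IID, 100 partitions, unbalanced data

data_obj = DatasetObject(dataset='CIFAR10', n_client=100, rule='iid', unbalanced_sgm=0.3)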

The EMNIST dataset needs to be downloaded from this link.
The Shakespeare dataset is generated using LEAF.

The example scripts construct the federated datasets, train each method, and plot the convergence curves.
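
For reference, here is a minimal, self-contained PyTorch sketch of one FedDyn round. It follows the update rules of the paper rather than this repository's API; all names (flat_params, client_update, server_round, grad_state) are illustrative.

import torch

def flat_params(model):
    # Flatten all parameters into one vector (keeps the autograd graph).
    return torch.cat([p.reshape(-1) for p in model.parameters()])

def client_update(model, theta_server, grad_state, loader, alpha, lr, epochs, loss_fn):
    # Local FedDyn objective:
    #   L_k(theta) - <grad_state, theta> + (alpha / 2) * ||theta - theta_server||^2
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            theta = flat_params(model)
            loss = loss_fn(model(x), y)
            loss = loss - torch.sum(theta * grad_state)                         # dynamic linear term
            loss = loss + 0.5 * alpha * torch.sum((theta - theta_server) ** 2)  # proximal term
            loss.backward()
            opt.step()
    theta = flat_params(model).detach()
    # First-order condition of the local objective:
    #   grad_state <- grad_state - alpha * (theta_k - theta_server)
    return theta, grad_state - alpha * (theta - theta_server)

def server_round(theta_server, h, client_thetas, alpha, n_client):
    # h^t = h^{t-1} - (alpha / n_client) * sum_k (theta_k^t - theta^{t-1});
    # note the division by the total number of clients, not only the active ones.
    h = h - (alpha / n_client) * sum(th - theta_server for th in client_thetas)
    # theta^t = mean over active clients of theta_k^t, minus h^t / alpha.
    return torch.stack(client_thetas).mean(dim=0) - h / alpha, h

In the paper, h and each client's grad_state start at zero, and the flat theta^t is copied back into the model parameters before the next round.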

Citation

@inproceedings{acar2021federated,
  title     = {Federated Learning Based on Dynamic Regularization},
  author    = {Durmus Alp Emre Acar and Yue Zhao and Ramon Matas and Matthew Mattina and Paul Whatmough and Venkatesh Saligrama},
  booktitle = {International Conference on Learning Representations},
  year      = {2021},
  url       = {https://openreview.net/forum?id=B7v4QMR6Z9w}
}
Comments
  • loss for feddyn

    The current way the FedDyn loss is calculated (in utils_general):

    loss_algo = alpha_coef * torch.sum(local_par_list * (-avg_mdl_param + local_grad_vector))
    

    I think what you wanted to do (unless I'm missing something) is:

    loss_algo = alpha_coef * torch.sum(local_par_list * (0.5*local_par_list - avg_mdl_param + local_grad_vector))
    

    Maybe it is much clearer (matching the paper's equation) to write something like:

    loss_algo = alpha_coef * torch.sum(local_par_list * local_grad_vector) + 0.5 * alpha_coef * torch.sum((local_par_list - avg_mdl_param)**2)
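
    (For completeness: expanding the paper's quadratic penalty, with avg_mdl_param playing the role of $\theta^{t-1}$, gives $\frac{\alpha}{2}\|\theta - \theta^{t-1}\|^2 = \frac{\alpha}{2}\|\theta\|^2 - \alpha \langle \theta, \theta^{t-1} \rangle + \frac{\alpha}{2}\|\theta^{t-1}\|^2$, so the two proposed forms differ only by the constant $\frac{\alpha}{2}\|\theta^{t-1}\|^2$, which has zero gradient with respect to $\theta$ and yields the same update.)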
    

    Feel free to correct me if I'm wrong.

    opened by calvin89029 6
  • About the server state h?

    Thanks for your awesome work. I'm trying to re-implement your algorithm, but when I read the source code I cannot find where the server state $h^t$ is.

    If I understand correctly, in this line https://github.com/alpemreacar/FedDyn/blob/48a19fac440ef079ce563da8e0c2896f8256fef9/utils_methods.py#L389, local_param_list_curr is the local gradient state $\nabla L_k (\theta_k^t)$ and cld_mdl_param_tensor is the global model parameter $\theta^{t-1}$. In this line https://github.com/alpemreacar/FedDyn/blob/48a19fac440ef079ce563da8e0c2896f8256fef9/utils_methods.py#L397, cld_mdl_param is the new global model parameter $\theta^t$, and it seems that np.mean(local_param_list, axis=0) is $-\frac{1}{\alpha} h^t$.

    Thus, the code computes $h^t = -\frac{\alpha}{m} \sum_{k \in \left[ m \right]} \nabla L_k (\theta_k)$, where I drop the superscript $t$ because the summation runs over all clients, and with random client selection we cannot know the timestamp of each $\nabla L_k (\theta_k^t)$.

    So the actual $h^t$ here is not strictly calculated as $h^t = h^{t-1} - \frac{\alpha}{m} \sum_{k\in \mathcal{P}_t} (\theta_k^t - \theta^{t-1})$.
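
    For comparison, a literal implementation of the paper's server step might look like this (a sketch with illustrative names, assuming flattened numpy arrays):

    import numpy as np

    def server_update(h, theta_prev, client_thetas, alpha, n_client):
        # h^t = h^{t-1} - (alpha / n_client) * sum_k (theta_k^t - theta^{t-1})
        h = h - (alpha / n_client) * np.sum([th - theta_prev for th in client_thetas], axis=0)
        # theta^t = mean_k(theta_k^t) - h^t / alpha
        return np.mean(client_thetas, axis=0) - h / alpha, h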

    opened by wizard1203 1
  • a minor mistake in calculation of the loss of FedProx

    The current way the FedProx loss is calculated (in utils_general):

    loss_algo = mu/2 * torch.sum(local_par_list * local_par_list)
    loss_algo = -mu * torch.sum(local_par_list * avg_model_param_)
    

    I think what you wanted to do (unless I'm missing something) is:

    loss_algo = mu/2 * torch.sum(local_par_list * local_par_list)
    loss_algo = loss_algo - mu * torch.sum(local_par_list * avg_model_param_)
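
    Incidentally, the two terms together equal mu/2 * ||theta||^2 - mu * <theta, theta_g>, which is mu/2 * ||theta - theta_g||^2 up to a constant in theta, so an equivalent (up to that constant) one-liner would be:

    loss_algo = mu/2 * torch.sum((local_par_list - avg_model_param_)**2)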
    

    Cheers, F. Varno

    opened by fvarno 1