[ICLR'21] FedBN: Federated Learning on Non-IID Features via Local Batch Normalization


This is the PyTorch implementation of our paper FedBN: Federated Learning on Non-IID Features via Local Batch Normalization by Xiaoxiao Li, Meirui Jiang, Xiaofei Zhang, Michael Kamp, and Qi Dou.

Abstract

The emerging paradigm of federated learning (FL) strives to enable collaborative training of deep models on the network edge without centrally aggregating raw data, and hence improves data privacy. In most cases, the assumption of independent and identically distributed samples across local clients does not hold for federated learning setups. Under this setting, neural network training performance may vary significantly according to the data distribution and can even hurt training convergence. Most of the previous work has focused on differences in the distribution of labels. Unlike those settings, we address an important problem of FL, arising for example from different scanners/sensors in medical imaging or different scenery distributions in autonomous driving (highway vs. city), where local clients may store examples with different marginal or conditional feature distributions compared to other nodes, which we denote as feature-shift non-iid. In this work, we propose an effective method that uses local batch normalization to alleviate the feature shift before averaging models. The resulting scheme, called FedBN, outperforms both classical FedAvg and the state-of-the-art method for non-iid data (FedProx) in our extensive experiments. These empirical results are supported by a convergence analysis showing, in a simplified setting, that FedBN has a faster convergence rate in expectation than FedAvg.
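
The key idea of FedBN is that, during server-side aggregation, BatchNorm parameters and running statistics are kept local to each client and only the remaining parameters are averaged. Below is a minimal PyTorch sketch of such an aggregation step; it is illustrative only (it assumes BN layers are named with 'bn') and is not the exact code in this repository.

import torch

def fedbn_average(server_model, client_models, client_weights):
    # Average client parameters into the server model, skipping BatchNorm
    # entries so that every client keeps its own BN parameters/statistics.
    with torch.no_grad():
        server_state = server_model.state_dict()
        client_states = [m.state_dict() for m in client_models]
        for key in server_state:
            if 'bn' in key:
                continue  # BN stays local: do not aggregate
            stacked = torch.stack([w * s[key].float()
                                   for w, s in zip(client_weights, client_states)])
            server_state[key] = stacked.sum(dim=0).to(server_state[key].dtype)
        server_model.load_state_dict(server_state)
        # Broadcast the averaged non-BN parameters back to every client.
        for model, state in zip(client_models, client_states):
            for key in server_state:
                if 'bn' not in key:
                    state[key] = server_state[key].clone()
            model.load_state_dict(state)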


Usage

Setup

pip

See the requirements.txt for environment configuration.

pip install -r requirements.txt

conda

We recommend using conda to quickly set up the environment. Please use the following commands.

conda env create -f environment.yaml
conda activate fedbn

Dataset & Pretrained Model

Benchmark (Digits)

  • Please download our pre-processed datasets here, put them under the data/ directory, and run the following commands:
    cd ./data
    unzip digit_dataset.zip
  • Please download our pretrained model here, put it under the snapshots/ directory, and run the following commands:
    cd ./snapshots
    unzip digit_model.zip

Office-Caltech10

  • Please download our pre-processed datasets here, put them under the data/ directory, and run the following commands:
    cd ./data
    unzip office_caltech_10_dataset.zip
  • Please download our pretrained model here, put it under the snapshots/ directory, and run the following commands:
    cd ./snapshots
    unzip office_caltech_10_model.zip

DomainNet

  • Please first download our data split here, put it under the data/ directory, and run the following commands:
    cd ./data
    unzip domainnet_dataset.zip
  • Then download the dataset domains (Clipart, Infograph, Painting, Quickdraw, Real, Sketch), put them under the data/DomainNet directory, and unzip them:
    cd ./data/DomainNet
    unzip [filename].zip
  • Please download our pretrained model here, put it under the snapshots/ directory, and run the following commands:
    cd ./snapshots
    unzip domainnet_model.zip

Train

Federated Learning

Please use the following commands to train a model with a federated learning strategy.

  • --mode specifies the federated learning strategy; options: fedavg | fedprox | fedbn (a sketch of the FedProx proximal term is shown after the commands below)
cd federated
# benchmark experiment
python fed_digits.py --mode fedbn

# office-caltech-10 experiment
python fed_office.py --mode fedbn

# DomainNet experiment
python fed_domainnet.py --mode fedbn
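
For context on the --mode options above: fedavg averages all model parameters across clients, fedbn keeps BatchNorm layers local (see the sketch in the overview), and fedprox adds a proximal term mu/2 * ||w - w_global||^2 to each client's local objective. A minimal sketch of that penalty is shown below; the mu value is illustrative, not necessarily the repository's default.

import torch

def fedprox_penalty(local_model, global_model, mu=1e-2):
    # Proximal term mu/2 * ||w_local - w_global||^2 that FedProx adds to the
    # task loss during each client's local training.
    penalty = 0.0
    for w_local, w_global in zip(local_model.parameters(),
                                 global_model.parameters()):
        penalty = penalty + ((w_local - w_global.detach()) ** 2).sum()
    return 0.5 * mu * penalty

# Inside a client's local training loop (hypothetical variable names):
# loss = criterion(local_model(x), y) + fedprox_penalty(local_model, global_model)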

SingleSet

Please use the following commands to train a model on a single dataset.

  • --data specifies the single dataset
cd singleset 
# benchmark experiment, --data option: svhn | usps | synth | mnistm | mnist
python single_digits.py --data svhn

# office-caltech-10 experiment --data option: amazon | caltech | dslr | webcam
python single_office.py --data amazon

# DomainNet experiment --data option: clipart | infograph | painting | quickdraw | real | sketch
python single_domainnet.py --data clipart

Test

cd federated
# benchmark experiment
python fed_digits.py --mode fedbn --test

# office-caltech-10 experiment
python fed_office.py --mode fedbn --test

# DomainNet experiment
python fed_domainnet.py --mode fedbn --test

Citation

If you find the code and dataset useful, please cite our paper.

@inproceedings{li2021fedbn,
  title={Fed{BN}: Federated Learning on Non-{IID} Features via Local Batch Normalization},
  author={Xiaoxiao Li and Meirui Jiang and Xiaofei Zhang and Michael Kamp and Qi Dou},
  booktitle={International Conference on Learning Representations},
  year={2021},
  url={https://openreview.net/forum?id=6YEQUn0QICG}
}
Comments
  • digits experiment (precision)

    Hi, I came across your work. It is an interesting idea, but I ran into some problems when running the experiments.

    When I clone your code and run the command below, I cannot obtain the accuracy reported in the paper.

    # benchmark experiment
    python fed_digits.py --mode fedbn --iters 300
    

    This is my best run:

    ============ Train epoch 208 ============
     MNIST      | Train Loss: 0.0003 | Train Acc: 1.0000
     SVHN       | Train Loss: 0.0010 | Train Acc: 1.0000
     USPS       | Train Loss: 0.0004 | Train Acc: 1.0000
     SynthDigits| Train Loss: 0.0007 | Train Acc: 1.0000
     MNIST-M    | Train Loss: 0.0007 | Train Acc: 1.0000
     MNIST      | Test  Loss: 0.0985 | Test  Acc: 0.9689
     SVHN       | Test  Loss: 0.9124 | Test  Acc: 0.7232
     USPS       | Test  Loss: 0.1033 | Test  Acc: 0.9683
     SynthDigits| Test  Loss: 0.5342 | Test  Acc: 0.8359
     MNIST-M    | Test  Loss: 0.6474 | Test  Acc: 0.7904
    

    Did I miss any details?

    My platform

    Ubuntu 20.04
    GeForce RTX 2080 Ti
    Intel(R) Xeon(R) Silver 4214 CPU @ 2.20GHz
    

    Thanks for taking the time to help!

    opened by wnma3mz 2
  • About the experiments

    Hi, I have read your paper and code and found it to be an interesting work!

    I have a question: why is the performance of the server model not tested in the experiments?
    If only the performance of the client models is needed, it seems it would be better to train the models locally?

    opened by xutting123 1
  • Running python ./utils/data_preprocess.py raises requests.exceptions.SSLError

    When I run python ./utils/data_preprocess.py, an error occurs: requests.exceptions.SSLError: HTTPSConnectionPool(host='www.baidu.com', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:997)')))

    How to solve this error?

    opened by aidenz0 0
  • The train/val split in domainnet

    We find that the train sets on all domains are truncated to the same size ('min_data_len'), and we think there is a mistake: https://github.com/med-air/FedBN/blob/1caf83cf95e6859b712e1e80c8264009a5067162/singleset/single_domainnet.py#L120

    opened by FrankZhangRp 1
  • can't install requirements.txt

    Hello, when I ran pip install -r requirements.txt an error showed up and I couldn't proceed any further (here is the screenshot). What could I do?

    opened by b856741 3