Breaching - Breaching privacy in federated learning scenarios for vision and text

Overview

Breaching - A Framework for Attacks against Privacy in Federated Learning

This PyTorch framework implements a number of gradient inversion attacks that breach privacy in federated learning scenarios, covering examples with small and large aggregation sizes in both the vision and text domains.

This includes implementations of recent work as well as a range of other attacks, from optimization-based attacks (such as "Inverting Gradients" and "See through Gradients") to recent analytic and recursive attacks. Jupyter notebook examples for these attacks can be found in the examples/ folder.

Overview:

This repository implements two main components: a list of modular attacks under breaching.attacks and a list of relevant use cases (including server threat model, user setup, model architecture and dataset) under breaching.cases. All attacks and scenarios are highly modular and can be customized and extended through the configuration at breaching/config.

Installation

Either download this repository (including notebooks and examples) directly using git clone or install the python package via pip install breaching for easy access to key functionality.

Because this framework covers several use cases across vision and language, it accumulates a kitchen sink of dependencies. The full list of dependencies can be found in environment.yml (and installed with conda by calling conda env create --file environment.yml), but not all of these dependencies are installed by default. Install them as necessary (for example, install the huggingface packages only if you are interested in language applications).

You can verify your installation by running python simulate_breach.py dryrun=True. This tests the simplest reconstruction setting with a single iteration.

Usage

You can load any use case by

cfg_case = breaching.get_case_config(case="1_single_imagenet")
user, server, model, loss = breaching.cases.construct_case(cfg_case)

and load any attack by

cfg_attack = breaching.get_attack_config(attack="invertinggradients")
attacker = breaching.attacks.prepare_attack(model, loss, cfg_attack)

This is a good point to print an overview of the loaded threat model and setting; you may want to change some settings before continuing:

breaching.utils.overview(server, user, attacker)

To evaluate the attack, you can then simulate an FL exchange:

shared_user_data, payloads, true_user_data = server.run_protocol(user)

And then run the attack (which consumes only the user update and the server state):

reconstructed_user_data, stats = attacker.reconstruct(payloads, shared_user_data)

For more details, have a look at the notebooks in the examples/ folder, the cmd-line script simulate_breach.py or the minimal examples in minimal_example.py and minimal_example_robbing_the_fed.py.
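
Putting these pieces together, a minimal end-to-end run looks roughly like this (a sketch built only from the calls shown above, using the default configurations):

import breaching

# load a use case and an attack from the bundled configurations
cfg_case = breaching.get_case_config(case="1_single_imagenet")
user, server, model, loss = breaching.cases.construct_case(cfg_case)
cfg_attack = breaching.get_attack_config(attack="invertinggradients")
attacker = breaching.attacks.prepare_attack(model, loss, cfg_attack)

# print a summary of the threat model and setting
breaching.utils.overview(server, user, attacker)

# simulate one FL exchange and attack the shared update
shared_user_data, payloads, true_user_data = server.run_protocol(user)
reconstructed_user_data, stats = attacker.reconstruct(payloads, shared_user_data)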

What is this framework?

This framework is a modular collection of attacks against federated learning that breach privacy by recovering user data from the updates sent to a central server. The framework covers gradient updates as well as updates from multiple local training steps, and evaluates attacks on datasets and models from both the vision and language domains. Requirements and variations in the threat model for each attack (such as the existence of labels or the number of data points) are made explicit. Modern initializations and label recovery strategies are also included.

We especially focus on clarifying the threat model of each attack and constraining the attacker to act only on the shared_user_data objects generated by the user. All attacks are as use-case-agnostic as possible, operating only on these limited transmissions of data, and implementing a new attack should require no knowledge of any use case. Likewise, implementing a new use case is entirely separate from the attack portion. Everything is highly configurable through hydra configuration syntax.
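
For example, assuming the case and attack names shown above are also valid hydra overrides for the command-line script (the exact key names are an assumption and can be checked in breaching/config), a run could be configured directly from the command line:

python simulate_breach.py case=1_single_imagenet attack=invertinggradients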

What does this framework not do?

This framework focuses only on attacks and implements no defenses aside from user-level differential privacy and aggregation. We wanted to focus on attack evaluations and investigate the questions "where do these attacks currently work?" and "where are their limits?". Accordingly, the FL simulation is "shallow": no model is actually trained here, and we investigate fixed checkpoints (which can be generated elsewhere). Other great repositories, such as https://github.com/Princeton-SysML/GradAttack, focus on defenses and their performance during a full simulation of an FL protocol.

Attacks

A list of all included attacks with references to their original publications can be found at examples/README.md.

Datasets

Many examples for vision attacks use ImageNet. For these to work, you need to download the ImageNet ILSVRC2012 dataset manually. However, almost all attacks require only the small validation set, which can easily be downloaded onto a laptop, and do not need the full training set. If this is not an option for you, the Birdsnap dataset is a reasonable drop-in replacement for ImageNet. By default, we further only show examples from ImageNetAnimals, which are the first 397 classes of the ImageNet dataset. This substantially reduces the number of weird pictures of actual people. CIFAR10 and CIFAR100 are, of course, also available.

For these vision datasets there are several options in the literature on how to partition them for an FL simulation. We implement a range of such partitions with data.partition, ranging from random (but replicable and with no repetition of data across users), over balanced (classes are separated equally across users), to unique-class (every user owns data from a single class). When changing the partition you might also have to adjust the expected number of clients, data.default_clients (for example, for unique-class there can be at most len(classes) users).
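
As an illustration, the partition options can be adjusted on a loaded case configuration before constructing the case; the attribute paths below mirror the data.partition and data.default_clients options named above, but the exact string values are assumptions and should be checked against breaching/config:

cfg_case = breaching.get_case_config(case="1_single_imagenet")
cfg_case.data.partition = "unique-class"  # assumed value: every user owns data from a single class
cfg_case.data.default_clients = 397       # at most len(classes) users for this partition (397 classes in ImageNetAnimals)
user, server, model, loss = breaching.cases.construct_case(cfg_case)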

For language data, you can load wikitext which we split into separate users on a per-article basis, or the stackoverflow and shakespeare FL datasets from tensorflow federated, which are already split into users (installing tensorflow-cpu is required for these tensorflow-federated datasets).

Further, nothing stops you from skipping the breaching.cases sub-module and using your own code to load a model and dataset. An example can be found in minimal_example.py.

Metrics

We implement a range of metrics which can be queried through breaching.analysis.report. Several metrics (such as CW-SSIM and R-PSNR) require additional packages to be installed - they will warn about this. For language data we hook into a range of huggingface metrics. Overall though, we note that most of these metrics give only a partial picture of the actual severity of a breach of privacy, and are best handled with care.
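
As a rough sketch of how metrics could be queried after the attack above (the exact argument order of breaching.analysis.report is an assumption here and should be checked against the function signature):

# hypothetical call - argument names and order are assumptions, not the confirmed API
metrics = breaching.analysis.report(reconstructed_user_data, true_user_data, payloads, server.model)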

Additional Topics

Benchmarking

A script to benchmark attacks is included as benchmark_breaches.py. This script will iterate over the first valid num_trials users, attack each separately, and average the resulting metrics. This can be useful for a quantitative analysis of these attacks. The default case takes about a day to benchmark on a single GTX2080 GPU for optimization-based attacks, and less than 30 minutes for analytic attacks. The default scripts for benchmarking and command-line execution also include a number of conveniences, based mostly on hydra. This entails the creation of separate sub-folders for each experiment in outputs/. These folders contain logs, metrics and, optionally, recovered data for each run. Summary tables are written to tables/.
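
For instance, assuming benchmark_breaches.py accepts the same hydra override syntax as simulate_breach.py (the key names below are assumptions), a benchmark run could look like:

python benchmark_breaches.py attack=invertinggradients num_trials=10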

System Requirements

All attacks can run on either CPU or GPU (any torch.device, actually). However, the optimization-based attacks are very compute-intensive, and using a GPU is highly advised. The other attacks are cheap enough to run on CPUs (the Decepticon attack, for example, does most of its heavy lifting in assignment problems on the CPU anyway).

Options

It is probably best to have a look into breaching/config to see all possible options.

Citation

For now, please cite the respective publications for each attack and use case.

License

We integrate several snippets of code from other repositories and refer to the licenses included in those files for more info. We're especially thankful for related projects such as https://www.tensorflow.org/federated, https://github.com/NVlabs/DeepInversion, https://github.com/JunyiZhu-AI/R-GAP, https://github.com/facebookresearch/functorch, https://github.com/ildoonet/pytorch-gradual-warmup-lr and https://github.com/nadavbh12/VQ-VAE from which we incorporate components.

For the license of our code, refer to LICENCE.md.

Authors

This framework was built by me (Jonas Geiping), Liam Fowl and Yuxin Wen while working at the University of Maryland, College Park.

Contributing

If you have an attack that you are interested in implementing in this framework, or a use case that is interesting to you, don't hesitate to contact us or open a pull-request.

Contact

If you have any questions, also don't hesitate to open an issue here on GitHub or write us an email.


Comments
  • adding the GroupRegistration regularization term for the "See through gradients" attack

    Problem and context

    As I am working on extending gradient inversion attacks, I came across this wonderful library. In an attempt to reproduce the Yin et al. paper, I found out about the missing regularization term (as per the title) in the final notes of breaching/examples/See through gradients [...].ipynb. I would like to try to reproduce the results of Yin et al. in order to provide baselines for comparison against other regularization metrics. The main obstacle to implementing this term seems to be the cluttered description of it in Section 3.4 of the above-mentioned paper.

    Steps towards solution

    Regardless of the actual value of \alpha_{group} (not disclosed by the authors, as far as I know), I believe a possible implementation of the GroupRegistration regularization term can be achieved in the following few steps:

    1. Create a dummy image x_g, for all g in G
    2. Compute the per-pixel average over all g in G and call it the target image x_t
    3. Compute the registration F(x_g, x_t), i.e. the linear transformation that matches certain features of x_g with x_t. Do it for every g in G. The feature matching/transformation function F is based on RANSAC-flow.
    4. Average all the F(x_g, x_t) over g in G and call it E[x_g]
    5. Compute the 2-norm of the difference between x_g and E[x_g].

    To my understanding, this is the meaning of Section 3.4 and the plot in Figure 3 of the above-mentioned paper.
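
    A rough PyTorch sketch of these steps (with registration_fn as a hypothetical stand-in for the RANSAC-flow-based registration F, which is not part of the breaching API) could look like:

        import torch

        def group_registration_penalty(dummy_images, registration_fn):
            # dummy_images: tensor of shape [|G|, C, H, W], one candidate image per group member g
            # registration_fn: hypothetical callable implementing F(x_g, x_t), e.g. based on RANSAC-flow
            x_t = dummy_images.mean(dim=0, keepdim=True)  # step 2: per-pixel average as the target image x_t
            registered = torch.stack(  # step 3: register every x_g onto x_t
                [registration_fn(x_g.unsqueeze(0), x_t).squeeze(0) for x_g in dummy_images]
            )
            group_mean = registered.mean(dim=0, keepdim=True)  # step 4: E[x_g], the average registered image
            deltas = dummy_images - group_mean  # step 5: 2-norm of the difference between x_g and E[x_g]
            return deltas.flatten(start_dim=1).norm(dim=1).mean()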

    Additional comments

    My research would benefit from having this component implemented, and I believe it could have a broader impact by giving other researchers the possibility to reproduce one of the SOTA results in gradient inversion attacks. For this reason, I would like to take on this issue. Disclaimer: this would be my first contribution to a public research repository.

    opened by philipjk 2
  • Unexpected change to server model in benchmark

    Hi,

    In the breaching.analysis.report() function, server.model is used directly to load parameters and buffers. However, this may cause an unintended modification of server.model and influence the next run in a benchmark. It may be possible to resolve the problem by adding model = copy.deepcopy(model) before model.to(**setup):

        model = copy.deepcopy(model)  # work on a copy so that server.model is left untouched between runs
        model.to(**setup)
    
    opened by LuckMonkeys 1
  • Wrong index of buffers in analysis.py

    When user.provide_buffers=True is set, the user adds model buffers as true_user_data["buffers"] -> [Tensors]. However, analysis.py iterates through true_user_data["buffers"][idx] -> Tensor. The [idx] is not needed and should be removed: https://github.com/JonasGeiping/breaching/blob/85c37cf2d45dc291edd98ae5b386e0a3dafeceec/breaching/analysis/analysis.py#L64

    for buffer, user_state in zip(model.buffers(), true_user_data["buffers"]):  # proposed fix: iterate over the buffer list directly

    opened by LuckMonkeys 1