Adversarial Examples Make Strong Poisons

Overview

Adversarial poison generation and evaluation.

This framework implements the data poisoning method from the paper Adversarial Examples Make Strong Poisons, authored by Liam Fowl, Micah Goldblum, Ping-yeh Chiang, Jonas Geiping, Wojtek Czaja, and Tom Goldstein.

We use and adapt code from the publicly available Witches' Brew (Geiping et al.) GitHub repository.

Dependencies:

  • PyTorch >= 1.6
  • torchvision >= 0.5

Usage

The command-line script anneal.py generates the poisons.

Additional arguments for poison generation can be found in village/options.py. Many of these arguments do not apply to our implementation and are relics of the GitHub repository we adapted (see above).

CIFAR-10 Example

Generation

To poison CIFAR-10 with our most powerful (class-targeted) attack, for a ResNet-18 with an epsilon bound of 8, run:

python anneal.py --net ResNet18 --recipe targeted --eps 8 --budget 1.0 --target_criterion reverse_xent --save poison_dataset_batched --poison_path /path/to/save/poisons --attackoptim PGD

  • Note 1: this will generate poisons according to a simple label permutation, defined in the _label_map method of poison_generation/shop/forgemaster_targeted.py. One can easily modify this to any permutation of the label space (see the sketch after this list).

  • Note 2: this could take several hours, depending on the GPU used. To decrease the time, use the flag --restarts 1. This reduces the time required to craft the poisons, but may also reduce their potency.
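For reference, a label permutation of the kind used in _label_map can be as simple as a cyclic shift of the class indices. A minimal sketch; the function body below is illustrative, not the repository's exact mapping:

import torch

def _label_map(labels, num_classes=10):
    # Illustrative permutation: send every class c to (c + 1) mod num_classes.
    # Any bijection on {0, ..., num_classes - 1} works here.
    return (labels + 1) % num_classes

labels = torch.tensor([0, 3, 9])
print(_label_map(labels))  # tensor([1, 4, 0])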

Generating poisons with untargeted attacks is more brittle: the success of the generated poisons varies with the poison initialization much more than for the targeted attacks. Because generating multiple sets of poisons can take a long time, we have included an anonymous Google Drive link to one of our best untargeted datasets for CIFAR-10. It can be evaluated in the same way as the poisons generated with the above command; simply download the zip file from here and extract the data.
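Once extracted, the downloaded poisons can be loaded like any image dataset. A minimal sketch, assuming the archive unpacks into one subdirectory per class (the directory name poisons/ and the ImageFolder layout are assumptions, not a documented format):

import torchvision
import torchvision.transforms as transforms

# Assumption: the extracted archive holds one subdirectory per class,
# so torchvision's ImageFolder can read it directly.
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465),   # CIFAR-10 channel means
                         (0.2470, 0.2435, 0.2616)),  # CIFAR-10 channel stds
])
poisoned_trainset = torchvision.datasets.ImageFolder("poisons/", transform=transform)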

Evaluation

You can then evaluate the poisons you generated (saved in poisons) by running:

python poison_evaluation/main.py --load_path /path/to/your/saved/poisons --runs 1

Here --load_path specifies the path to the generated poisons, and --runs specifies how many runs to evaluate the poisons over. Evaluation uses a ResNet-18 by default; this can be changed with the --net flag.
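For intuition, evaluation amounts to training a fresh network on the poisoned training set and measuring its accuracy on the clean test set. A minimal sketch of that loop, reusing transform and poisoned_trainset from the sketch above; torchvision's resnet18 stands in for the repository's ResNet-18, and the schedule below is illustrative, not the script's actual training recipe:

import torch
import torch.nn as nn
import torchvision
from torch.utils.data import DataLoader

device = "cuda" if torch.cuda.is_available() else "cpu"

# Fresh victim network; the repository uses its own CIFAR-sized ResNet-18.
model = torchvision.models.resnet18(num_classes=10).to(device)
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)
loss_fn = nn.CrossEntropyLoss()

train_loader = DataLoader(poisoned_trainset, batch_size=128, shuffle=True)
test_loader = DataLoader(
    torchvision.datasets.CIFAR10("data/", train=False, download=True,
                                 transform=transform),
    batch_size=256)

for epoch in range(40):  # illustrative schedule
    model.train()
    for x, y in train_loader:
        x, y = x.to(device), y.to(device)
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

# Accuracy on the clean test set: a low number means the poisons worked.
model.eval()
correct = total = 0
with torch.no_grad():
    for x, y in test_loader:
        preds = model(x.to(device)).argmax(1)
        correct += (preds == y.to(device)).sum().item()
        total += y.numel()
print(f"clean test accuracy: {correct / total:.2%}")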

ImageNet

ImageNet poisons can be optimized in a similar way, although doing so requires far more time and resources. If you would like to attempt this, you can use the included info.pkl file, which splits the ImageNet training set into subsets of 25k images that can be crafted one at a time (52 subsets in total). Each subset can take anywhere from one to three days to craft, depending on your GPU resources, and you will need more than 200 GB of storage for the generated dataset.
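The exact contents of info.pkl are defined by the repository, but conceptually it just partitions the ImageNet training indices into 25k-image chunks. A minimal sketch of building such a partition yourself (the file name my_info.pkl and the list-of-arrays layout are assumptions, not the shipped format):

import pickle
import numpy as np

# Assumption: ImageNet's training set has ~1,281,167 images; split a
# random permutation of its indices into chunks of 25,000.
num_images, chunk = 1_281_167, 25_000
indices = np.random.permutation(num_images)
subsets = [indices[i:i + chunk] for i in range(0, num_images, chunk)]

with open("my_info.pkl", "wb") as f:  # hypothetical file, not the shipped info.pkl
    pickle.dump(subsets, f)

print(len(subsets))  # 52 (the last chunk is smaller)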

A command for crafting on one such subset is:

python anneal.py --recipe targeted --eps 8 --budget 1.0 --dataset ImageNet --pretrained --target_criterion reverse_xent --poison_partition 25000 --save poison_dataset_batched --poison_path /path/to/save/poisons --restarts 1 --resume /path/to/info.pkl --resume_idx 0 --attackoptim PGD

You can generate poisons for all of ImageNet by iterating over all subset indices (0, 1, 2, ..., 51) of the ImageNet subsets; see the driver sketch below.
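A small driver can automate this iteration. A minimal sketch using subprocess, with the flags taken from the command above and placeholder paths:

import subprocess

# Hypothetical driver: craft poisons for each of the 52 ImageNet subsets in turn.
for idx in range(52):
    subprocess.run([
        "python", "anneal.py",
        "--recipe", "targeted", "--eps", "8", "--budget", "1.0",
        "--dataset", "ImageNet", "--pretrained",
        "--target_criterion", "reverse_xent",
        "--poison_partition", "25000",
        "--save", "poison_dataset_batched",
        "--poison_path", "/path/to/save/poisons",  # placeholder path
        "--restarts", "1",
        "--resume", "/path/to/info.pkl",  # placeholder path
        "--resume_idx", str(idx),
        "--attackoptim", "PGD",
    ], check=True)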

  • Note: we are working to produce and run a deterministic, seeded version of the above ImageNet generation, and we will update the code accordingly.