Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples

Overview

[Image: an adversarial example, a slightly perturbed picture of a cat that InceptionV3 classifies as "guacamole"]

Above is an adversarial example: the slightly perturbed image of the cat fools an InceptionV3 classifier into classifying it as "guacamole". Such "fooling images" are easy to synthesize using gradient descent (Szegedy et al. 2013).
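
To illustrate how simple such synthesis can be, below is a minimal sketch of a one-step gradient-sign attack (FGSM, Goodfellow et al. 2014). It uses PyTorch for brevity rather than the TensorFlow code in this repository, and `model`, `image`, and `label` are hypothetical placeholders, not names from our code.

```python
# Minimal FGSM sketch (Goodfellow et al. 2014). PyTorch is used for brevity;
# this repository's actual attacks are implemented in TensorFlow.
import torch
import torch.nn.functional as F

def fgsm(model, image, label, eps=0.031):
    """One gradient-sign step that increases the classification loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Perturb toward higher loss, then project back to the valid pixel range.
    adv = image + eps * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```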

In our recent paper, we evaluate the robustness of nine papers accepted to ICLR 2018 as non-certified white-box-secure defenses to adversarial examples. We find that seven of the nine defenses provide a limited increase in robustness and can be broken by improved attack techniques we develop.

Below is Table 1 from our paper, where we show the robustness of each accepted defense to the adversarial examples we can construct:

| Defense | Dataset | Distance | Accuracy |
|---|---|---|---|
| Buckman et al. (2018) | CIFAR | 0.031 (linf) | 0%* |
| Ma et al. (2018) | CIFAR | 0.031 (linf) | 5% |
| Guo et al. (2018) | ImageNet | 0.005 (l2) | 0%* |
| Dhillon et al. (2018) | CIFAR | 0.031 (linf) | 0% |
| Xie et al. (2018) | ImageNet | 0.031 (linf) | 0%* |
| Song et al. (2018) | CIFAR | 0.031 (linf) | 9%* |
| Samangouei et al. (2018) | MNIST | 0.005 (l2) | 55%** |
| Madry et al. (2018) | CIFAR | 0.031 (linf) | 47% |
| Na et al. (2018) | CIFAR | 0.015 (linf) | 15% |

(Defenses denoted with * also propose combining adversarial training; we report here the defense alone. See our paper, Section 5, for full numbers. The fundamental principle behind the defense denoted with ** has 0% accuracy; in practice defense imperfections cause the theoretically optimal attack to fail, see Section 5.4.2 for details.)

The only defense we observe that significantly increases robustness to adversarial examples within the proposed threat model is "Towards Deep Learning Models Resistant to Adversarial Attacks" (Madry et al. 2018), and we were unable to defeat this defense without stepping outside the threat model. Even then, this technique has been shown to be difficult to scale to ImageNet-scale datasets (Kurakin et al. 2016). The remainder of the papers (besides the paper by Na et al., which provides limited robustness) rely, either inadvertently or intentionally, on what we call obfuscated gradients. Standard attacks generate adversarial examples by applying gradient descent to maximize the network's loss on a given input. Such optimization methods require a useful gradient signal to succeed. When a defense obfuscates gradients, it breaks this signal and causes optimization-based methods to fail.
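
To make this failure mode concrete, here is a hedged sketch of the kind of iterative optimization attack described above (a PGD-style loop in the spirit of Madry et al. 2018), again in PyTorch for brevity; `model`, `x`, and `y` are placeholders, not code from this repository.

```python
# PGD-style iterative attack sketch: the optimization that obfuscated
# gradients break. Not the repository's TensorFlow implementation.
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=0.031, alpha=0.007, steps=40):
    """Iterated gradient ascent on the loss, projected onto an l-inf ball."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # An obfuscated-gradient defense makes `grad` zero, random, or
        # otherwise useless here, so the loop stalls even though nearby
        # adversarial examples still exist.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project to the eps-ball
        x_adv = x_adv.clamp(0.0, 1.0).detach()
    return x_adv
```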

We identify three ways in which defenses cause obfuscated gradients, and construct attacks to bypass each of these cases. Our attacks are generally applicable to any defense that, either intentionally or unintentionally, includes a non-differentiable operation or otherwise prevents gradient signal from flowing through the network; one such attack is sketched below. We hope future work will be able to use our approaches to perform a more thorough security evaluation.
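
One of these attacks is BPDA (Backward Pass Differentiable Approximation), which runs a non-differentiable transformation exactly on the forward pass but backpropagates through a differentiable approximation of it. The sketch below shows the straight-through special case described in our paper, where the transformation g satisfies g(x) ≈ x; it is a minimal PyTorch illustration of the idea, not the code in this repository, and `g` and `model` are placeholders.

```python
# BPDA sketch (straight-through variant): forward uses the exact,
# possibly non-differentiable transform g; backward pretends g was
# the identity, so gradient signal flows through the defense.
import torch

class BPDAIdentity(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, g):
        # Exact (possibly non-differentiable) transform on the forward pass.
        with torch.no_grad():
            return g(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Approximate g(x) ~ x: pass the gradient straight through.
        # `g` is a function, not a tensor, so it receives no gradient.
        return grad_output, None

def defended_logits(model, g, x):
    # Attack model(g(x)) as if it were differentiable end to end.
    return model(BPDAIdentity.apply(x, g))
```

Plugging `defended_logits` into an iterative attack like the PGD loop above recovers a useful gradient signal through the defense.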

Paper

Abstract:

We identify obfuscated gradients, a kind of gradient masking, as a phenomenon that leads to a false sense of security in defenses against adversarial examples. While defenses that cause obfuscated gradients appear to defeat iterative optimization-based attacks, we find defenses relying on this effect can be circumvented. We describe characteristic behaviors of defenses exhibiting the effect, and for each of the three types of obfuscated gradients we discover, we develop attack techniques to overcome it. In a case study, examining non-certified white-box-secure defenses at ICLR 2018, we find obfuscated gradients are a common occurrence, with 7 of 9 defenses relying on obfuscated gradients. Our new attacks successfully circumvent 6 completely, and 1 partially, in the original threat model each paper considers.

For details, read our paper.

Source code

This repository contains our instantiations of the general attack techniques described in our paper, breaking 7 of the ICLR 2018 defenses. Some of the defenses didn't release source code (at the time we did this work), so we had to reimplement them.

Citation

```bibtex
@inproceedings{obfuscated-gradients,
  author = {Anish Athalye and Nicholas Carlini and David Wagner},
  title = {Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning, {ICML} 2018},
  year = {2018},
  month = jul,
  url = {https://arxiv.org/abs/1802.00420},
}
```