SCAAML is a deep learning framework dedicated to side-channel attacks that runs on top of TensorFlow 2.x.

Related tags

Deep Learning, scaaml
Overview

SCAAML: Side Channel Attacks Assisted with Machine Learning

SCAAML banner

SCAAML (Side Channel Attacks Assisted with Machine Learning) is a deep learning framework dedicated to side-channel attacks. It is written in Python and runs on top of TensorFlow 2.x.

Available components

  • scaaml/: The SCAAML framework code. It is used by the various tools.
  • scaaml_intro/: A Hacker Guide To Deep Learning Based Side Channel Attacks. Code, dataset, and models used in our step-by-step tutorial on how to use deep learning to perform AES side-channel attacks in practice.

Install

Dependencies

To use SCAAML you need a working installation of TensorFlow 2.x and Python >= 3.6.
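
For reference, a quick way to verify the environment (a minimal sketch, not part of SCAAML itself):

```python
# Check that the Python and TensorFlow versions meet SCAAML's requirements.
import sys

import tensorflow as tf

assert sys.version_info >= (3, 6), "SCAAML requires Python >= 3.6"
assert tf.__version__.startswith("2."), "SCAAML requires TensorFlow 2.x"
print(f"Python {sys.version.split()[0]}, TensorFlow {tf.__version__}")
```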

SCAAML framework install

  1. Clone the repository: git clone https://github.com/google/scaaml
  2. Install the SCAAML package: python setup.py develop
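
Once installed, the package should import cleanly (a minimal sanity check, not an official test):

```python
# Importing the freshly installed package confirms the develop install worked.
import scaaml

print("SCAAML imported from", scaaml.__file__)
```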

Dataset and models

Every SCAAML component relies on datasets and, optionally, models that you will need to download into the component directory. The links to download them are available in each component's README.md. Simply click on the directory of the component of your choice, or on its link in the list above.

Publications & Citation

Here is the list of publications and talks related to SCAAML. If you use any of its codebase, models, or datasets, please cite:

@online{bursztein2019scaaml,
  title={SCAAML: Side Channel Attacks Assisted with Machine Learning},
  author={Bursztein, Elie and others},
  year={2019},
  publisher={GitHub},
  url={https://github.com/google/scaaml},
}

Additionally, please also cite the talks and publications that are most relevant to your work, so readers can quickly find the right information. Last but not least, you are more than welcome to add your publication/talk to the list below by making a pull request 😊.

SCAAML AES tutorial

DEF CON talk that provides a practical introduction to deep-learning-based AES side-channel attacks:

@inproceedings{burzteindc27,
  title={A Hacker Guide To Deep Learning Based Side Channel Attacks},
  author={Elie Bursztein and Jean-Michel Picod},
  booktitle={DEF CON 27},
  howpublished={\url{https://elie.net/talk/a-hackerguide-to-deep-learning-based-side-channel-attacks/}},
  year={2019},
  editor={DEF CON}
}

Disclaimer

This is not an official Google product.

Comments
  • Pylint with Python3.10

    Run Pylint also with Python 3.10. Other workflows (mypy, pytest, ...) are still run against Python 3.9. Suggested at https://github.com/google/scaaml/pull/87#discussion_r917855371

    @jmichelp Should we also bump Python version for other workflows?

    opened by kralka 5
  • Fix typos

    Python notebooks contain typos.

    There are some typos which are part of the API or need more attention (such as the bibtex name burzteindc):

    burzteindc chipwispher comparaison filepattern

    scaaml.aes still contains non-word variables.

    opened by kralka 2
  • Split workflows and adds initial support for pytest and mypy

    Part of issue #38

    Because we're using the Google code style in yapf, the pylint configuration file has been updated to the matching one in the Google public style guide: https://github.com/google/styleguide/blob/gh-pages/pyguide.md

    The rationale behind splitting workflows is the granularity of branch protection, which is at the YAML file level. It's also not so bad for maintenance, considering it's a Python project.

    opened by jmichelp 2
  • Provide helpful debug when Unicorn is not installed

    Currently the leak mapping code does not work when Unicorn is not installed, but it does not crash either; as a result it fails silently:

    [Emulating target]
    Starting emulation
    Traceback (most recent call last):
      File "_ctypes/callbacks.c", line 234, in 'calling callback function'
      File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/unicorn-1.0.2rc4-py3.6.egg/unicorn/unicorn.py", line 479, in _hookcode_cb
      File "/Users/elieb/git/scaaml/scaaml/scald/tracer/devices/rainbow/generics/cortexm.py", line 109, in block_handler
        self.base_block_handler(address)
      File "/Users/elieb/git/scaaml/scaaml/scald/tracer/devices/rainbow/rainbow.py", line 414, in base_block_handler
        r = self.stubbed_functions[f](self)
      File "/Users/elieb/git/scaaml/scaaml/scald/tracer/chipwhisperer_leakage_automaton.py", line 194, in trigger_high
        self.trigger_hook = e.emu.hook_add(uc.UC_HOOK_MEM_WRITE,
    NameError: name 'uc' is not defined
    Traceback (most recent call last):
      File "_ctypes/callbacks.c", line 234, in 'calling callback function'
      File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/unicorn-1.0.2rc4-py3.6.egg/unicorn/unicorn.py", line 479, in _hookcode_cb
      File "/Users/elieb/git/scaaml/scaaml/scald/tracer/devices/rainbow/generics/cortexm.py", line 109, in block_handler
        self.base_block_handler(address)
      File "/Users/elieb/git/scaaml/scaaml/scald/tracer/devices/rainbow/rainbow.py", line 414, in base_block_handler
        r = self.stubbed_functions[f](self)
      File "/Users/elieb/git/scaaml/scaaml/scald/tracer/chipwhisperer_leakage_automaton.py", line 205, in trigger_low
        e.emu.hook_del(self.trigger_hook)
      File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/unicorn-1.0.2rc4-py3.6.egg/unicorn/unicorn.py", line 592, in hook_del
    TypeError: an integer is required (got type NoneType)
    Encryption: FAIL
    KEY=2b7e151628aed2a6abf7158809cf4f3c
     PT=6bc1bee22e409f96e93d7e117393172a
    EXP=3ad77bb40d7a3660a89ecaf32466ef97
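
    A defensive import along these lines would surface the missing dependency immediately instead of deep inside an emulation hook (a sketch, not SCAAML's actual code; uc is the alias seen in the traceback above):

    ```python
    # Fail loudly at import time if the Unicorn engine bindings are missing.
    try:
        import unicorn as uc
    except ImportError as exc:
        raise ImportError(
            "The leak mapping tracer requires the Unicorn engine Python "
            "bindings; install the 'unicorn' package and retry."
        ) from exc
    ```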
    
    type:bug component:scald 
    opened by ebursztein 2
  • Correct group and part counters when restarting capture

    Currently the group and part counters are reset to zero when restarting a failed capture. This produces shards which belong to the wrong groups/parts. An existing dataset can be fixed using scaaml.io.Dataset.reshape_into_new_dataset.

    opened by kralka 1
  • Allow SScope to have unused keyword arguments

    SScope is expected to be initialized using capture_info (so that all information is present in capture_info). But capture_info might contain more information than is needed to initialize SScope. This commit allows SScope to ignore the rest of capture_info.
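
    A generic illustration of the pattern (hypothetical parameter names, not SScope's real signature):

    ```python
    # Accept the keys SScope needs and silently ignore any extra ones
    # that happen to be present in capture_info.
    class SScope:
        def __init__(self, gain: float, samples: int, **_unused_kwargs) -> None:
            self.gain = gain
            self.samples = samples

    capture_info = {"gain": 30.0, "samples": 5000, "firmware_sha256": "..."}
    scope = SScope(**capture_info)  # firmware_sha256 is ignored
    ```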

    opened by kralka 1
  • Make capture runner appear as not to lose progress

    Currently, when a capture is restarted, the progress bar shows how many traces need to be captured and starts at zero percent. This is confusing, since it looks like progress has been lost. It should instead start where it left off (showing all shards captured so far as immediate progress).
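
    One way to show resumed progress (assuming a tqdm-style progress bar, which is an assumption about the capture runner):

    ```python
    # Start the bar at the number of examples already stored in shards.
    from tqdm import tqdm

    already_captured = 12_000   # hypothetical count restored from existing shards
    total_examples = 50_000

    progress = tqdm(total=total_examples, initial=already_captured)
    # ... the capture loop then calls progress.update(batch_size) as usual ...
    ```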

    opened by kralka 1
  • Properly reload Dataset when resuming capture

    When resuming capture (see PR #24 and PR #25) the info.json file gets rewritten instead of updated. This results in the shard file names not being present in the info.json and possibly broken metadata (such as min and max values).

    opened by kralka 1
  • Implement optimistic version of rank

    This is useful for combining predictions. One way to combine predictions is to compute the arithmetic mean; the other is the geometric mean. For numerical stability one sums log_10(prediction + epsilon), but this gives negative values, which do not play well with optimistic=False.
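
    An illustrative sketch of the two combination strategies (not SCAAML's actual API):

    ```python
    import numpy as np

    def combine_arithmetic(predictions: np.ndarray) -> np.ndarray:
        """Arithmetic mean over traces; predictions has shape (n_traces, n_classes)."""
        return predictions.mean(axis=0)

    def combine_geometric(predictions: np.ndarray, epsilon: float = 1e-30) -> np.ndarray:
        # Geometric-mean style combination via a sum of log10 probabilities.
        # The combined scores are negative, which is why a rank that assumes
        # non-negative scores (optimistic=False) breaks down.
        return np.log10(predictions + epsilon).sum(axis=0)
    ```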

    opened by kralka 0
  • Make mypy check types inside all functions

    By default mypy does not check types in functions without a type signature. Add a flag to allow type checking inside such functions. This also removes extraneous comments on PRs.

    opened by kralka 0
  • Fix repeated values for attack points of same name

    When loading a dataset using as_tfdataset, if multiple bytes of the same attack point are loaded they all get the same value. This is because a dictionary shares state instead of being copied, so all attack points end up with the same byte index, causing the same value to be loaded multiple times.
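
    A generic illustration of this kind of shared-state bug (hypothetical code, not the actual SCAAML loader):

    ```python
    config = {"ap": "key"}
    attack_points = []
    for byte_index in (1, 3):
        config["byte"] = byte_index     # mutates the same dict every iteration
        attack_points.append(config)    # both entries reference one object
    print(attack_points)                # both entries show byte 3

    # Copying the dict gives each attack point its own byte index:
    attack_points = [dict(config, byte=b) for b in (1, 3)]
    ```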

    Fixes issue #107

    opened by wsxrdv 0
  • Dataset.as_tfdataset loads attack points of the same name multiple times

    There is state being shared during loading of the dataset. Thus if two bytes of an attack point should be loaded, two copies of the same value get returned (e.g., if key byte 1 and byte 3 should be loaded, then the same value is repeated as key_1 and key_3).

    opened by wsxrdv 0
  • ResumeKTI does not work when examples_per_shard = 1

    When examples_per_shard == 1 and a restart happens, there is no way to tell whether the previous shard was already captured and saved. Also, scaaml.io.Dataset and DatasetFiller are not equipped to handle saving a single example multiple times. To make this reliable there needs to be a way to say "shard saved".

    opened by wsxrdv 0
  • Implement get_scope_settings

    The SScope base class should have a method such as def get_scope_settings(self) -> dict[str, str] so that the capture script can do capture_info.update(scope.get_scope_info()). Potentially it should also have something like def setup_scope(self, capture_info: dict[str, str]) -> None that takes the capture_info data and sets up the scope using the keys it understands. This way get_scope_settings() would return all the settings. Sometimes, especially with picoscopes, the scope adjusts the settings you give it to something it can actually do.
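
    A sketch of the proposed interface (hypothetical; it follows the issue title, and the exact names may differ):

    ```python
    from typing import Dict

    class SScope:
        def get_scope_settings(self) -> Dict[str, str]:
            """Return every setting the scope is actually using."""
            raise NotImplementedError

        def setup_scope(self, capture_info: Dict[str, str]) -> None:
            """Configure the scope from the capture_info keys it understands."""
            raise NotImplementedError

    # In a capture script, the settings the scope really applied would be
    # folded back into the metadata:
    #   capture_info.update(scope.get_scope_settings())
    ```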

    opened by kralka 0
  • Make writing examples in tfrec files parallel

    This could significantly speed up (save days, maybe a week) both capture and converting from other formats to our format.

    Potential pitfalls:

    • running out of memory for very long traces
    • need proper integration with resuming capture
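
    A rough sketch of one way to parallelize shard writing (hypothetical helper names, not SCAAML's actual capture code):

    ```python
    # Each worker process writes one complete shard of serialized examples.
    from concurrent.futures import ProcessPoolExecutor
    from typing import Dict, List

    import tensorflow as tf

    def write_shard(path: str, serialized_examples: List[bytes]) -> str:
        with tf.io.TFRecordWriter(path) as writer:
            for example in serialized_examples:
                writer.write(example)
        return path

    def write_shards(shards: Dict[str, List[bytes]]) -> None:
        # Each worker holds a full shard in memory, which is where the
        # long-trace memory pitfall above shows up.
        with ProcessPoolExecutor() as pool:
            for done in pool.map(write_shard, shards.keys(), shards.values()):
                print("wrote", done)
    ```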
    opened by kralka 0
  • Dataset.move_shards check fails when key is not an attack point

    The check for duplicate keys (whether the same key is in train and test shards at the same time) fails with "key is not an attack point" when key is not an attack point.

    opened by kralka 0
Owner
Google
Google ❤️ Open Source
A Simple Framework for CV Pre-training Model (SOCO, VirTex, BEiT)

A Simple Framework for CV Pre-training Model (SOCO, VirTex, BEiT)

Sense-GVT 14 Jul 7, 2022
Implementation of "Scaled-YOLOv4: Scaling Cross Stage Partial Network" using PyTorch framwork.

YOLOv4-large This is the implementation of "Scaled-YOLOv4: Scaling Cross Stage Partial Network" using PyTorch framwork. YOLOv4-CSP YOLOv4-tiny YOLOv4-

Kin-Yiu, Wong 2k Jan 2, 2023
Research Artifact of USENIX Security 2022 Paper: Automated Side Channel Analysis of Media Software with Manifold Learning

Manifold-SCA Research Artifact of USENIX Security 2022 Paper: Automated Side Channel Analysis of Media Software with Manifold Learning The repo is org

Yuanyuan Yuan 172 Dec 29, 2022
This project uses reinforcement learning on stock market and agent tries to learn trading. The goal is to check if the agent can learn to read tape. The project is dedicated to hero in life great Jesse Livermore.

Reinforcement-trading This project uses Reinforcement learning on stock market and agent tries to learn trading. The goal is to check if the agent can

Deepender Singla 1.4k Dec 22, 2022
Deep GPs built on top of TensorFlow/Keras and GPflow

GPflux Documentation | Tutorials | API reference | Slack What does GPflux do? GPflux is a toolbox dedicated to Deep Gaussian processes (DGP), the hier

Secondmind Labs 107 Nov 2, 2022
Very simple NCHW and NHWC conversion tool for ONNX. Change to the specified input order for each and every input OP. Also, change the channel order of RGB and BGR. Simple Channel Converter for ONNX.

scc4onnx Very simple NCHW and NHWC conversion tool for ONNX. Change to the specified input order for each and every input OP. Also, change the channel

Katsuya Hyodo 16 Dec 22, 2022
NeuralCompression is a Python repository dedicated to research of neural networks that compress data

NeuralCompression is a Python repository dedicated to research of neural networks that compress data. The repository includes tools such as JAX-based entropy coders, image compression models, video compression models, and metrics for image and video evaluation.

Facebook Research 297 Jan 6, 2023
NeoPlay is the project dedicated to ESport events.

NeoPlay is the project dedicated to ESport events. On this platform users can participate in tournaments with prize pools as well as create their own tournaments.

null 3 Dec 18, 2021
Measures input lag without dedicated hardware, performing motion detection on recorded or live video

What is InputLagTimer? This tool can measure input lag by analyzing a video where both the game controller and the game screen can be seen on a webcam

Bruno Gonzalez 4 Aug 18, 2022
HiddenMarkovModel implements hidden Markov models with Gaussian mixtures as distributions on top of TensorFlow

Class HiddenMarkovModel HiddenMarkovModel implements hidden Markov models with Gaussian mixtures as distributions on top of TensorFlow 2.0 Installatio

Susara Thenuwara 2 Nov 3, 2021
Python implementation of Lightning-rod Agent, the Stack4Things board-side probe

Iotronic Lightning-rod Agent Python implementation of Lightning-rod Agent, the Stack4Things board-side probe. Free software: Apache 2.0 license Websit

null 2 May 19, 2022
The undersampled DWI image using Slice-Interleaved Diffusion Encoding (SIDE) method can be reconstructed by the UNet network.

UNet-SIDE The undersampled DWI image using Slice-Interleaved Diffusion Encoding (SIDE) method can be reconstructed by the UNet network. For Super Reso

TIANTIAN XU 1 Jan 13, 2022
tsai is an open-source deep learning package built on top of Pytorch & fastai focused on state-of-the-art techniques for time series classification, regression and forecasting.

Time series Timeseries Deep Learning Pytorch fastai - State-of-the-art Deep Learning with Time Series and Sequences in Pytorch / fastai

timeseriesAI 2.8k Jan 8, 2023
A tutorial showing how to train, convert, and run TensorFlow Lite object detection models on Android devices, the Raspberry Pi, and more!

A tutorial showing how to train, convert, and run TensorFlow Lite object detection models on Android devices, the Raspberry Pi, and more!

Evan 1.3k Jan 2, 2023
Efficient Sparse Attacks on Videos using Reinforcement Learning

EARL This repository provides a simple implementation of the work "Efficient Sparse Attacks on Videos using Reinforcement Learning" Example: Demo: Her

null 12 Dec 5, 2021
A community run, 5-day PyTorch Deep Learning Bootcamp

Deep Learning Winter School, November 2017. Tel Aviv Deep Learning Bootcamp : http://deep-ml.com. About Tel-Aviv Deep Learning Bootcamp is an intensiv

Shlomo Kashani. 1.3k Sep 4, 2021
TensorFlow Ranking is a library for Learning-to-Rank (LTR) techniques on the TensorFlow platform

TensorFlow Ranking is a library for Learning-to-Rank (LTR) techniques on the TensorFlow platform

null 2.6k Jan 4, 2023
PyTorch code for our ECCV 2018 paper "Image Super-Resolution Using Very Deep Residual Channel Attention Networks"

PyTorch code for our ECCV 2018 paper "Image Super-Resolution Using Very Deep Residual Channel Attention Networks"

Yulun Zhang 1.2k Dec 26, 2022