AFLFast (extends AFL with Power Schedules)

Overview

AFLFast

Power schedules implemented by Marcel Böhme <[email protected]>. AFLFast is an extension of AFL, which is written and maintained by Michal Zalewski <[email protected]>.

Update: Check out AFL++, which is actively maintained and implements the AFLFast power schedules!

AFLFast is a fork of AFL that has been shown to outperform AFL 1.96b by an order of magnitude! It helped in the success of Team Codejitsu at the finals of the DARPA Cyber Grand Challenge where their bot Galactica took 2nd place in terms of #POVs proven (see red bar at https://www.cybergrandchallenge.com/event#results). AFLFast exposed several previously unreported CVEs that could not be exposed by AFL in 24 hours and otherwise exposed vulnerabilities significantly faster than AFL while generating orders of magnitude more unique crashes.

Essentially, we observed that most generated inputs exercise the same few "high-frequency" paths, and we developed strategies that gravitate towards low-frequency paths so as to stress significantly more program behavior in the same amount of time. We devised several search strategies that decide in which order the seeds should be fuzzed, and power schedules that smartly regulate the number of inputs generated from a seed (i.e., the time spent fuzzing a seed). We call the number of inputs generated from a seed the seed's energy.

We find that AFL's exploitation-based constant schedule assigns too much energy to seeds exercising high-frequency paths (e.g., paths that reject invalid inputs) and not enough energy to seeds exercising low-frequency paths (e.g., paths that stress interesting behaviors). Technically, we modified the computation of a seed's performance score (calculate_score), which seed is marked as favourite (update_bitmap_score), and which seed is chosen next from the circular queue (main). We implemented the following schedules (in the order of their effectiveness, best first):

AFL flag             Power Schedule
-p fast (default)    FAST
-p coe               COE
-p explore           EXPLORE
-p quad              QUAD
-p lin               LIN
-p exploit (AFL)     EXPLOIT
In these schedules, α(i) is the performance score that AFL computes for the seed input i, β(i) > 1 is a constant, s(i) is the number of times that seed i has been chosen from the queue, f(i) is the number of generated inputs that exercise the same path as seed i, and μ is the average number of generated inputs exercising a path.
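As a rough reference only (reconstructed from the CCS'16 paper, which remains the authoritative source; M below denotes an upper bound on the energy assigned to a seed), the schedules assign energy p(i) approximately as follows:

    FAST:     p(i) = min( α(i)/β · 2^s(i) / f(i), M )
    COE:      p(i) = 0 if f(i) > μ, otherwise min( α(i)/β · 2^s(i), M )
    EXPLORE:  p(i) = α(i)/β
    QUAD:     p(i) = min( α(i)/β · s(i)^2 / f(i), M )
    LIN:      p(i) = min( α(i)/β · s(i) / f(i), M )
    EXPLOIT:  p(i) = α(i)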

More details can be found in our paper that was recently accepted at the 23rd ACM Conference on Computer and Communications Security (CCS'16).

PS: The most recent version of AFL (2.33b) implements the explore schedule, which yielded a significant performance boost. We are currently conducting experiments with a hybrid version of AFLFast and 2.33b and will report back soon.

PPS: In parallel mode (several instances with a shared queue), we suggest running the master with the exploit schedule (-p exploit) and the slaves with a combination of the cut-off-exponential (-p coe), exponential (-p fast; default), and explore (-p explore) schedules. In single mode, the default settings will do. EDIT: In parallel mode, AFLFast seems to perform poorly because the path probability estimates are incorrect for the imported seeds. Pull requests to fix this issue by syncing the estimates across instances are appreciated :)
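As an illustration only (the seeds/sync_dir directories, instance names, and ./target binary are placeholders; -M/-S are AFL's standard parallel-mode flags, and the target is assumed to read its input from the file substituted for @@), such a setup could look like:

    ~/aflfast/afl-fuzz -i seeds -o sync_dir -M master  -p exploit -- ./target @@
    ~/aflfast/afl-fuzz -i seeds -o sync_dir -S slave01 -p coe     -- ./target @@
    ~/aflfast/afl-fuzz -i seeds -o sync_dir -S slave02 -p fast    -- ./target @@
    ~/aflfast/afl-fuzz -i seeds -o sync_dir -S slave03 -p explore -- ./target @@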

Copyright 2013, 2014, 2015, 2016 Google Inc. All rights reserved. Released under terms and conditions of Apache License, Version 2.0.

Comments
  • Solve empty and non-performing seed case

    Sorry in advance, this is a bit longer... (instructions to reproduce are at the end). TL;DR: If every seed has perf_score == 0, AFLFast (default schedule) will stop fuzzing and only cycle through the seeds.

    Why (longer version): I ran some experiments with AFLFast (default schedule) using the empty seed, as described in the paper. While this works just fine on several targets (e.g. binutils), it implodes on targets that are "a bit harder to explore", due to an edge case in the performance score calculation.
    By "imploding" I mean: AFLFast runs fine for a few thousand iterations (depending on the target, ~5-10k total execs), then suddenly the exec speed drops to 0/sec and the cycles-done counter explodes into the millions.

    As of now I've hit this problem with 2/8 targets (tcpdump and djpeg are causing issues).

    Problem: I believe the problem lies in the performance score calculation. Here are the relevant code snippets:

    Inside calculate_score():

    case FAST:
      if (q->fuzz_level < 16) {
        factor = ((u32) (1 << q->fuzz_level)) / (fuzz == 0 ? 1 : fuzz);
      } else {
        factor = MAX_FACTOR / (fuzz == 0 ? 1 : next_p2 (fuzz));
      }
      break;

    /* after the switch, the factor is capped and applied to the score: */
    if (factor > MAX_FACTOR)
      factor = MAX_FACTOR;
    perf_score *= factor / POWER_BETA;


    Where calculate_score() is used:

      orig_perf = perf_score = calculate_score(queue_cur);

      if (perf_score == 0) goto abandon_entry;
    
    • Scenario 1:

    If we happen to find something with our empty seed, new paths will be scheduled, q->fuzz_level remains fairly small, and factor is most likely > 0. Even if factor == 0 and therefore the resulting perf_score == 0, we have no problem, as we simply skip to another seed.

    • Scenario 2:

    If we happen not to find any new seeds, q->fuzz_level steadily increases, and as soon as q->fuzz_level >= 16 we enter the else branch:

    • factor = MAX_FACTOR / (fuzz == 0 ? 1 : next_p2 (fuzz)); //will almost always be 0, as fuzz is already high
    • perf_score *= factor / POWER_BETA; //will then always be 0
    • if (perf_score == 0) goto abandon_entry //will then always be true

    This means we constantly skip to the next seed, but since we only have the single empty seed, we keep skipping to the same one. The cycle counter explodes and fuzzing is skipped altogether.
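    To make the integer division concrete, here is a hypothetical walk-through (MAX_FACTOR's actual value is defined in afl-fuzz.c; 32 is only an assumed placeholder):

      /* assume MAX_FACTOR == 32 (placeholder) and fuzz == 1000              */
      /* next_p2(1000) == 1024                                               */
      /* factor = 32 / 1024 == 0                    (integer division)       */
      /* perf_score *= 0 / POWER_BETA  ->  perf_score == 0  ->  seed skipped */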

    This could be solved in two ways:

    1. Either patch the performance score above, e.g. set factor = MAX_FACTOR.
    2. Or implement a check so that we don't skip a seed if there is nothing to skip to, which is what this PR is for. We could do either of the following (see the sketch after this list):
       2.1 Check whether all seeds have a performance score of 0 and, if so, don't skip. This is probably expensive.
       2.2 Check queued_paths and only skip if we have a few. To be accurate, this doesn't completely solve the edge case: if every seed has a performance score of 0, it still occurs. However, with this fix the chances of it occurring should be much lower, and the changes to the original implementation are minimal. This is the suggestion of this PR.
       2.3 Similar checks are implemented at https://github.com/derdav3/aflfast/blob/master/afl-fuzz.c#L5045, so another solution would be to check for pending_favorites, but I didn't investigate this further.
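    A minimal sketch of option 2.2, assuming AFL's existing queued_paths counter (illustrative only, not the exact diff of this PR):

      orig_perf = perf_score = calculate_score(queue_cur);

      /* Only abandon a zero-score seed when the queue holds other paths to
         move on to; with a single (empty) seed we keep fuzzing it instead. */
      if (perf_score == 0 && queued_paths > 1) goto abandon_entry;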

    With the suggested code change, all my targets run just fine.

    Reproduction

    • Target:
    sudo apt-get install libpcap-dev
    
    wget https://www.tcpdump.org/release/tcpdump-4.9.2.tar.gz
    tar -xf tcpdump-4.9.2.tar.gz
    cd tcpdump-4.9.2
    
    CC=~/afl/afl-clang-fast CXX=~/afl/afl-clang-fast++ LD=~/afl/afl-clang-fast ./configure
    make -j $(nproc)
    
    • Seed: echo "" > empty
    • Fuzzing: ~/aflfast/afl-fuzz -i ./empty/ -o ./out -- ./tcpdump -nr @@

    PS: Depending on your clang/llvm version, you might need this patch (https://github.com/derdav3/aflfast/commit/2793c0920f3db51fb9bcb80301018d7e032fafc5) to compile aflfast.

    opened by ackdav 4
  • Fix -Q flag

    The change that added the -p command line argument broke the -Q flag, which is not supposed to take an argument and thus should not be followed by : in the getopt string.

    opened by EliaGeretto 1