MOTIF Dataset

The Malware Open-source Threat Intelligence Family (MOTIF) dataset contains 3,095 disarmed PE malware samples from 454 families, labeled with ground truth confidence. Family labels were obtained by surveying thousands of open-source threat reports published by 14 major cybersecurity organizations between Jan. 1st, 2016 and Jan. 1st, 2021. The dataset also provides a comprehensive alias mapping for each family and EMBER raw features for each file.

Further information about the MOTIF dataset is provided in our paper.

If you use the provided data or code, please make sure to cite our paper:

@misc{joyce2021motif,
      title={MOTIF: A Large Malware Reference Dataset with Ground Truth Family Labels},
      author={Robert J. Joyce and Dev Amlani and Charles Nicholas and Edward Raff},
      year={2021},
      eprint={2111.15031},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}

Downloading the Dataset

Due to the size of the dataset, you must use Git LFS in order to clone the repository. Installation instructions are available on the Git LFS project page. On Debian-based systems, the Git LFS package can be installed using:

sudo apt-get install git-lfs

Once Git LFS is installed, you can clone this repository using:

git lfs clone https://github.com/boozallen/MOTIF.git

Dataset Contents

The main dataset is located in dataset/ and contains the following files:

motif_dataset.jsonl

Each line of motif_dataset.jsonl is a JSON object with the following entries (a short loading sketch follows the two field tables below):

Name - Description
md5 - MD5 hash of malware sample
sha1 - SHA-1 hash of malware sample
sha256 - SHA-256 hash of malware sample
reported_hash - Hash of malware sample provided in report
reported_family - Normalized family name provided in report
aliases - List of known aliases for family
label - Unique id for malware family (for ML purposes)
report_source - Name of organization that published report
report_date - Date report was published
report_url - URL of report
report_ioc_url - URL to report appendix (if any)
appeared - Year and month malware sample was first seen

Each JSON object also contains the EMBER raw features (version 2) for the file:

Name - Description
histogram - EMBER byte histogram
byteentropy - EMBER byte-entropy histogram
strings - EMBER strings metadata
general - EMBER general file metadata
header - EMBER PE header metadata
section - EMBER PE section metadata
imports - EMBER imports metadata
exports - EMBER exports metadata
datadirectories - EMBER data directories metadata
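
Because each line of motif_dataset.jsonl is a self-contained JSON object, the file can be read line by line without any special tooling. The following is a minimal sketch, not part of the repository's own tooling; the relative path and the choice of fields are assumptions based on the tables above:

import json

# Minimal sketch: read motif_dataset.jsonl line by line and group sample MD5 hashes by reported family.
# The relative path assumes the repository was cloned into the current working directory.
families = {}
with open("MOTIF/dataset/motif_dataset.jsonl") as f:
    for line in f:
        entry = json.loads(line)
        families.setdefault(entry["reported_family"], []).append(entry["md5"])

print(len(families), "families,", sum(len(v) for v in families.values()), "samples")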

motif_families.csv

This file contains an alias mapping for each of the 454 malware families in the MOTIF dataset. It also contains a succinct description of the family and the threat group or campaign that the family is attributed to (if any).

Column - Description
Aliases - List of known aliases for the family
Description - Brief sentence describing the capabilities of the malware family
Attribution - Name of the threat actor or campaign the malware is attributed to (if any)
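
As a quick illustration, the alias mapping can be used to find which MOTIF family a name from a third-party report belongs to. This sketch assumes pandas is installed and that the Aliases column is stored as a single string per row; check the file itself, since the exact serialization of the alias list is not described above:

import pandas as pd

# Minimal sketch: find the MOTIF family row that mentions a given alias.
# "zeus" is a hypothetical alias used only for illustration.
families = pd.read_csv("MOTIF/dataset/motif_families.csv")
alias = "zeus"
matches = families[families["Aliases"].str.lower().str.contains(alias, na=False)]
print(matches[["Aliases", "Attribution"]])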

motif_reports.csv

This file provides information gathered during our original survey of open-source threat reports. The survey identified 4,369 malware hashes with 595 distinct reported family names, but we were unable to obtain some of the files, and we restricted the MOTIF dataset to files in the PE format. The reported hash, family, source, date, URL, and IOC URL of any malware samples that did not make it into the final MOTIF dataset are located here.

MOTIF.7z

The disarmed malware samples are provided in this 1.47GB encrypted .7z file, which can be unzipped using the following password:

i_assume_all_risk_opening_malware

Each file is named in the format MOTIF_MD5, where MD5 is the file's MD5 hash prior to disarming.
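
If you prefer extracting the archive from Python rather than a standalone 7-Zip tool, a minimal sketch using the third-party py7zr package would look like the following. py7zr is an assumption, not a dependency of this repository, and MOTIF_defanged is a hypothetical output directory chosen only to match the MalConv2 training command below:

import py7zr

# Minimal sketch: extract the encrypted archive of disarmed samples from Python.
# py7zr is an assumed third-party package (pip3 install py7zr), not part of this repository.
with py7zr.SevenZipFile("MOTIF/dataset/MOTIF.7z", mode="r", password="i_assume_all_risk_opening_malware") as archive:
    archive.extractall(path="MOTIF_defanged")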

X_train.dat and y_train.dat

EMBERv2 feature vectors and labels are provided in X_train.dat and y_train.dat, respectively. Feature vectors were computed using LIEF v0.9.0. These files are named for compatibility with the EMBER read_vectorized_features() function. MOTIF is not split into training and test sets; X_train.dat and y_train.dat contain feature vectors and labels for the entire dataset.
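
Because the files follow EMBER's naming convention, they can be loaded either through EMBER's helper or directly as memory-mapped NumPy arrays. A minimal sketch follows; the 2,381-entry vector length and float32 dtype are the standard EMBERv2 layout, stated here as assumptions rather than read from this repository:

import numpy as np

# Minimal sketch: load the MOTIF feature vectors and labels directly with NumPy.
# 2381 is the standard EMBERv2 feature vector length; treat it (and the float32 dtype) as assumptions.
N_FEATURES = 2381
X = np.memmap("MOTIF/dataset/X_train.dat", dtype=np.float32, mode="r").reshape(-1, N_FEATURES)
y = np.memmap("MOTIF/dataset/y_train.dat", dtype=np.float32, mode="r")
print(X.shape, y.shape)  # expect (3095, 2381) and (3095,)

# If the EMBER package from the Requirements section is installed, its reader can be used instead:
# import ember
# X, y = ember.read_vectorized_features("MOTIF/dataset/", subset="train")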

Benchmark Models

We provide code for training the ML models described in our paper, located in benchmarks/. To support these models, code for modified versions of MalConv2 is included in the MalConv2/ directory.

Requirements:

Packages required for training the ML models can be installed using the following commands:

pip3 install -r requirements.txt
python3 setup.py install

Training the LightGBM or outlier detection models also requires EMBER:

pip3 install git+https://github.com/elastic/ember.git

Training the models:

The LightGBM model can be trained using the following command, where /path/to/MOTIF/dataset/ indicates the path to the dataset/ directory.

python3 lgbm.py /path/to/MOTIF/dataset/

The MalConv2 model can be trained using the following command, where /path/to/MOTIF/MOTIF_defanged/ indicates the path to the unzipped folder containing the disarmed malware samples:

python3 malconv.py /path/to/MOTIF/MOTIF_defanged/ /path/to/MOTIF/dataset/motif_dataset.jsonl

The three outlier detection models can be trained using the following command:

python3 outliers.py /path/to/MOTIF/dataset/

Proper Use of Data

Use of this dataset must follow the provided terms of licensing. We intend this dataset to be used for research purposes and have taken measures to prevent abuse by attackers. All files are prevented from running using the same technique as the SOREL dataset. We refer to their statement regarding safety and abuse of the data.

The malware we’re releasing is “disarmed” so that it will not execute. This means it would take knowledge, skill, and time to reconstitute the samples and get them to actually run. That said, we recognize that there is at least some possibility that a skilled attacker could learn techniques from these samples or use samples from the dataset to assemble attack tools to use as part of their malicious activities. However, in reality, there are already many other sources attackers could leverage to gain access to malware information and samples that are easier, faster and more cost effective to use. In other words, this disarmed sample set will have much more value to researchers looking to improve and develop their independent defenses than it will have to attackers.
