Toy example of an applied ML pipeline for me to experiment with MLOps tools.

Toy Machine Learning Pipeline

Table of Contents
  1. About
  2. Getting Started
  3. ML task description and evaluation procedure
  4. Dataset description
  5. Repository structure
  6. Utils documentation
  7. Roadmap
  8. Contributing
  9. Contact

About

This is a toy example of a standalone ML pipeline written entirely in Python. No external tools are incorporated into the master branch. I built this for two reasons:

  1. To experiment with my own ideas for MLOps tools, as it is hard to develop devtools in a vacuum :)
  2. To have something to integrate existing MLOps tools with so I can have real opinions

The following diagram describes the pipeline at a high level. The README describes it in more detail.

[Pipeline architecture diagram]

Getting started

This pipeline is broken down into several components, described at a high level by the directories in this repository. See the Makefile for the various commands you can run; to serve the inference API locally, do the following:

  1. git clone the repository
  2. In the root directory of the repo, run make serve
  3. [OPTIONAL] In a new tab, run make inference to ping the API with some sample records

Python dependency installation and virtual environment creation are handled by the Makefile. See setup.py for the packages installed into the virtual environment, which are mainly standard Python packages such as pandas and scikit-learn.
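Once the server is running, you can also ping the API directly from Python. The sketch below is illustrative only: the port, route, and payload fields are assumptions, so check inference/app.py and the make inference target for the real ones.

import requests

# Hypothetical record; the real feature names come from the featuregen component.
sample_record = {
    "passenger_count": 1,
    "trip_distance": 2.5,
    "PULocationID": 142,
    "DOLocationID": 236,
}

# The host, port, and route here are assumed, not read from inference/app.py.
response = requests.post("http://localhost:5000/predict", json=sample_record)
print(response.status_code, response.json())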

ML task description and evaluation procedure

We train a model to predict whether a passenger in a NYC taxicab ride will give the driver a large tip. This is a binary classification task, where a large tip is arbitrarily defined as more than 20% of the total fare (before tip). Model efficacy is evaluated with the F1 score.

The current best model is an instance of sklearn.ensemble.RandomForestClassifier with max_depth=10 and otherwise default parameters; its test-set F1 score is 0.716. I explored this toy task earlier in my debugging ML talk.
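To make the task concrete, here is a minimal sketch of the label definition and evaluation using scikit-learn. The column names (tip_amount, fare_amount) and the feature handling are illustrative, not the pipeline's exact logic.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

def make_label(df: pd.DataFrame) -> pd.Series:
    # A "large tip" is more than 20% of the pre-tip fare.
    return (df["tip_amount"] > 0.2 * df["fare_amount"]).astype(int)

def evaluate(train_df: pd.DataFrame, test_df: pd.DataFrame, feature_columns: list) -> float:
    # Hypothetical train/evaluate loop mirroring the description above.
    model = RandomForestClassifier(max_depth=10)
    model.fit(train_df[feature_columns], make_label(train_df))
    preds = model.predict(test_df[feature_columns])
    return f1_score(make_label(test_df), preds)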

Dataset description

We use the yellow taxicab trip records from the NYC Taxi & Limousine Commission public dataset, which is stored in a public AWS S3 bucket. The data dictionary is published by the TLC and is also shown below:

VendorID: A code indicating the TPEP provider that provided the record. 1 = Creative Mobile Technologies, LLC; 2 = VeriFone Inc.
tpep_pickup_datetime: The date and time when the meter was engaged.
tpep_dropoff_datetime: The date and time when the meter was disengaged.
Passenger_count: The number of passengers in the vehicle. This is a driver-entered value.
Trip_distance: The elapsed trip distance in miles reported by the taximeter.
PULocationID: TLC Taxi Zone in which the taximeter was engaged.
DOLocationID: TLC Taxi Zone in which the taximeter was disengaged.
RateCodeID: The final rate code in effect at the end of the trip. 1 = Standard rate, 2 = JFK, 3 = Newark, 4 = Nassau or Westchester, 5 = Negotiated fare, 6 = Group ride
Store_and_fwd_flag: Indicates whether the trip record was held in vehicle memory before being sent to the vendor ("store and forward") because the vehicle did not have a connection to the server. Y = store and forward trip; N = not a store and forward trip
Payment_type: A numeric code signifying how the passenger paid for the trip. 1 = Credit card, 2 = Cash, 3 = No charge, 4 = Dispute, 5 = Unknown, 6 = Voided trip
Fare_amount: The time-and-distance fare calculated by the meter.
Extra: Miscellaneous extras and surcharges. Currently, this only includes the $0.50 and $1 rush hour and overnight charges.
MTA_tax: $0.50 MTA tax that is automatically triggered based on the metered rate in use.
Improvement_surcharge: $0.30 improvement surcharge assessed on trips at the flag drop. The improvement surcharge began being levied in 2015.
Tip_amount: Tip amount. This field is automatically populated for credit card tips; cash tips are not included.
Tolls_amount: Total amount of all tolls paid in the trip.
Total_amount: The total amount charged to passengers. Does not include cash tips.
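For a rough sense of what these records look like, the sketch below loads one month of yellow taxi trip data with pandas. The S3 key is a hypothetical example (reading s3:// paths with pandas also requires s3fs); the path the cleaning component actually reads from is defined in etl/cleaning.py.

import pandas as pd

# Hypothetical key for one month of yellow taxi trip records.
url = "s3://nyc-tlc/trip data/yellow_tripdata_2020-01.csv"

df = pd.read_csv(
    url,
    parse_dates=["tpep_pickup_datetime", "tpep_dropoff_datetime"],
)
print(df[["trip_distance", "fare_amount", "tip_amount"]].describe())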

Repository structure

The pipeline contains multiple components, each organized into the following high-level subdirectories:

  • etl
  • training
  • inference

Pipeline components

Any applied ML pipeline is essentially a series of functions applied one after the other, such as data transformations, models, and output transformations. This pipeline was initially built in a lightweight fashion to run on a regular laptop with around 8 GB of RAM. The logic in these components is a first pass; there is a lot of room to improve.

The components of this pipeline, in order, are:

Cleaning: Reads the dataset (stored in a public S3 bucket) and performs very basic cleaning (drops rows outside the time range or with $0-valued fares). Run with make cleaning. File: etl/cleaning.py
Featuregen: Generates basic features for the ML model. Run with make featuregen. File: etl/featuregen.py
Split: Splits the features into train and test sets. Run with make split. File: training/split.py
Training: Trains a random forest classifier on the train set and evaluates it on the test set. Run with make training. File: training/train.py
Inference: Locally serves an API that is essentially a wrapper around the predict function. Run with make serve and make inference. Files: inference/app.py, inference/inference.py
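To show how the components hand data to each other, here is a rough sketch of the shape of a component script, written against the utils.io helpers documented later in this README. The component names and the feature itself are placeholders, not the repository's exact logic.

from utils import io

def run_featuregen() -> str:
    # Load the latest versioned output of the upstream cleaning component
    # ("clean" is the component name used in the io docstrings below).
    df = io.load_output_df("clean")

    # Placeholder feature; the real features are built in etl/featuregen.py.
    df["is_long_trip"] = (df["trip_distance"] > 5).astype(int)

    # Write a new versioned output for this (hypothetical) component name.
    return io.save_output_df(df, component="featuregen")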

Data storage

The inputs and outputs for the pipeline components, as well as other artifacts, are stored in a public S3 bucket named toy-applied-ml-pipeline located in us-west-1. Read access is universal and doesn't require special permissions. Write access is limited to those with credentials. If you are interested in contributing and want write access, please contact me directly describing how you would like to be involved, and I can send you keys.

The bucket has a scratch folder for miscellaneous scratch files, most of which were likely generated by the write_file function in utils.io. The bulk of the bucket lives in the dev directory, s3://toy-applied-ml-pipeline/dev.

The dev directory's subdirectories correspond to the pipeline components; each contains that component's outputs, versioned by the timestamp at which the component was run. The utils.io library contains helper functions to write outputs and to load the latest output of one component as the input to another. To inspect the filesystem structure further, you can call io.list_files(dirname), which returns the immediate files in dirname.

If you have write permissions, store your keys and IDs in a .env file; the Makefile will pick it up automatically. If you do not have write permissions, attempts to write to the S3 bucket will fail with an error.
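Here is a minimal sketch of poking around the bucket from Python using the utils.io helpers documented in the next section; the directory and component names are illustrative.

from utils import io

# List the immediate files under the dev directory of the bucket.
print(io.list_files("dev"))

# Load the latest versioned output of the cleaning component; read access
# does not require credentials.
clean_df = io.load_output_df("clean", dev=True)
print(clean_df.head())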

Utils documentation

The utils directory contains helper functions and abstractions for expanding upon the current pipeline. Tests are in utils/tests.py. Note that only the io functions are tested as of now.

io

utils/io.py contains various helper functions to interface with S3. The two most useful functions are:

def load_output_df(component: str, dev: bool = True, version: str = None) -> pd.DataFrame:
    """
    This function loads the latest version of data that was produced by a component.
    Args:
        component (str): component name that we want to get the output from
        dev (bool): whether this is run in development or "production" mode
        version (str, optional): specified version of the data
    Returns:
        df (pd.DataFrame): dataframe corresponding to the data in the latest version of the output for the specified component
    """
    ...

def save_output_df(df: pd.DataFrame, component: str, dev: bool = True, overwrite: bool = False, version: str = None) -> str:
    """
    This function writes the output of a pipeline component (a dataframe) to a parquet file.
    Args:
        df (pd.DataFrame): dataframe representing the output
        component (str): name of the component that produced the output (ex: clean)
        dev (bool, optional): whether this is run in development or "production" mode
        overwrite (bool, optional): whether to overwrite a file with the same name
        version (str, optional): optional version for the output. If not specified, the function will create the version number.
    Returns:
        path (str): Full path that the file can be accessed at
    """
    ...

Note that save_output_df's default parameters are set such that you cannot overwrite an existing file. You can change this by setting overwrite = True.
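Below is a small usage sketch of a save/load round trip, assuming write access; the component name follows the "clean" example from the docstring above, and the toy dataframe is purely illustrative.

import pandas as pd
from utils import io

df = pd.DataFrame({"fare_amount": [10.0, 52.0], "tip_amount": [2.0, 13.0]})

# Writes a new timestamp-versioned parquet file; this raises an error if the
# version already exists and overwrite is left at its default of False.
path = io.save_output_df(df, component="clean", dev=True)
print(f"wrote {path}")

# Reads back the latest version for the same component.
latest = io.load_output_df(component="clean", dev=True)
print(latest.shape)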

Feature generators

utils/feature_generators.py contains a lightweight abstraction for feature generators, making it easy to create new features. The abstraction is as follows:

from abc import ABC, abstractmethod
import typing


class FeatureGenerator(ABC):
    """Abstract class for a feature generator."""

    def __init__(self, name: str, required_columns: typing.List[str]):
        """Constructor stores the name of the feature and columns required in a df to construct that feature."""
        self.name = name
        self.required_columns = required_columns

    @abstractmethod
    def compute(self):
        pass

    @abstractmethod
    def schema(self):
        pass

See utils/feature_generators.py for examples of how to create specific feature types and etl/featuregen.py for an example of how to instantiate the features themselves.
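For illustration, here is a hypothetical feature generator built on the abstraction above. The compute and schema signatures are assumptions (the abstract methods do not pin them down), so treat this as a sketch rather than one of the repository's actual feature types.

import pandas as pd
from utils.feature_generators import FeatureGenerator

class TripSpeedGenerator(FeatureGenerator):
    """Hypothetical feature: average trip speed in miles per hour."""

    def __init__(self):
        super().__init__(
            name="trip_speed",
            required_columns=[
                "trip_distance",
                "tpep_pickup_datetime",
                "tpep_dropoff_datetime",
            ],
        )

    def compute(self, df: pd.DataFrame) -> pd.Series:
        # Hours between meter engage and disengage.
        duration_hours = (
            df["tpep_dropoff_datetime"] - df["tpep_pickup_datetime"]
        ).dt.total_seconds() / 3600
        return df["trip_distance"] / duration_hours

    def schema(self) -> dict:
        # Assumed to map the feature name to its dtype.
        return {self.name: "float64"}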

Models

utils/models.py contains the ModelWrapper abstraction. This abstraction is essentially a wrapper around a model and consists of:

  • the model binary
  • pointer to dataset(s)
  • metric values

To use this abstraction, create a subclass of ModelWrapper and implement the preprocess, train, predict, and score methods. The base class also provides methods to save and load the ModelWrapper object; saving fails if the client has not added data paths and metrics to the object.
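As a rough sketch of what such a subclass looks like (the base-class constructor and exact method signatures are assumptions inferred from the usage example below, so consult utils/models.py for the real API):

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from utils import models

class LogisticRegressionModelWrapper(models.ModelWrapper):
    """Hypothetical alternative to RandomForestModelWrapper."""

    def __init__(self, feature_columns, model_params=None):
        super().__init__()  # assumes a no-argument base constructor
        self.feature_columns = feature_columns
        self.model = LogisticRegression(**(model_params or {}))

    def preprocess(self, df: pd.DataFrame) -> pd.DataFrame:
        return df[self.feature_columns].fillna(0)

    def train(self, df: pd.DataFrame, label_column: str) -> None:
        self.model.fit(self.preprocess(df), df[label_column])

    def predict(self, df: pd.DataFrame):
        return self.model.predict(self.preprocess(df))

    def score(self, df: pd.DataFrame, label_column: str) -> float:
        return f1_score(df[label_column], self.predict(df))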

An example subclass of ModelWrapper is RandomForestModelWrapper, also found in utils/models.py. Its client usage appears in training/train.py and is partially shown below:

from utils import models

# Create and train model
mw = models.RandomForestModelWrapper(
    feature_columns=feature_columns, model_params=model_params)
mw.train(train_df, label_column)

# Score model
train_score = mw.score(train_df, label_column)
test_score = mw.score(test_df, label_column)

mw.add_data_path('train_df', train_file_path)
mw.add_data_path('test_df', test_file_path)
mw.add_metric('train_f1', train_score)
mw.add_metric('test_f1', test_score)

# Save model
print(mw.save('training/models'))

# Load latest model version
reloaded_mw = models.RandomForestModelWrapper.load('training/models')
test_preds = reloaded_mw.predict(test_df)

Roadmap

See the open issues for tickets corresponding to feature ideas. The issues in this repo are mainly tagged either data science or engineering.

Contributing

A toy example of an ML pipeline isn't only useful for people experimenting with MLOps tools; ML beginners and data science enthusiasts looking to understand how to build pipelines around ML models can also benefit from this repository.

Anyone is welcome to contribute, and your contribution is greatly appreciated! Feel free to open issues or submit pull requests that address them.

  1. Fork the repo
  2. Create your branch (git checkout -b YOUR_GITHUB_USERNAME/somefeature)
  3. Make changes and add files to the commit (git add .)
  4. Commit your changes (git commit -m 'Add something')
  5. Push to your branch (git push origin YOUR_GITHUB_USERNAME/somefeature)
  6. Make a pull request

Contact

Original author: Shreya Shankar

Email: [email protected]

Comments
  • Create the first EDA notebook

    Closes issue #50.

    • Create the eda directory and the first EDA notebook.
    • Add checkpoints files into .gitignore

Instructions for how to run the notebooks still need to be added to the README.

    best,

    opened by alcazar90 3
  • Problems running the pipeline

    Hi @shreyashankar !

    Thanks for sharing the project.

    I want to understand the implementation to start contributing to the tasks open in the roadmap. However, I have problems when I try to run the first component of the pipeline.

    docker run --env-file=./.env toy-ml-pipeline cleaning

[screenshot of the error output]

    I am a newbie with docker; I built the image perfectly following the instructions.

[screenshot of the successful docker build]

The error message tells me that Docker couldn't find a ./.env file when I ran docker run --env-file.

Reading this blog post, section "3. Take values from a file (env_file)", Docker expects to find a ./.env file in the current directory. Is this file created when the image is built, or is it supposed to already be in the repository?

    Best, Cristóbal

    opened by alcazar90 1
  • Write a script to run all the components

    Write a script to run all the components

    Currently, one can only run individual components, and they must do so by running several make commands in succession. Write a command or script to run all the components.

Labels: good first issue, engineering
    opened by shreyashankar 0
  • Better cleaning logic

    Better cleaning logic

    The current cleaning logic (remove_zero_fare_and_oob_rows in utils/helpers.py) is very basic. Do some EDA to come up with better criteria for "clean" data.

    data science 
    opened by shreyashankar 0
  • Improve on current model

    Improve on current model

    Some ideas:

    • better features
    • different model architecture

    To incorporate, submit a PR with a comparison between the new model's results and the current best model's results.

    data science 
    opened by shreyashankar 0
Owner
Shreya Shankar
Trying to make machine learning work in the real world. Previously at @viaduct-ai, @google-research, @facebook, and @Stanford computer science.