A data engineering project with Kafka, Spark Streaming, dbt, Docker, Airflow, Terraform, GCP and much more!

Overview

Streamify

Description

Objective

The project streams events generated by a fake music streaming service (similar to Spotify) and builds a data pipeline that consumes them in real time. Incoming events resemble user activity such as listening to a song, navigating the website, or authenticating. The data is processed in real time and written to the data lake periodically (every two minutes). An hourly batch job then consumes this data, applies transformations, and creates the tables our dashboard needs to generate analytics. We analyze metrics like popular songs, active users, user demographics etc.
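As an illustration, a single "listen" event flowing through the pipeline might look like the sketch below. The field names and the helper function are assumptions for illustration only, not Eventsim's exact schema:

```python
import json
import time

def make_listen_event(user_id: int, song: str, artist: str) -> str:
    """Build a hypothetical 'listen' event, serialized as JSON for Kafka.

    Field names here are illustrative, not Eventsim's real schema.
    """
    event = {
        "event_type": "listen",         # could also be "page_view" or "auth"
        "user_id": user_id,
        "song": song,
        "artist": artist,
        "ts": int(time.time() * 1000),  # event time in epoch milliseconds
    }
    return json.dumps(event)

payload = make_listen_event(42, "Yellow", "Coldplay")
print(json.loads(payload)["event_type"])  # listen
```

A producer would publish such payloads to a Kafka topic, and the Spark Streaming job would deserialize them, bucket them by event time, and write to the data lake.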

Dataset

Eventsim is a program that generates event data to replicate page requests for a fake music website. The results look like real user data, but are completely fake. The Docker image is borrowed from viirya's fork, as the original project has gone unmaintained for a few years now.

Eventsim uses song data from the Million Song Dataset to generate events. I have used a subset of 10,000 songs.

Tools & Technologies

Architecture

streamify-architecture

Final Result

dashboard

Setup

WARNING: You will be charged for all the infrastructure you set up. You can get $300 in free credits by creating a new account on GCP.

Pre-requisites

If you already have a Google Cloud account and a working terraform setup, you can skip the pre-requisite steps.

Get Going!

A video walkthrough of how I run my project - YouTube Video

  • Procure infra on GCP with Terraform - Setup
  • (Extra) SSH into your VMs, Forward Ports - Setup
  • Setup Kafka Compute Instance and start sending messages from Eventsim - Setup
  • Setup Spark Cluster for stream processing - Setup
  • Setup Airflow on Compute Instance to trigger the hourly data pipeline - Setup

Debug

If you run into issues, see if you find something in this debug guide.

How can I make this better?!

A lot can still be done :).

  • Choose managed Infra
    • Cloud Composer for Airflow
    • Confluent Cloud for Kafka
  • Create your own VPC network
  • Build dimensions and facts incrementally instead of full refresh
  • Write data quality tests
  • Create dimensional models for additional business processes
  • Include CI/CD
  • Add more visualizations

Special Mentions

I'd like to thank DataTalks.Club for offering this Data Engineering course completely free of charge. Everything I learned there enabled me to come up with this project. If you want to upskill in Data Engineering technologies, please check out the course. :)

Comments
  • The number of topics in the Kafka Control Center did not increase

    After creating the Docker container for the Kafka instance and running the script to send messages to it with Eventsim, the number of topics in the Control Center (port 9021) would not increase. It remained at 40.

    Notice the warning at the top. What could that mean please?

    (screenshot of the Control Center warning)

    opened by topefolorunso 1
  • Error while provisioning spark cluster with terraform

    I used e2-standard-2 for the Kafka and Airflow instances, and e2-medium for both the master and worker in the Spark cluster. The Kafka and Airflow instances were provisioned successfully.

    google_dataproc_cluster.mulitnode_spark_cluster: Creating...
    β•·
    β”‚ Error: Error creating Dataproc cluster: googleapi: Error 400: Multiple validation errors:
    β”‚  - Insufficient 'CPUS' quota. Requested 6.0, available 4.0.
    β”‚  - Insufficient 'IN_USE_ADDRESSES' quota. Requested 3.0, available 2.0.
    β”‚  - This request exceeds CPU quota. Some things to try: request fewer workers (a minimum of 2 is required), 
    use smaller master and/or worker machine types (such as n1-standard-2)., badRequest
    β”‚
    β”‚   with google_dataproc_cluster.mulitnode_spark_cluster,
    β”‚   on main.tf line 93, in resource "google_dataproc_cluster" "mulitnode_spark_cluster":
    β”‚   93: resource "google_dataproc_cluster" "mulitnode_spark_cluster" {
    β”‚
    
    opened by topefolorunso 3
  • I have an error when I launch terraform apply

    β•·
    β”‚ Error: Invalid value for network: project: required field is not set
    β”‚
    β”‚   with google_compute_firewall.rules,
    β”‚   on main.tf line 19, in resource "google_compute_firewall" "port_rules":
    β”‚   19: resource "google_compute_firewall" "port_rules" {
    β•΅
    β•·
    β”‚ Error: project: required field is not set
    β”‚
    β”‚   with google_compute_instance.kafka_vm_instance,
    β”‚   on main.tf line 35, in resource "google_compute_instance" "kafka_vm_instance":
    β”‚   35: resource "google_compute_instance" "kafka_vm_instance" {
    β•΅
    β•·
    β”‚ Error: project: required field is not set
    β”‚
    β”‚   with google_compute_instance.airflow_vm_instance,
    β”‚   on main.tf line 56, in resource "google_compute_instance" "airflow_vm_instance":
    β”‚   56: resource "google_compute_instance" "airflow_vm_instance" {
    β•΅
    β•·
    β”‚ Error: project: required field is not set
    β”‚
    β”‚   with google_storage_bucket.bucket,
    β”‚   on main.tf line 75, in resource "google_storage_bucket" "bucket":
    β”‚   75: resource "google_storage_bucket" "bucket" {
    β•΅
    β•·
    β”‚ Error: project: required field is not set
    β”‚
    β”‚   with google_dataproc_cluster.mulitnode_spark_cluster,
    β”‚   on main.tf line 93, in resource "google_dataproc_cluster" "mulitnode_spark_cluster":
    β”‚   93: resource "google_dataproc_cluster" "mulitnode_spark_cluster" {
    β•΅
    β•·
    β”‚ Error: project: required field is not set
    β”‚
    β”‚   with google_bigquery_dataset.stg_dataset,
    β”‚   on main.tf line 139, in resource "google_bigquery_dataset" "stg_dataset":
    β”‚   139: resource "google_bigquery_dataset" "stg_dataset" {
    β•΅
    β•·
    β”‚ Error: project: required field is not set
    β”‚
    β”‚   with google_bigquery_dataset.prod_dataset,
    β”‚   on main.tf line 146, in resource "google_bigquery_dataset" "prod_dataset":
    β”‚   146: resource "google_bigquery_dataset" "prod_dataset" {
    β•΅

    opened by mbiombani 6
  • Command for spark-submit

    I have a question regarding the command for spark-submit:

    spark-submit \
        --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.1.2 \
        stream_all_events.py
    

    What is the meaning of the packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.1.2?

    opened by stephenllh 27
Owner
Ankur Chavda
Data Engineer-ish