First-party data integration solution built for marketing teams, enabling audience and conversion onboarding into Google Marketing products (Google Ads, Campaign Manager, Google Analytics).

Overview

Megalista

Sample integration code for onboarding offline/CRM data from BigQuery as custom audiences or offline conversions in Google Ads, Google Analytics 360, Google Display & Video 360 and Google Campaign Manager.

Disclaimer: This is not an officially supported Google product.

Supported integrations

  • Google Ads

    • Contact Info Customer Match (email, phone, address) [details]
    • ID-based Customer Match (device ID, user ID)
    • Offline Conversions through gclid [details]
    • Store Sales Direct (SSD) conversions [details]
  • Google Analytics (Universal Analytics)

  • Campaign Manager

    • Offline Conversions API (user id, device id, match id, gclid, dclid) [details]
  • Google Analytics 4

  • Appsflyer

    • S2S Offline events API (conversion upload), to be used for audience creation and in-app events with Google Ads and DV360 [details]

How it works

Megalista was designed to separate the configuration of conversion/audience upload rules from the engine, giving non-technical teams (i.e. Media and Business Intelligence) the freedom to set up multiple upload rules on their own.

The solution consists of (1) a Google Spreadsheet (template) in which all rules are defined by mapping a data source (BigQuery table) to a destination (data upload endpoint), and (2) an Apache Beam workflow running on Google Dataflow, scheduled to upload the data in batch mode.
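Conceptually, each row of the configuration Spreadsheet is one upload rule. A minimal sketch of that mapping (field and endpoint names here are illustrative, not Megalista's actual schema):

```python
from dataclasses import dataclass

@dataclass
class UploadRule:
    """One configuration row: a BigQuery source mapped to an upload endpoint."""
    source_dataset: str  # BigQuery dataset holding the data
    source_table: str    # BigQuery table to read
    destination: str     # upload endpoint, e.g. a Customer Match list type

    def source_ref(self) -> str:
        # Fully qualified table reference the engine would read from.
        return f"{self.source_dataset}.{self.source_table}"

rule = UploadRule("crm", "newsletter_audience", "ADS_CUSTOMER_MATCH_CONTACT_INFO")
print(rule.source_ref())  # crm.newsletter_audience
```

Because the rules live in the sheet rather than in code, adding a new upload only means adding a row, not redeploying the pipeline.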

Prerequisites

Google Cloud Services

  • Google Cloud Platform account
    • Billing enabled
    • BigQuery enabled
    • Dataflow enabled
    • Cloud Storage enabled
    • Cloud Scheduler enabled
  • At least one of:
    • Google Ads API Access
    • Campaign Manager API Access
    • Google Analytics API Access
  • Python3
  • Google Cloud SDK

Access Requirements

These are the minimum roles required to deploy Megalista:

  • OAuth Config Editor
  • BigQuery User
  • BigQuery Job User
  • BigQuery Data Viewer
  • Cloud Scheduler Admin
  • Storage Admin
  • Dataflow Admin
  • Service Account Admin
  • Logs Viewer
  • Service Consumer

APIs

Required APIs depend on the upload endpoints in use. We recommend enabling all of them:

  • Google Sheets (required for any use case) [link]
  • Google Analytics [link]
  • Google Analytics Reporting [link]
  • Google Ads [link]
  • Campaign Manager [link]

Installation

Create a copy of the configuration Spreadsheet

WIP

Creating required access tokens

To access campaigns and user lists on Google's platforms, this Dataflow pipeline needs OAuth tokens for an account that can authenticate in those systems.

To create them, follow these steps:

  • Access the GCP console
  • Go to the APIs & Services section on the top-left menu
  • On the OAuth Consent Screen, configure an application name
  • Then, go to Credentials and create an OAuth client ID with the application type set to Desktop App
  • This will generate a Client ID and a Client Secret
  • Run the generate_megalist_token.sh script in this folder, providing these two values, and follow the instructions
    • Sample: ./generate_megalist_token.sh client_id client_secret
  • This will generate the Access Token and the Refresh Token
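The script automates a standard OAuth 2.0 installed-app flow. As a rough illustration of its first step — building the consent URL — using only the standard library (the exact flow, redirect URI, and scopes used by generate_megalist_token.sh may differ):

```python
from urllib.parse import urlencode

def consent_url(client_id: str, scopes: list) -> str:
    """Build a Google OAuth 2.0 consent URL for an installed (desktop) app.

    access_type=offline is what makes Google return a refresh token in
    addition to the short-lived access token.
    """
    params = {
        "client_id": client_id,
        "redirect_uri": "http://localhost:8080",  # desktop-app loopback redirect
        "response_type": "code",
        "scope": " ".join(scopes),
        "access_type": "offline",
        "prompt": "consent",
    }
    return "https://accounts.google.com/o/oauth2/v2/auth?" + urlencode(params)

print(consent_url("my-client-id", ["https://www.googleapis.com/auth/adwords"]))
```

Visiting that URL, authorizing, and exchanging the returned code at Google's token endpoint is what yields the access/refresh token pair the pipeline needs.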

Creating a bucket on Cloud Storage

This bucket will hold the deployed code for this solution. To create it, navigate to the Storage link on the top-left menu on GCP and click on Create bucket. You can use Regional location and Standard data type for this bucket.

Running Megalista

We recommend first running it locally to make sure that everything works. Create some sample tables on BigQuery for one of the uploaders and check that the data reaches the destination correctly. After that is done, upload the Dataflow template to GCP and run it manually via the UI to make sure it works. Lastly, configure Cloud Scheduler to run Megalista at the desired frequency and you'll have a fully functional data integration pipeline.

Running locally

python3 megalist_dataflow/main.py \
  --runner DirectRunner \
  --developer_token ${GOOGLE_ADS_DEVELOPER_TOKEN} \
  --setup_sheet_id ${CONFIGURATION_SHEET_ID} \
  --refresh_token ${REFRESH_TOKEN} \
  --access_token ${ACCESS_TOKEN} \
  --client_id ${CLIENT_ID} \
  --client_secret ${CLIENT_SECRET} \
  --project ${GCP_PROJECT_ID} \
  --region us-central1 \
  --temp_location gs://${GCS_BUCKET}/tmp

Deploying Pipeline

To deploy, use the following command: ./deploy_cloud.sh project_id bucket_name region_name

Manually executing pipeline using Dataflow UI

To execute the pipeline, use the following steps:

  • Go to Dataflow on GCP console
  • Click on Create job from template
  • On the template selection dropdown, select Custom template
  • Find the megalist file in the templates folder of the bucket you've created
  • Fill in the parameters required and execute

Scheduling pipeline

To schedule daily/hourly runs, use Cloud Scheduler.

Creating a Service Account

It's recommended to create a new Service Account to be used with Cloud Scheduler.

  • Go to IAM & Admin > Service Accounts
  • Create a new Service Account with the following roles:
    • Cloud Dataflow Service Agent
    • Dataflow Admin
    • Storage Objects Viewer

Usage

Every upload method expects as its source a BigQuery table with specific fields, in addition to specific configuration metadata. For details on how to set up your upload routines, refer to the Megalista Wiki or the Megalista user guide.
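One concrete field requirement worth knowing: Google Ads Contact Info Customer Match expects identifiers such as emails to be normalized and SHA-256 hashed before upload. A minimal sketch of that preparation (the exact column names and any additional normalization rules are defined by the endpoint, not shown here):

```python
import hashlib

def normalize_and_hash(email: str) -> str:
    """Trim, lowercase, and SHA-256 hash an email — the general shape of the
    normalization Google Ads Customer Match expects for contact-info uploads."""
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

print(normalize_and_hash("  User@Example.com "))
```

Whether hashing happens in your BigQuery source table or in the pipeline, the destination only ever receives the hashed form.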

Issues
  • Add Firestore source

    Hello. I've implemented a Firestore source, which is meant to work as an alternative to Sheets for parametrization purposes.

    • Why Firestore? Some of our clients are unable to access the Spreadsheets domain for security reasons, and Firestore proved to be a great option. It provides reliable and dynamic storage, and is quite simple to use. Also, the expected usage level for Megalist should fall within the free tier.

    Additionally, Firestore has great integration with App Engine. In a future PR, I’d like to add a highly customizable App Engine form integrated with Firestore, providing an easy-to-use alternative to Sheets, especially for non-technical users unable to access it.

    • Requirements: For now, Firestore usage requires a GCP project with native Firestore mode.

    • Usage: The default fields for any upload type are: active (yes/no), bq_dataset, bq_table, source and type. Valid upload types and their required fields can be seen in the firestore_execution_source file.

    As with Sheets, account IDs are included separately. In this case, in a Firestore document called account_config, within the same collection. In other words, the hierarchy is: Firestore collection -> document entries for each schedule + account_config document.
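    The hierarchy above can be pictured as follows (the collection and document names are illustrative; the field names are the defaults listed earlier):

```python
# Illustrative snapshot of the Firestore collection layout: one document per
# schedule, plus a separate account_config document holding account IDs.
collection = {
    "account_config": {
        "google_ads_account_id": "123-456-7890",  # hypothetical account field
    },
    "daily_audience_upload": {  # one schedule entry
        "active": "yes",
        "bq_dataset": "crm",
        "bq_table": "newsletter_audience",
        "source": "big_query",
        "type": "customer_match",
    },
}

# The engine would treat every document except account_config as a schedule.
schedules = [name for name in collection if name != "account_config"]
print(schedules)
```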

    In order to check Firestore, Megalist requires the setup_firestore_collection command line parameter. If setup_sheet_id is provided, Sheets will be used instead.

    • Improvement opportunities:
    1. For now, the Firestore source expects BigQuery parameters, as it is the only ingest option currently available. This should be made flexible in the future, to allow options such as GCS.

    2. The list of parameter metadata was included in firestore_execution_source, and could be modularized in the future.

    • Testing: I have only been able to test uploads to Google Ads and Google Analytics so far, as we generally lack access and test data for other platforms. Help with further testing would be greatly appreciated.
    opened by nivaldoh 10
  • Add partial failure support for Google Ads

    Hi, we've added support for partial failures in Google Ads conversion uploads. By default, if a single row contains errors, the entire batch is blocked. This change aims to allow any valid rows to be uploaded, regardless of errors in other rows in the same batch. For more information: https://developers.google.com/adwords/api/docs/guides/partial-failure
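    The effect of partial-failure mode can be sketched like this: the API applies every valid operation and reports the index of each failing one, so the caller only needs to map those indices back to rows (the response shape here is simplified, not the actual Ads API payload):

```python
def accepted_rows(rows, failed_indices):
    """Given the uploaded rows and the per-row failure indices reported by the
    API (simplified shape), return the rows that actually went through."""
    failed = set(failed_indices)
    return [row for i, row in enumerate(rows) if i not in failed]

batch = [{"gclid": "CjA1"}, {"gclid": ""}, {"gclid": "CjB2"}]
# Suppose the API reported index 1 as invalid (empty gclid).
ok = accepted_rows(batch, [1])
print(len(ok))  # 2
```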

    Topics for future consideration:

    1. We could make this optional, if needed.
    2. Megalist currently shows only the number of rows that reached the uploader. We intend to contribute again soon with changes that display the number of rows actually accepted by the API, as well as logs that record each invalid row and the reason for its rejection.
    opened by nivaldoh 10
  • Fixed error when saving the uploaded data to a BigQuery table

    The process fails to create a BigQuery table with the uploaded data:

    table_name = self._bq_ops_dataset.get() + '.' + execution.source.source_metadata[1] + "_uploaded"
    TypeError: unsupported operand type(s) for +: 'NoneType' and 'str' 
    

    The fix was to use execution.source.source_metadata[0] instead of self._bq_ops_dataset.get(), which was returning None instead of the dataset name.
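    The underlying pattern is an optional configured value being concatenated without a fallback. A sketch of the guard (the helper name is illustrative, not the project's actual code):

```python
def build_table_name(bq_ops_dataset, source_metadata):
    """Choose the dataset for the '_uploaded' results table, falling back to
    the source's own dataset when no ops dataset is configured, which avoids
    concatenating None with a string."""
    dataset = bq_ops_dataset if bq_ops_dataset is not None else source_metadata[0]
    return dataset + "." + source_metadata[1] + "_uploaded"

print(build_table_name(None, ["crm", "audience"]))  # crm.audience_uploaded
```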

    opened by gabrielmpaula 7
  • Update README.md

    Added note that App Engine application needs to be created.

    opened by blevitan516 5
  • Documentation mismatch in Google Ads Customer Match Device ID schema

    According to the documentation, the expected schema for the Google Ads Customer Match Device ID table is:

    | Column name | Type | Description | Requirement |
    | :---: | :---: | :---: | :---: |
    | mobile_device_id | STRING | Mobile device ID identifier (Android AdId or iOS IDFA) | required |

    But in the source code, the field is mobileId.

    def get_row_keys(self) -> List[str]:
        return ['mobileId']
    
    opened by xfer 2
  • Recode change and reformatted

    This change contains:

    • [x] reformatted code for easier reading and maintenance
    • [x] changed %s formatting to f-strings for readability and fewer errors
    • [x] passed local flake8 checks
    opened by slowy07 2
  • CM Offline API additional fields

    Hi Álvaro,

    Please review the additional fields I've added for the CM Offline API.

    Regards, Brian

    opened by blevitan516 2
  • Question: project name is megalist or megalista?

    I have a doubt, @astivi and @caiotomazelli

    The name of the repository and the documentation is Megalista, but the folder structure and some parameters use the name megalist.

    We understand that the name of the solution is Megalista, and that all code must use the megalist nomenclature. Is that correct?

    opened by joaquimsn 2
  • Added uid/cid logic to measurement protocol

    Using a list comprehension, added logic to accept both cid and uid in Measurement Protocol hits. Also added support for custom metrics and simplified the code, replacing the previous lambda-based approach.

    opened by Hatuna 2
  • Removed App Engine dependency from Cloud Scheduler

    Changed google_cloud_scheduler_job region to "Etc/UTC" to avoid the App Engine app requirement error that gets thrown if an App Engine app isn't created before running ./terraform_deploy.sh.

    This is a workaround for a bug in Terraform. See more background here.

    I will change this back to var.region once the terraform bug is fixed.

    opened by blevitan516 1
  • Customer Match Upload with login_customer_id

    Changed to use the AccountConfig Customer ID as the login_customer_id for Google Ads API requests. Each audience's Metadata 5 (Account), if present, is used as the customer_id in requests; if this metadata does not exist, the AccountConfig Customer ID is also used as the customer_id.

    This allows uploading audiences to non-MCC accounts, where the manager account needs to be passed as the login_customer_id in the new Google Ads API.

    opened by alvarolamas10 0
  • Create blank.yml

    (No description provided.)

    opened by mloonrach 0
  • Add REPLACE option in Customer Match for Google Ads

    Adds a new option to Customer Match for Google Ads called REPLACE (METADATA2). With this option, users will be able to replace the whole Customer Match list.

    opened by diogoaihara 0
Releases: v4.2

Owner: Google