Leverage Twitter API v2 to analyze tweet metrics such as impressions and profile clicks over time.

Overview

Tweetmetric

Tweetmetric allows you to track various metrics on your most recent tweets, such as impressions, retweets and clicks on your profile.

(Example screenshot)

The code is written in Python, and the frontend uses Dash (Plotly's web framework). Tweetmetric uses Redis as a fast in-memory database.
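
The sketch below shows how these pieces typically fit together: a Dash app that pulls values out of Redis and plots them. The Redis key layout, component names, and tweet id are illustrative assumptions and do not reflect Tweetmetric's actual code.

# Minimal sketch of the stack (hypothetical Redis schema, not Tweetmetric's own)
import redis
import plotly.graph_objects as go
from dash import Dash, dcc, html

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def impressions_figure(tweet_id):
    # Assume impressions are stored in a hash "tweet:<id>:impressions" mapping timestamp -> count
    data = r.hgetall(f"tweet:{tweet_id}:impressions")
    timestamps = sorted(data)
    counts = [int(data[t]) for t in timestamps]
    return go.Figure(go.Scatter(x=timestamps, y=counts, mode="lines+markers"))

app = Dash(__name__)
app.layout = html.Div([
    html.H1("Tweet impressions over time"),
    dcc.Graph(figure=impressions_figure("1234567890")),  # hypothetical tweet id
])

if __name__ == "__main__":
    app.run_server(debug=True)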

Install and run

Installation

Run the following commands to install the dependencies and start Redis:

pip install redis dash pandas tweepy pytz
sudo apt install redis
redis-server

If you want the database to persist across reboots, enable Redis AOF by adding appendonly yes to your Redis configuration file (usually /etc/redis/redis.conf).
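
If you prefer not to edit the file by hand, the same setting can be applied from Python with redis-py (installed above). This is only an optional sketch, assuming Redis is running locally on the default port and was started with a writable config file:

# Optional: enable the append-only file from redis-py instead of editing redis.conf
import redis

r = redis.Redis(host="localhost", port=6379)
r.config_set("appendonly", "yes")  # turn AOF on for the running server
r.config_rewrite()                 # persist the change back to redis.conf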

Getting Twitter tokens

Tweetmetric uses private metrics that can only be accessed by the tweet's owner, so you need to provide your own API keys for the program to work.

  • Request a Twitter API key on the Twitter developer portal. This only takes a couple of minutes; you need a verified phone number on your account.
  • Generate a user token for the app you just created on the developer dashboard.
  • You should now have five secrets provided by Twitter. Store them in their corresponding strings inside api_secrets.py; a sketch of how they can be used follows below.
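
The five secrets correspond to the variables shipped in api_secrets.py (API_KEY, API_KEY_SECRET, BEARER_TOKEN, USER_ACCESS_TOKEN, USER_ACCESS_TOKEN_SECRET). As a rough sketch of how such user-context credentials can reach the owner-only metrics with Tweepy (Tweetmetric's internal wiring may differ):

# Sketch: user-context Tweepy client reading owner-only (non-public) metrics
import tweepy
import api_secrets

client = tweepy.Client(
    consumer_key=api_secrets.API_KEY,
    consumer_secret=api_secrets.API_KEY_SECRET,
    access_token=api_secrets.USER_ACCESS_TOKEN,
    access_token_secret=api_secrets.USER_ACCESS_TOKEN_SECRET,
)

# non_public_metrics (impressions, profile clicks, ...) are only returned for
# tweets owned by the authenticated user, using OAuth 1.0a user context.
response = client.get_tweets(
    ids=["1234567890"],  # hypothetical tweet id
    tweet_fields=["public_metrics", "non_public_metrics"],
    user_auth=True,
)
for tweet in response.data:
    print(tweet.non_public_metrics)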

Start

Your environment should now be ready. To run the server in the background:

./launch.sh

This command displays server logs, but exiting with Ctrl-C will not kill the server.


Comments
  • Distribution as ready-to-use Docker containers

    Goal of the PR:

    1/ Refactor the sources to make them more "cloud native friendly": agnostic Docker containers that are configurable via environment variables and that redirect their output to stdout/stderr. The HTTP part should also be made configurable so it can be hosted easily. Issue #3

    2/ The repo has been forked to a GitLab instance to make it easier to set up pipelines that publish to Docker Hub: https://gitlab.comwork.io/oss/Tweetmetric

    The images are published to Docker Hub automatically on every change.

    3/ Added a requirements.txt to make installing the dependencies easier (the version numbers should be pinned to guard even better against breaking changes). Issue #2

    4/ Added a proposed open-source license (Apache 2.0).

    Future proposal: ARMhf images for Raspberry Pi.

    opened by idrissneumann 0
  • Suggestion: use environment variables for the configuration values

    The following values in the api_secrets.py file:

    API_KEY = 'YOUR TOKEN HERE'
    API_KEY_SECRET = 'YOUR TOKEN HERE'
    BEARER_TOKEN = 'YOUR TOKEN HERE'
    USER_ACCESS_TOKEN = 'YOUR TOKEN HERE'
    USER_ACCESS_TOKEN_SECRET = 'YOUR TOKEN HERE'
    

    should be replaced by environment variables (using the os module). This is pretty easy; you can replace your code as follows:

    import os
    
    API_KEY = os.environ['API_KEY']
    # ...
    

    As for the #2 issue, it will make dockerization easier. In the end, you'll be able to provide an immutable OCI image with all the dependencies already built in and ready to run. Users will just have to export the environment variables (or create a .env file for their docker-compose) before running the container.

    opened by idrissneumann 0
  • Suggestion: requirements.txt

    Suggestion: add the dependencies to a requirements.txt with pinned versions to guard against future breaking changes.

    It would also be great to provide a Docker image (and the requirements.txt would make updates to that image build easier).

    BTW thanks for your work!

    opened by idrissneumann 0
Owner
Mathis HAMMEL
Competitive programmer, CTF player