Push Prometheus metrics to VictoriaMetrics or other exporters

Overview


prometheus-push-client

Push metrics from your periodic long-running jobs to an existing Prometheus/VictoriaMetrics monitoring system.

Currently supports pushes directly to VictoriaMetrics in InfluxDB line format, via UDP or HTTP (the influx_* clients below).

For pure Prometheus setups, several options are supported:

  • to pushgateway or prom-aggregation-gateway in OpenMetrics format via HTTP. Please read the corresponding docs about appropriate use cases and limitations;
  • to StatsD or statsd-exporter in StatsD format via UDP. Prometheus and StatsD metric types are not fully compatible, so currently all metrics become StatsD gauges, but rate, increase, histogram_quantile and other PromQL functions produce the same results as if the types had never changed.

Install it via pip:

pip install prometheus-push-client

Metrics

This library uses the prometheus-client metric implementations, with some minor tweaks.

Separate registry

New metric constructors use a separate PUSH_REGISTRY as the default, so they don't interfere with metrics already defined and monitored in existing projects.
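
A quick way to see where a new metric ends up (a minimal sketch; generate_latest is the standard prometheus-client text exposition helper, and the metric name is made up):

import prometheus_client
import prometheus_push_client as ppc


requests_total = ppc.Counter(
    name="requests_total",
    labelnames=["path"],
)
requests_total.labels(path="/ping").inc()

# the counter is collected from ppc.PUSH_REGISTRY ...
print(prometheus_client.generate_latest(ppc.PUSH_REGISTRY).decode())
# ... and is absent from the default prometheus_client.REGISTRY
print(prometheus_client.generate_latest(prometheus_client.REGISTRY).decode())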

Default labelvalues

With regular prometheus_client, defaults may be defined either for none or for all of the labels (via labelvalues), but that's not enough. Moreover, labelvalues sometimes doesn't work as expected.

We probably want to define some defaults, like hostname. More importantly, if we use a VictoriaMetrics cluster, the VictoriaMetrics_AccountID label (0 is a reasonable default) must always be set, and metrics without it will be ignored.

The following example shows how to use defaults and how to override them if necessary.

import socket

import prometheus_push_client as ppc


counter1 = ppc.Counter(
    name="c1",
    labelnames=["VictoriaMetrics_AccountID", "host", "event_type"],
    default_labelvalues={
        "VictoriaMetrics_AccountID": 0,
        "host": socket.gethostname(),
    }
)


# regular usage
counter1.labels(event_type="login").inc()

# overriding defaults
counter1.labels(host="non-default", event_type="login").inc()
# same effect as above: with positional labelvalues, defaults
# are applied to the "missing" labels at the beginning
counter1.labels("non-default", "login").inc()

Metrics with no labels are initialized at creation time. This can have an unpleasant side effect: if we initialize lots of metrics that are not used in the currently running job, batch clients will have to push their non-changing values in every synchronization session.

To avoid that, we either have to properly isolate each task's metrics, which can be tricky or even impossible, or we can create metrics with default, non-changing labels (like hostname). Such metrics are initialized on first use (inc), so we push only the ones we actually used, as sketched below.
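
A minimal sketch of that approach (the metric name and the job function are made up; the constructor arguments mirror the example above):

import socket

import prometheus_push_client as ppc


# no samples exist until .labels(...).inc() is called for the first time,
# so jobs that never touch this counter push nothing for it
jobs_failed = ppc.Counter(
    name="jobs_failed",
    labelnames=["host", "job_name"],
    default_labelvalues={"host": socket.gethostname()},
)


def run_job(job_name):
    try:
        ...  # the actual work
    except Exception:
        # first use: only now does a labelled sample appear in PUSH_REGISTRY
        jobs_failed.labels(job_name=job_name).inc()
        raise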

Clients

Batch clients

Batch clients spawn synchronization jobs in the background (in a thread or an asyncio task) to periodically send all metrics from ppc.PUSH_REGISTRY to the destination.

Clients will attempt to stop gracefully, synchronizing the registry one last time after the job exits or crashes. Sometimes this may mess up sampling, but the worst case I could artificially create looks like this:

(figure: graceful push effect)

The best way to use them is via decorators / context managers. These clients are intended for long-running but finite tasks, which could be spawned anywhere and are therefore not easily accessible by the scraper. If that's not the case -- just use "passive mode" with the scraper instead.

def influx_udp_async(host, port, period=15.0):
def influx_udp_thread(host, port, period=15.0):
def statsd_udp_async(host, port, period=15.0):
def statsd_udp_thread(host, port, period=15.0):
def influx_http_async(url, verb="POST", period=15.0):
def influx_http_thread(url, verb="POST", period=15.0):
def openmetrics_http_async(url, verb="POST", period=15.0):
def openmetrics_http_thread(url, verb="POST", period=15.0):

Usage example:

import prometheus_push_client as ppc


req_hist = ppc.Histogram(
    name="external_requests",
    namespace="acme"
    subsystem="job123",
    unit="seconds",
    labelnames=["service"]
)


@ppc.influx_udp_async("victoria.acme.inc.net", 9876, period=15)
async def main(urls):
    # the job ...
    req_hist.labels(gethostname(url)).observe(response.elapsed)

# OR

async def main(urls):
    async with ppc.influx_udp_async("victoria.acme.inc.net", 9876, period=15):
        # the job ...
        req_hist.labels(gethostname(url)).observe(response.elapsed)

Please read about the mandatory job label, which must be included in the URL when using pushgateway, as shown below.
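
For example, pushgateway expects the grouping key in the URL path, so a batch client pointed at it might look like this (a sketch; the host and job name are made up):

import prometheus_push_client as ppc


# pushgateway groups pushed metrics by the /metrics/job/<job_name> path segment
@ppc.openmetrics_http_thread("http://pushgateway.acme.inc.net:9091/metrics/job/job123", period=15)
def main():
    ...  # the job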

Streaming clients

If for some reason every metric change needs to be synced, this library also implements UDP streaming clients.

def influx_udp_aiostream(host, port):
def influx_udp_stream(host, port):
def statsd_udp_aiostream(host, port):
def statsd_udp_stream(host, port):

Usage is completely identical to batch clients' decorators / context managers.

⚠️ The Histogram and Summary .time() decorator doesn't work in this mode at the moment, because it can't be monkey-patched easily.
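
As a workaround, you can time the call yourself and pass the result to observe() (a minimal sketch; call_service and its arguments are made up):

import time

import prometheus_push_client as ppc


req_hist = ppc.Histogram(
    name="external_requests",
    namespace="acme",
    subsystem="job123",
    unit="seconds",
    labelnames=["service"],
)


def call_service(service, func, *args, **kwargs):
    started = time.perf_counter()
    try:
        return func(*args, **kwargs)
    finally:
        # manual timing instead of the unsupported .time() decorator
        req_hist.labels(service).observe(time.perf_counter() - started)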

Transports

The main goal is not to interrupt measured jobs with errors from the monitoring code. Therefore, all transports attempt to catch all network errors, logging the error info and corresponding tracebacks to stdout.
