Instrument your FastAPI app

Overview

Prometheus FastAPI Instrumentator

A configurable and modular Prometheus Instrumentator for your FastAPI. Install prometheus-fastapi-instrumentator from PyPI. Here is the fast track to get started with a preconfigured instrumentator:

from fastapi import FastAPI
from prometheus_fastapi_instrumentator import Instrumentator

app = FastAPI()

Instrumentator().instrument(app).expose(app)

With this, your FastAPI is instrumented and metrics are ready to be scraped. The defaults give you:

  • Counter http_requests_total with handler, status and method. Total number of requests.
  • Summary http_request_size_bytes with handler. Added up total of the content lengths of all incoming requests.
  • Summary http_response_size_bytes with handler. Added up total of the content lengths of all outgoing responses.
  • Histogram http_request_duration_seconds with handler. Only a few buckets to keep cardinality low.
  • Histogram http_request_duration_highr_seconds without any labels. Large number of buckets (>20).

In addition, the following behaviour is active:

  • Status codes are grouped into 2xx, 3xx and so on.
  • Requests without a matching template are grouped into the handler none.
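
For illustration, a scrape of the default metrics endpoint might then contain lines like these (the handler names and sample values are made up):

http_requests_total{handler="/items/{item_id}",method="GET",status="2xx"} 42.0
http_requests_total{handler="none",method="GET",status="4xx"} 3.0
http_request_duration_seconds_bucket{handler="/items/{item_id}",le="0.5"} 40.0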

If one of these presets does not suit your needs you can do multiple things:

  • Pick one of the already existing closures from metrics and pass it to the instrumentator instance. See here how to do that, or the sketch right after this list.
  • Create your own instrumentation function that you can pass to an instrumentator instance. See here to learn how.
  • Don't use this package at all and just use the source code as inspiration on how to instrument your FastAPI.
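
For illustration, registering one of the existing closures could look like this (a minimal sketch; metrics.latency is just one of the shipped options, and app is the FastAPI instance from above):

from prometheus_fastapi_instrumentator import Instrumentator, metrics

# Register a shipped metric closure instead of relying on the default metric.
Instrumentator().add(metrics.latency()).instrument(app).expose(app)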

Important: This package is not made for generic Prometheus instrumentation in Python. Use the Prometheus client library for that. This package uses it as well.

Features

Beyond the fast track, this instrumentator is highly configurable and very easy to customize and adapt to your specific use case. Here is a list of some of the options you may opt in to:

  • Regex patterns to ignore certain routes.
  • Completely ignore untemplated routes.
  • Control instrumentation and exposition with an env var.
  • Rounding of latencies to a certain decimal number.
  • Renaming of labels and the metric.
  • Metrics endpoint can compress data with gzip.
  • Opt-in metric to monitor the number of requests in progress.

It also features a modular approach to metrics that should instrument all FastAPI endpoints. You can either choose from a set of already existing metrics or create your own. And every metric function can itself be configured. You can see the ready-to-use metrics here.

Advanced Usage

This chapter contains an example of the advanced usage of the Prometheus FastAPI Instrumentator to showcase most of its features. For more concrete info, check out the automatically generated documentation.

Creating the Instrumentator

We start by creating an instance of the Instrumentator. Notice the additional metrics import. This will come in handy later.

from prometheus_fastapi_instrumentator import Instrumentator, metrics

instrumentator = Instrumentator(
    should_group_status_codes=False,
    should_ignore_untemplated=True,
    should_respect_env_var=True,
    should_instrument_requests_inprogress=True,
    excluded_handlers=[".*admin.*", "/metrics"],
    env_var_name="ENABLE_METRICS",
    inprogress_name="inprogress",
    inprogress_labels=True,
)

Unlike in the fast track example, now the instrumentation and exposition will only take place if the environment variable ENABLE_METRICS is true at run-time. This can be helpful in larger deployments with multiple services depending on the same base FastAPI.
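
The variable would normally come from the deployment environment. For a quick local test, a sketch like the following (setting it in-process before instrument() and expose() are called) works as well:

import os

# Enable instrumentation and exposition for this process. In real deployments
# ENABLE_METRICS would be set by the environment, not in code.
os.environ["ENABLE_METRICS"] = "true"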

Adding metrics

Let's say we also want to instrument the size of requests and responses. For this we use the add() method. This method does nothing more than take a function and add it to a list. Then, during run-time, every time FastAPI handles a request, all functions in this list are called and given a single argument that stores useful information like the request and response objects. If add() is not used at all, the default metric gets added in the background. This is what happens in the fast track example.

All instrumentation functions are stored as closures in the metrics module. For more concrete info, check out the automatically generated documentation.

Closures come in handy here because they allow us to configure the functions within.

instrumentator.add(metrics.latency(buckets=(1, 2, 3,)))

This simply adds the metric you also get in the fast track example with a modified buckets argument. But we would also like to record the size of all requests and responses.

instrumentator.add(
    metrics.request_size(
        should_include_handler=True,
        should_include_method=False,
        should_include_status=True,
        metric_namespace="a",
        metric_subsystem="b",
    )
).add(
    metrics.response_size(
        should_include_handler=True,
        should_include_method=False,
        should_include_status=True,
        metric_namespace="namespace",
        metric_subsystem="subsystem",
    )
)

You can add as many metrics as you like to the instrumentator.

Creating new metrics

As already mentioned, it is possible to create custom functions to pass on to add(). This is also how the default metrics are implemented. The documentation and code here is helpful to get an overview.

The basic idea is that the instrumentator creates an info object that contains everything necessary for instrumentation based on the configuration of the instrumentator. This includes the raw request and response objects but also the modified handler, grouped status code and duration. Next, all registered instrumentation functions are called. They get info as their single argument.

Let's say we want to count the number of times a certain language has been requested.

from typing import Callable
from prometheus_fastapi_instrumentator.metrics import Info
from prometheus_client import Counter

def http_requested_languages_total() -> Callable[[Info], None]:
    METRIC = Counter(
        "http_requested_languages_total", 
        "Number of times a certain language has been requested.", 
        labelnames=("langs",)
    )

    def instrumentation(info: Info) -> None:
        langs = set()
        lang_str = info.request.headers["Accept-Language"]
        for element in lang_str.split(","):
            element = element.split(";")[0].strip().lower()
            langs.add(element)
        for language in langs:
            METRIC.labels(language).inc()

    return instrumentation

The function http_requested_languages_total is used for persistent elements that are stored between all instrumentation executions (for example the metric instance itself). Next comes the closure. This function must adhere to the shown interface. It will always get an Info object that contains the request, response and a few other pieces of derived information, for example the (grouped) status code or the handler. Finally, the closure is returned.

Important: The response object inside info can either be the response object or None. In addition, errors thrown in the handler are not caught by the instrumentator. I recommend checking the documentation and/or the source code before creating your own metrics.

To use it, we hand over the closure to the instrumentator object.

instrumentator.add(http_requested_languages_total())
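
Since info.response can be None (see the note above), an instrumentation function that inspects the response should guard against that. Here is a minimal sketch; the metric name and the gzip check are purely illustrative and not part of the package:

def http_gzipped_responses_total() -> Callable[[Info], None]:
    METRIC = Counter(
        "http_gzipped_responses_total",
        "Number of responses sent with gzip content encoding.",
    )

    def instrumentation(info: Info) -> None:
        # info.response may be None, e.g. when the handler raised an exception,
        # so bail out before touching it.
        if info.response is None:
            return
        if info.response.headers.get("Content-Encoding") == "gzip":
            METRIC.inc()

    return instrumentation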

Perform instrumentation

Up to this point, the FastAPI has not been touched at all. Everything has been stored in the instrumentator only. To actually register the instrumentation with FastAPI, the instrument() method has to be called.

instrumentator.instrument(app)

Notice that this will do nothing if should_respect_env_var has been set during construction of the instrumentator object and the respective env var is not found.
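
Instrumentation (and exposition) can also be deferred to a startup handler, which the 5.9.1 release notes further down recommend to prevent crashes on startup. A sketch (the handler name is just an example):

@app.on_event("startup")
async def _instrument() -> None:
    # Run instrumentation once the application is actually starting up.
    instrumentator.instrument(app).expose(app)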

Exposing endpoint

To expose an endpoint for the metrics, either follow the Prometheus Python Client documentation and add the endpoint manually to the FastAPI, or serve it on a separate server. You can also use the included expose() method. It will add an endpoint to the given FastAPI. With should_gzip you can instruct the endpoint to compress the data as long as the client accepts gzip encoding. Prometheus, for example, does so by default. Beware that network bandwidth is often cheaper than CPU cycles.

instrumentator.expose(app, include_in_schema=False, should_gzip=True)

Notice that this will do nothing if should_respect_env_var has been set during construction of the instrumentator object and the respective env var is not found.
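
If you go the manual route mentioned above instead of expose(), a minimal sketch using the Prometheus Python client (the mount path is only an example) could look like this:

from prometheus_client import make_asgi_app

# Mount the Prometheus client's ASGI metrics app; this bypasses expose() entirely.
app.mount("/metrics", make_asgi_app())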

Prerequisites

You can always check pyproject.toml for dependencies.

  • python = "^3.6" (tested with 3.6 and 3.9)
  • fastapi = ">=0.38.1, <=1.0.0" (tested with 0.38.1 and 0.61.0)

Development

Please refer to "DEVELOPMENT.md".

Comments
  • BrokenResourceError on metrics endpoint

    They mention here that it's related to MemoryStream middleware, but I'm not sure it's relevant. https://github.com/tiangolo/fastapi/issues/4041

    Versions: PFI 5.8.2, FastAPI 0.78, Starlette 0.19.1

    Any ideas how to fix it?

    opened by mdczaplicki 16
  • refactor: change middleware implementation to pure asgi

    Instead of a class solution, I went with a function layout (which makes more sense in this case). You can check how this is done in https://github.com/simonw/asgi-cors

    Closes #23

    References:

    • https://asgi.readthedocs.io/
    • https://github.com/encode/starlette/
    opened by Kludex 11
  • BaseHTTPMiddleware vs BackgroundTasks

    @trallnag I've just noticed that we use @app.middleware('http') here, I should have been able to catch this earlier... Anyway, that decorator is implemented on top of BaseHTTPMiddleware, which has a problem: https://github.com/encode/starlette/issues/919

    Solution: change the implementation to a pure ASGI app/middleware.

    PS.: I can open a PR with it, jfyk.

    opened by Kludex 9
  • Use the right type on Response for tests

    Changes

    • Replace starlette.responses.Response by requests.Response on tests, as it's the right type returned by TestClient.
    • Replace middleware implementation from BaseHTTPMiddleware to pure ASGI.

    Can you approve the pipeline @trallnag ? cc @tiangolo - Some tests still failing atm.

    released 
    opened by Kludex 8
  • Adding FastAPI tags to metrics route

    Is there a way to customize where the metrics route is tagged in the generated FastAPI docs? I'm using tags to group routes, but my instrumented route ('/metrics') always ends up in "default".

    enhancement question 
    opened by chisaipete 7
  • Print in middleware component

    In version 5.8.2 the PrometheusInstrumentatorMiddleware prints to stdout on every request; this should not be in the release:

    https://github.com/trallnag/prometheus-fastapi-instrumentator/blob/master/prometheus_fastapi_instrumentator/middleware.py#L80

    Workaround: Downgrade to 5.8.1

    opened by mattzque 5
  • actions: Support python 3.10

    Why

    • Currently python 3.10 is not officially supported and tested
    • python-version as number 3.10 is parsed as 3.1

    What

    • Run commit-tests with python 3.10
    • Change python-version to string as 3.10 would otherwise be parsed as 3.1

    Additional info

    • Tested build in forked-repository https://github.com/Luke31/prometheus-fastapi-instrumentator/pull/1/files#diff-191bb5b4e97db48c9d0bdb945dd00e17b53249422f60a642e9e8d73250b5913aR7
      • Of course can't publish package because no permission to do so https://github.com/Luke31/prometheus-fastapi-instrumentator/runs/6171074586?check_suite_focus=true
    released 
    opened by Luke31 4
  • Example for "manually" pushing a metric

    I'd love to see an example of how to "manually" submit a metric, something like:

    pseudocode:

    @app.get('/super-route')
    async def super_thing():
        business_result = await call_some_business_logic()
        metrics.push(business_result.count)
    

    it's difficult to see how to "interact" with the metrics dynamically in code without going through the request/response object.

    opened by trondhindenes 4
  • 🐛 Expose exceptions raised by other middlewares and app code

    It seems the traceback for other exceptions is currently hidden. This could fix or be related to: https://github.com/trallnag/prometheus-fastapi-instrumentator/issues/108

    It seems it could also be related to: https://github.com/encode/starlette/issues/1634

    opened by tiangolo 3
  • Unnecessary tight version constraint limits FastAPI versions

    Due to this line in the pyproject.toml file:

    fastapi = "^0.38.1"
    

    FastAPI versions newer than 0.38 cannot be used with this (current version of FastAPI is 0.75.2). When explicitly requesting a higher version the version solving fails (using poetry):

    $ poetry update
    Updating dependencies
    Resolving dependencies... (0.0s)
    
      SolverProblemError
    
      Because prometheus-fastapi-instrumentator (5.8.0) depends on fastapi (>=0.38.1,<0.39.0)
       and no versions of prometheus-fastapi-instrumentator match >5.8.0,<6.0.0, prometheus-fastapi-instrumentator (>=5.8.0,<6.0.0) requires fastapi (>=0.38.1,<0.39.0).
      So, because my-repo depends on both fastapi (^0.75.0) and prometheus-fastapi-instrumentator (^5.8.0), version solving failed.
    

    One solution would be relaxing the requirements:

    fastapi = "^0.38"
    

    or

    fastapi = ">=0.38.1, <1.0.0"
    
    opened by graipher 3
  • http_requests_total is only available as a default metric

    Hello, I noticed that the default metrics contain the metric http_requests_total. As this metric is only defined inside the method default, it was necessary to create it as a custom metric:

    def http_requests_total(metric_namespace='', metric_subsystem='') -> Callable[[Info], None]:
        total = Counter(
            name="http_requests_total",
            documentation="Total number of requests by method, status and handler.",
            labelnames=(
                "method",
                "status",
                "handler",
            ),
            namespace=metric_namespace,
            subsystem=metric_subsystem,
        )
    
        def instrumentation(info: Info) -> None:
            total.labels(info.method, info.modified_status, info.modified_handler).inc()
    
        return instrumentation
    

    It would be great to have this metric available as a method like latency and response_size.

    Thanks!

    enhancement 
    opened by jpslopes 3
  • CPU and MEM metrics not available with multiworkers

    Hi,

    Following issue #50, I was able to configure the right metrics when multiple workers are in use in the system. However, I'm not able to get the metrics for CPU and memory. Do you know why?

    Thanks Matteo

    opened by Pazzeo 0
  • chore(master): release 5.9.2

    :robot: I have created a release beep boop

    5.9.2 (2022-12-18)

    Tests

    • Fix failures due to changes in httpx (1726297)
    • Replace deprecated httpx parameter (09aa996)

    CI/CD


    This PR was generated with Release Please. See documentation.

    autorelease: pending 
    opened by github-actions[bot] 0
  • feat: namespace and subsystem configuration.

    • Accept namespace and subsystem parameters in instrument definition.

    Hi, I open this PR to be able to set the namespace and subsystem during instrumentation initialisation.

    Having this parameter is important for projects where several metrics endpoints are fetched and metrics are squashed together.

    This will allow us to have the same behaviour as prometheus-flask-instrumentator (when using the defaults_prefix argument of PrometheusMetrics).

    Thank you.

    opened by phbernardes 1
  • If status_code is HTTPStatus enumeration use value

    As mentioned in #190, the instrumentator has a bug when using the http.HTTPStatus enumeration for a status code response. I fixed this bug and added some tests to verify it.

    I would be very thankful if you would add the HACKTOBERFEST-ACCEPTED label to this pull request.

    opened by nikstuckenbrock 1
  • HTTP request response time includes background task's runtime

    Maybe related to #20

    http_request_duration_highr_seconds_bucket seems to include the runtime of background tasks started during the request in the measured HTTP response time.

    Setup: Python 3.8, Starlette 0.20.4, FastAPI 0.85.0, prometheus-fastapi-instrumentator 5.9.1

    is this expected?

    opened by tonkolviktor 0
Releases (v5.9.1)
  • v5.9.1 (Aug 23, 2022)

    5.9.1 (2022-08-23)

    🍀 Summary 🍀

    No bug fixes or new features. Just an important improvement of the documentation.

    ✨ Highlights ✨

    • Fix / Improve documentation of how to use package (#168). Instrumentation should happen in a function decorated with @app.on_event("startup") to prevent crashes on startup. Thanks to @mdczaplicki and others.

    CI/CD

    • Pin poetry version and improve caching configuration (6337459)

    Docs

    • Improve example in README on how to instrument app (#168) (dc36aac)
  • v5.9.0 (Aug 23, 2022)

    5.9.0 (2022-08-23)

    🍀 Summary 🍀

    This release fixes a small but annoying bug. Beyond that the release includes small internal improvements and bigger changes to CI/CD.

    ✨ Highlights ✨

    • Removed print statement polluting logs (#157). Thanks to all the people raising this issue and to @nikstuckenbrock for fixing it.
    • Added py.typed file to package to improve typing annotations (#137). Thanks to @mmaslowskicc for proposing and implementing this.
    • Changed license from MIT to ISC, which is just like MIT but shorter.
    • Migrated from Semantic Release to Release Please as release management tool.
    • Overall refactoring of project structure to match my (@trallnag) template Python repo.
    • Several improvements to the documentation. Thanks to @jabertuhin, @frodrigo, and @murphp15.
    • Coding style improvements (#155). Replaced a few for loops with list comprehensions. Defaulting an argument to None instead of an empty list. Thanks to @yezz123.

    Features

    • Add py.typed for enhanced typing annotations (#37) (0c67d1b)

    Bug Fixes

    • Remove print statement from middleware (#157) (f89792b)

    Build

    • deps-dev: bump devtools from 0.8.0 to 0.9.0 (#172) (24bb060)
    • deps-dev: bump flake8 from 4.0.1 to 5.0.4 (#179) (8f72053)
    • deps-dev: bump mypy from 0.950 to 0.971 (#174) (60e324f)

    Docs

    • Add missing colon to README (#33) (faef24c)
    • Adjust changelog formatting (b8b7b3e)
    • Fix small typo in readme (#154) (a569d4e)
    • Move docs-internal to docs/devel and adjust contributing (1b446ca)
    • Remove obsolete DEVELOPMENT.md (1c18ff7)
    • Switch license from MIT to ISC (1b0294a)

    CI/CD

    • Add .tool-versions (255ba97)
    • Add codecov.yaml (008ef61)
    • Add explicit codecov token (b264184)
    • Adjust commitlint to allow more subject case types (8b630aa)
    • Correct default branch name (5f141c5)
    • Improve and update scripts (e1d9982)
    • Move to Release Please and refactor overall CI approach (9977665)
    • Remove flake8 ignore W503 (6eab3b8)
    • Remove traces of semantic-release (f0ab8ff)
    • Remove unnecessary include of py.typed from pyproject.toml (#37) (bbad45e)
    • Rename poetry repo for TestPyPI (3f1c500)
    • Restructure poetry project layout (b439ceb)
    • Update gitignore (e0fa528)
    • Update pre-commit config (e725750)

    Refactor

  • v5.8.2 (Jun 12, 2022)

  • v5.8.1 (May 3, 2022)

  • v5.8.0 (May 1, 2022)

    5.8.0 (2022-05-01)

    ⚠ BREAKING CHANGES

    • Removed support for Python 3.6 and overall cleanup
    • dev: Switch from underscores to dashes for function names

    Features

    Code Refactoring

    • dev: Switch from underscores to dashes for function names (1dc0bb3)
    • remove support for python 3.6 and clean (363d353)