Python client for InfluxDB

Overview

InfluxDB-Python

InfluxDB-Python is a client for interacting with InfluxDB.

Development of this library is maintained by:

  • @aviau (https://github.com/aviau)
  • @xginn8 (https://github.com/xginn8)
  • @sebito91 (https://github.com/sebito91)

InfluxDB is an open-source distributed time series database. Find more about InfluxDB at https://docs.influxdata.com/influxdb/latest

InfluxDB pre v1.1.0 users

This module is tested with InfluxDB versions: v1.2.4, v1.3.9, v1.4.3, v1.5.4, v1.6.4, and v1.7.4.

Users still on InfluxDB v0.8.x may use the legacy client by importing it with from influxdb.influxdb08 import InfluxDBClient.

Installation

Install, upgrade and uninstall influxdb-python with these commands:

$ pip install influxdb
$ pip install --upgrade influxdb
$ pip uninstall influxdb

On Debian/Ubuntu, you can install it with this command:

$ sudo apt-get install python-influxdb

Dependencies

The influxdb-python distribution is supported and tested on Python 2.7, 3.5, 3.6, 3.7, PyPy and PyPy3.

Note: Python versions <3.5 are currently untested. See .travis.yml.

The main dependency is:

  • requests: HTTP library used for communicating with InfluxDB

Additional, optional dependencies are:

  • pandas: for writing from and reading to DataFrames (DataFrameClient)
  • Sphinx: for generating the documentation

Documentation

Documentation is available at https://influxdb-python.readthedocs.io/en/latest/.

You will need Sphinx installed to generate the documentation.

The documentation can be generated by running:

$ tox -e docs

Generated documentation can be found in the docs/build/html/ directory.

Examples

Here's a basic example (for more see the examples directory):

$ python

>>> from influxdb import InfluxDBClient

>>> json_body = [
    {
        "measurement": "cpu_load_short",
        "tags": {
            "host": "server01",
            "region": "us-west"
        },
        "time": "2009-11-10T23:00:00Z",
        "fields": {
            "value": 0.64
        }
    }
]

>>> client = InfluxDBClient('localhost', 8086, 'root', 'root', 'example')

>>> client.create_database('example')

>>> client.write_points(json_body)

>>> result = client.query('select value from cpu_load_short;')

>>> print("Result: {0}".format(result))

Testing

Make sure you have tox installed by running the following:

$ pip install tox

To test influxdb-python against multiple versions of Python, you can use Tox:

$ tox

Support

For issues with, questions about, or feedback on InfluxDB, please see our community page: http://influxdb.com/community/.

We are also lurking on the following:

  • #influxdb on irc.freenode.net
  • #influxdb on gophers.slack.com

Development

All development is done on GitHub. Use Issues to report problems or submit contributions.

Please note that we WILL get to your questions/issues/concerns as quickly as possible. We maintain many software repositories and sometimes things may get pushed to the backburner. Please don't take offense, we will do our best to reply as soon as possible!

Source code

The source code is currently available on GitHub: https://github.com/influxdata/influxdb-python

TODO

The TODO/roadmap can be found in the GitHub issue tracker: https://github.com/influxdata/influxdb-python/issues

Comments
  • Adds a helper for adding data points

    Implements #89. Subclassing this helper eases writing data points in bulk. All data points are immutable, ensuring they do not get overwritten. Each subclass can write to its own database. The time series names can also be based on one or more defined fields.

    Annotated example:

        class MySeriesHelper(SeriesHelper):
            class Meta:
                # Meta class stores time series helper configuration.
                client = TestSeriesHelper.client
                # The client should be an instance of InfluxDBClient.
                series_name = 'events.stats.{server_name}'
                # The series name must be a string. Add dependent field names in curly brackets.
                fields = ['time', 'server_name']
                # Defines all the fields in this time series.
                bulk_size = 5
                # Defines the number of data points to store prior to writing on the wire.
    
        # The following will create *five* (immutable) data points.
        # Since bulk_size is set to 5, upon the fifth construction call, *all* data
        # points will be written on the wire via MySeriesHelper.Meta.client.
        MySeriesHelper(server_name='us.east-1', time=159)
        MySeriesHelper(server_name='us.east-1', time=158)
        MySeriesHelper(server_name='us.east-1', time=157)
        MySeriesHelper(server_name='us.east-1', time=156)
        MySeriesHelper(server_name='us.east-1', time=155)
    
        # To manually submit data points which are not yet written, call commit:
        MySeriesHelper.commit()
    
        # To inspect the JSON which will be written, call _json_body_():
        MySeriesHelper._json_body_()
    
    opened by ChristopherRabotin 37
  • TODO

    TODO LIST

    • [ ] Review the docs
    • [ ] Update and test all the tutorials
    • [ ] Fix the DataFrameClient and improve coverage (see #108 )
    • [ ] Implement more operations (user management, etc.)

    DONE

    • [x] Debug UDP support (I couldn't get it working)
    • [x] Fix the SeriesHelper
    • [x] Support writing in batch (see #102)
    opened by aviau 23
  • Add support for messagepack

    Description

    This pull request implements fetching data from InfluxDB using msgpack serialization instead of JSON. This fixes issues related to serialization ambiguities in JSON and makes database queries faster. It does not break compatibility with older InfluxDB versions that do not support msgpack.

    Related issues

    Closes #733 Fixes #665 Fixes #715
    Fixes #625

    Performance

    Here is a performance comparison before (JSON) and after (msgpack) this pull request, for a simple SELECT * request on a measurement with two tags and three fields (integer, float, and string):

    performance comparison

    Using msgpack gives a ~25% speedup.
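
    For reference, here is a hedged sketch of the underlying HTTP content negotiation, using requests and msgpack directly rather than this client's internal code (the /query endpoint, db and q parameters, and the application/x-msgpack Accept header come from InfluxDB's HTTP API; host, port and database names are assumptions):

        import msgpack
        import requests

        # Ask the server for a msgpack-encoded response instead of JSON.
        resp = requests.get(
            "http://localhost:8086/query",
            params={"db": "example", "q": "SELECT * FROM cpu_load_short"},
            headers={"Accept": "application/x-msgpack"},
        )
        data = msgpack.unpackb(resp.content, raw=False)  # decoded result body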

    opened by lovasoa 19
  • Properly use chunk_size param

    chunk_size=0 was already a keyword argument of client.query(), but it was not actually passed through to response.iter_lines.

    This PR properly passes the parameter down, defaulting to requests.models.ITER_CHUNK_SIZE.
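
    A minimal hedged sketch of using the parameters this PR wires up (chunked and chunk_size are keyword arguments of InfluxDBClient.query(); the query and database are assumptions, and the exact shape of a chunked result can vary between client versions):

        # Request a streamed, chunked response instead of one large body.
        result = client.query('SELECT * FROM cpu_load_short',
                              chunked=True, chunk_size=10000)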

    pending contributor 
    opened by anthonyserious 17
  • Add a test module against a real running server

    This PR brings two things:

    • Adds a client_test_with_server.py test module that, upon execution, launches a fresh InfluxDB server instance in a temporary location.
    • Corrects a few issues detected in the client code itself.

    Any comments?

    client_test_with_server.py could now easily be extended with other (more complex) test cases.

    opened by gst 17
  • Read and write pandas DataFrame

    pandas support -- is that something you want to add?

    I didn't manage to get tox working (I have never used it before), so I ran the tests manually on 2.7 and 3.4.

    opened by timtroendle 16
  • Pandas 0.24.0 TypeError TzInfo

    I just upgraded from pandas 0.23.4 to pandas 0.24.0.

    With pandas 0.24.0 and influxdb 5.2.1:

    from influxdb import DataFrameClient
    
    client = DataFrameClient(host='localhost', database='any_db')
    client.query('select * from table_1 limit 2;')
    

    leads to TypeError: Already tz-aware, use tz_convert to convert.

    The same query directly on the database returns:

    name: table_1
    time                value              sensor
    ----                -----              ------
    1493103600000000000 168.4412670512293  CTA201___DP04____MesF_PV
    1493103600000000000 8.964873657408459  CTA201___TE05____MesF_PV
    

    Note that it works great with pandas 0.23.4 for me despite #671

    opened by theophilechevalier 14
  • added support for python 2.6.6

    • fixing flake8/pep8 issues
    • fixing flake8/pep8 issues again
    • travis for Python 2.6
    • fixing AttributeError on python 2.6.9
    • pep8
    • pep8
    • travis checks for 2.6.x
    • unittest for travis checks for 2.6.x
    • passing travis tests 2.7 - forward
    • implemented changes suggested by @xginn8
    • copied requirements and init.py from upstream
    • Added back the AttributeError exception
    • Added back the AttributeError exception and included datetime
    • removed comment

    wontfix pending contributor 
    opened by eorochena 14
  • data frame with tag columns

    I would like to insert the following pandas data frame into influx.

    | [index]    | tag1   | tag2   | field1 | field2 |
    | ---------- | ------ | ------ | ------ | ------ |
    | 2016-01-01 | "MSFT" | "NYSE" | 1.1    | 2.2    |
    | 2016-01-01 | "AAPL" | "NYSE" | 3.3    | 4.4    |

    [index] is the index of the data frame. tag1 and tag2 are influx tags. field1 and field2 are influx fields.

    How could I do this by using DataFrameClient? Thanks.
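
    A hedged sketch of one way to do this with newer client versions, via the tag_columns and field_columns keyword arguments of DataFrameClient.write_points() (the measurement name 'stocks' and the connection settings are assumptions):

        import pandas as pd
        from influxdb import DataFrameClient

        # Build the frame shown above; the index becomes the InfluxDB timestamp.
        df = pd.DataFrame(
            {"tag1": ["MSFT", "AAPL"], "tag2": ["NYSE", "NYSE"],
             "field1": [1.1, 3.3], "field2": [2.2, 4.4]},
            index=pd.to_datetime(["2016-01-01", "2016-01-01"]))

        client = DataFrameClient(host='localhost', port=8086, database='example')
        # Tell the client which columns are tags and which are fields.
        client.write_points(df, 'stocks',
                            tag_columns=['tag1', 'tag2'],
                            field_columns=['field1', 'field2'])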

    opened by mingzhou 14
  • Add support for InfluxDB line protocol write API

    InfluxDB recently introduced a new line protocol write API, for more efficient bulk entry of points:

    https://github.com/influxdb/influxdb/pull/2696

    It would be awesome if support for this could be added in the Python client!
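
    For context, later releases of this client expose the line protocol via the protocol argument of write_points(); a minimal hedged sketch (the point string mirrors the README example, and the nanosecond timestamp corresponds to 2009-11-10T23:00:00Z):

        # Write a pre-formatted line protocol string instead of a JSON point.
        line = 'cpu_load_short,host=server01,region=us-west value=0.64 1257894000000000000'
        client.write_points([line], protocol='line')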

    opened by victorhooi 14
  • Add Example for sending information to DB via UDP

    Due to the lack of documentation for UDP, this example provides basic usage of sending data points via UDP. The code structure follows that of the other examples in the examples directory.

    Locally tested with InfluxDB v1.6.1 on Windows 10 and influxdb-python v5.0.0; it should not break anything.
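
    A hedged sketch of the UDP path this example covers, based on the client's use_udp/udp_port options and send_packet() (the UDP port shown, 8089, must match the server's [[udp]] configuration, and json_body is the point list from the README example):

        from influxdb import InfluxDBClient

        # UDP writes go to the port configured in the server's [[udp]] section;
        # that section also fixes the target database, so none is passed here.
        client = InfluxDBClient(host='localhost', port=8086,
                                use_udp=True, udp_port=8089)
        client.send_packet(json_body)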

    Things To do

    • [x] Update Documentation

    This PR addresses #646 #647

    Signed-off-by: Shantanoo [email protected]

    pending contributor 
    opened by shantanoo-desai 13