Asynchronous Python client for InfluxDB

Overview

aioinflux


Asynchronous Python client for InfluxDB. Built on top of aiohttp and asyncio. Aioinflux is an alternative to the official InfluxDB Python client.

Aioinflux supports interacting with InfluxDB in a non-blocking way by using aiohttp. It also supports writing and querying of Pandas dataframes, among other handy functionality.

Please refer to the documentation for more details.

Installation

Python 3.6+ is required. You also need to have access to a running instance of InfluxDB.

pip install aioinflux

Quick start

This sums up most of what you can do with aioinflux:

import asyncio
from aioinflux import InfluxDBClient

point = {
    'time': '2009-11-10T23:00:00Z',
    'measurement': 'cpu_load_short',
    'tags': {'host': 'server01',
             'region': 'us-west'},
    'fields': {'value': 0.64}
}

async def main():
    async with InfluxDBClient(db='testdb') as client:
        await client.create_database(db='testdb')
        await client.write(point)
        resp = await client.query('SELECT value FROM cpu_load_short')
        print(resp)


asyncio.get_event_loop().run_until_complete(main())
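
Note: on Python 3.7+, asyncio.run(main()) is an equivalent, more idiomatic way to run the example's final line.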

See the documentation for more detailed usage.

Comments
  • Optional pandas/numpy dependencies

    Issue:

    Pandas/NumPy are not required for Influx interaction, but are dependencies for aioinflux.

    When developing for a Raspberry Pi target, this becomes an issue, as Pandas/NumPy do not provide compiled packages for ARMv7. Compiling these packages on a Raspberry Pi 3 takes roughly an hour. That's a bit much for an unused dependency.

    Desired behavior:

    No functional changes for clients that use the dataframe serialization functionality. No functional changes for clients that don't, either, but they can drop the Pandas/NumPy packages from their dependency stack.

    Proposed solution:

    The Pandas/NumPy dependencies in setup.py can move to an extras_require collection.
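
    A minimal sketch of what that could look like (illustrative only; the extra name and the requirement lists are assumptions, not the project's actual setup.py):

    # setup.py sketch: pandas/numpy become an optional extra
    from setuptools import setup

    setup(
        name='aioinflux',
        install_requires=['aiohttp'],
        extras_require={
            # installed only via: pip install aioinflux[pandas]
            'pandas': ['pandas', 'numpy'],
        },
    )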

    client.py does not use NumPy, and only uses Pandas to define PointType.

    • The Pandas import can be contained inside a try/except block.
    • The PointType definition can equally easily be made conditional.

    serialization.py makes more extensive use of dependencies.

    • make_df() and parse_df() are Pandas-only functions, and can move behind a conditional import.
    • The isinstance(data, pd.DataFrame) check can also be made conditional (e.g. if pd is not None and isinstance(data, pd.DataFrame)).
    • The same goes for the type checks in _parse_fields(): the np.integer and np.isnan() checks can be guarded by a using_np flag or a pd is None check. A sketch of the guarded-import approach follows this list.
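
    A sketch of the guarded-import approach (illustrative; the pd/np sentinels and the helper name are assumptions):

    # Hedged sketch: keep pandas/numpy optional at import time
    try:
        import pandas as pd
        import numpy as np
    except ImportError:
        pd = None
        np = None

    def _is_dataframe(data):
        # Only touch pd when it is actually installed
        return pd is not None and isinstance(data, pd.DataFrame)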

    Required effort:

    Between 2 hours and 2 days.

    Practical consideration:

    We're actively using aioinflux (apart from the compile-time issues, it works great), and I can make the time to put together a PR. The bigger issue is whether this is a desired feature for the main repository. If not, I can fork and implement it downstream.

    opened by steersbob 6
  • `path` support in constructor

    Hi, thanks again for this great package, been very helpful.

    The sync InfluxDB client has a path parameter in its constructor:

    https://influxdb-python.readthedocs.io/en/latest/api-documentation.html#influxdb.InfluxDBClient

    path (str) – path of InfluxDB on the server to connect, defaults to ‘’
    

    And the URL is built as follows:

    https://github.com/influxdata/influxdb-python/blob/d5d12499f3755199d5eedd8b363450f1cf4073bd/influxdb/client.py#L123

            self.__baseurl = "{0}://{1}:{2}{3}".format(
                self._scheme,
                self._host,
                self._port,
                self._path)
    

    In aioinflux, however, there is no path parameter, and the URL is built as follows:

    https://github.com/gusutabopb/aioinflux/blob/master/aioinflux/client.py#L163

            return f'{"https" if self.ssl else "http"}://{self.host}:{self.port}/{{endpoint}}'
    

    So it seems that I cannot connect to our Influx deployment with aioinflux, because, for reasons unknown to me, it is served under a path.

    For now, I have created a quick monkey patch as follows:

    class MonkeyPatchedInfluxDBClient(InfluxDBClient):
        def __init__(self, *args, path='/', **kwargs):
            super().__init__(*args, **kwargs)
            self._path = path
    
        @property
        def path(self):
            return self._path
    
        @property
        def url(self):
            return '{protocol}://{host}:{port}{path}{{endpoint}}'.format(
                protocol='https' if self.ssl else 'http',
                host=self.host,
                port=self.port,
                path=self.path,
            )
    

    Thanks for placing the url in a property, that was useful.

    enhancement 
    opened by carlos-jenkins 4
  • GROUP BY with Dataframe output

    When the query has a GROUP BY clause other than time, for example:

    SELECT COUNT(*) FROM "db"."rp"."measurement" WHERE time > now() - 7d GROUP BY "category"
    

    The dataframe output mode returns a dictionary instead of a dataframe. The keys appear to be strings like "measurement_name, category=A", "measurement_name, category=B", ..., and the values of the dictionary are dataframes. Is this expected?

    docs 
    opened by allisonwang 4
  • issue for query

    Hi, Gustavo,

    I tried to follow the demo in our project but it doesn't work. Could you help me figure out the reason?

    Here is my code:

    async def read_influxdb(userid, starttime, endtime):
        #logger = logging.getLogger("influxDB read demo")
        async with InfluxDBClient(host='localhost', port=8086, username='admin', password='123456', db=db_name) as client:
            user_id = '\'' + str(userid) + '\''
            sql_ecg = 'SELECT point FROM wave WHERE (person_zid = {}) AND (time > {}s) AND (time < {}s)'.format(user_id, starttime, endtime)
            await client.query(sql_ecg, chunked=True)
    
    if __name__ ==  '__main__':
        user_id = 973097
        starttime = '2018-09-26 18:08:48'
        endtime = '2018-09-27 18:08:48'
        starttime_posix = utc_to_local(starttime)
        endtime_posix = utc_to_local(endtime)
        asyncio.get_event_loop().run_until_complete(read_influxdb(user_id, starttime_posix, endtime_posix))
    

    When I run this code, I get the errors below:

    sys:1: RuntimeWarning: coroutine 'query' was never awaited
    Unclosed client session
    client_session: <aiohttp.client.ClientSession object at 0x10f78f630>
    

    Best

    question 
    opened by peiyaoli 3
  • Remove caching functionality

    Aioinflux used to provide built-in local caching functionality using Redis. However, due to low perceived usage, vendor lock-in (Redis), and the extra complexity added to Aioinflux, I have decided to remove it.

    Hopefully no one besides my past self uses this functionality. In case someone does, or in case someone is interested in caching InfluxDB query results, I will add a simple example of a caching layer using pickle. If this affects you, please let me know by commenting below.
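
    For anyone who needs a stopgap in the meantime, here is a minimal sketch of a pickle-based cache around a query (illustrative only; cached_query and the cache layout are assumptions, not aioinflux APIs):

    # Hedged sketch: cache raw query responses on disk with pickle
    import hashlib
    import os
    import pickle

    async def cached_query(client, q, cache_dir='.influx_cache'):
        os.makedirs(cache_dir, exist_ok=True)
        key = hashlib.sha1(q.encode()).hexdigest()
        path = os.path.join(cache_dir, key + '.pkl')
        if os.path.exists(path):
            with open(path, 'rb') as f:
                return pickle.load(f)
        resp = await client.query(q)
        with open(path, 'wb') as f:
            pickle.dump(resp, f)
        return resp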

    opened by gusutabopb 2
  • PEP 563 breaks user-defined class schema validation

    Background

    PEP 563 behavior is available from Python 3.7 (using from __future__ import annotations) and will become the default in Python 3.10.

    Description of the problem

    Among the changes introduced by PEP 563, the type annotations in an object's __annotations__ attribute are stored in string form. This breaks the function below, because all the checks expect type objects. https://github.com/gusutabopb/aioinflux/blob/77f9d24f493365356298a1eb904a27ce046cec27/aioinflux/serialization/usertype.py#L57-L67

    Reproduction

    • Define a user-defined class, decorated with lineprotocol():
    from typing import NamedTuple
    
    import aioinflux
    
    
    @aioinflux.lineprotocol
    class Production(NamedTuple):
        total_line: aioinflux.INT
    
    # Works well as is
    
    • Add from __future__ import annotations at the top and you get: SchemaError: Must have one or more non-empty field-type attributes [~BOOL, ~INT, ~DECIMAL, ~FLOAT, ~STR, ~ENUM] at import time.

    Possible solution

    Using https://docs.python.org/3/library/typing.html#typing.get_type_hints gives the same behavior (returns a dict with type objects as values) with or without from __future__ import annotations. Furthermore, the author of PEP 563 advises using it.
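
    A minimal sketch of the suggested change (illustrative; the helper name is an assumption and the surrounding schema-validation code is omitted):

    # Hedged sketch: resolve string annotations back to real type objects
    from typing import get_type_hints

    def _get_annotations(cls):
        # Unlike reading cls.__annotations__ directly, get_type_hints()
        # evaluates PEP 563 string annotations, with or without the
        # `from __future__ import annotations` import
        return get_type_hints(cls)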

    opened by cailloumajor 2
  • iterpoints only returns the first series when processing GROUP BY queries

    Hi

    While processing this query:

    SELECT ROUND(LAST(Free_Megabytes) / 1024) AS free, ROUND(Free_Megabytes / 1024 / (Percent_Free_Space / 100)) AS total, ROUND(Free_Megabytes / 1024 * ((100 - Percent_Free_Space) / Percent_Free_Space)) AS used, (100 - Percent_Free_Space) as percent, instance as path FROM win_disk WHERE host = 'ais-pc-16003' GROUP BY instance
    

    This is the raw data that InfluxDBClient.query returned.

    {'results': [{'series': [{'columns': ['time',
                                          'free',
                                          'total',
                                          'used',
                                          'percent',
                                          'path'],
                              'name': 'win_disk',
                              'tags': {'instance': 'C:'},
                              'values': [[1577419571000000000,
                                          94,
                                          238,
                                          144,
                                          60.49140930175781,
                                          'C:']]},
                             {'columns': ['time',
                                          'free',
                                          'total',
                                          'used',
                                          'percent',
                                          'path'],
                              'name': 'win_disk',
                              'tags': {'instance': 'D:'},
                              'values': [[1577419571000000000,
                                          1727,
                                          1863,
                                          136,
                                          7.3103790283203125,
                                          'D:']]},
                             {'columns': ['time',
                                          'free',
                                          'total',
                                          'used',
                                          'percent',
                                          'path'],
                              'name': 'win_disk',
                              'tags': {'instance': 'HarddiskVolume1'},
                              'values': [[1577419330000000000,
                                          0,
                                          0,
                                          0,
                                          29.292930603027344,
                                          'HarddiskVolume1']]},
                             {'columns': ['time',
                                          'free',
                                          'total',
                                          'used',
                                          'percent',
                                          'path'],
                              'name': 'win_disk',
                              'tags': {'instance': '_Total'},
                              'values': [[1577419571000000000,
                                          1821,
                                          2101,
                                          280,
                                          13.345237731933594,
                                          '_Total']]}],
                  'statement_id': 0}]}
    

    And I want to use this code to get parsed dicts:

    def dict_parser(*x, meta):
        return dict(zip(meta['columns'], x))
    
    g = iterpoints(r, dict_parser)
    

    But I only got the first group ("instance": "C:"). Below is the source of iterpoints. As you can see, the function returns on the first iteration of the loop.

    def iterpoints(resp: dict, parser: Optional[Callable] = None) -> Iterator[Any]:
        for statement in resp['results']:
            if 'series' not in statement:
                continue
            for series in statement['series']:
                if parser is None:
                    return (x for x in series['values'])
                elif 'meta' in inspect.signature(parser).parameters:
                    meta = {k: series[k] for k in series if k != 'values'}
                    meta['statement_id'] = statement['statement_id']
                    return (parser(*x, meta=meta) for x in series['values'])
                else:
                    return (parser(*x) for x in series['values'])
        return iter([])
    

    I modified this function as a workaround:

    def fixed_iterpoints(resp: dict, parser: Optional[Callable] = None):
        for statement in resp['results']:
            if 'series' not in statement:
                continue
    
            gs = []
            for series in statement['series']:
                if parser is None:
                    part = (x for x in series['values'])
                elif 'meta' in inspect.signature(parser).parameters:
                    meta = {k: series[k] for k in series if k != 'values'}
                    meta['statement_id'] = statement['statement_id']
                    part = (parser(*x, meta=meta) for x in series['values'])
                else:
                    part = (parser(*x) for x in series['values'])
    
                if len(statement['series']) == 1:
                    return part
    
                gs.append(part)
    
            return gs
        return iter([])
    
    

    It worked for me, but it returns a nested generator, which might be weird. I want to know if you have a better idea.
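
    An alternative sketch that always yields points instead of returning per-series generators (illustrative; it reuses the names from the snippets above):

    import inspect

    # Hedged sketch: flatten all statements and series into one iterator
    def iter_all_points(resp, parser=None):
        for statement in resp['results']:
            for series in statement.get('series', []):
                if parser is None:
                    yield from series['values']
                elif 'meta' in inspect.signature(parser).parameters:
                    meta = {k: series[k] for k in series if k != 'values'}
                    meta['statement_id'] = statement['statement_id']
                    for x in series['values']:
                        yield parser(*x, meta=meta)
                else:
                    for x in series['values']:
                        yield parser(*x)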

    opened by Karmenzind 2
  • Properly escape extra_tags in user type

    When adding extra_tags to a user defined object the extra tags are not properly escaped to line protocol. This PR ensures the tag value is escaped, reusing the existing tag_escape implementation. I did not alter the tag name but I'd be fine adding that in as well if requested.

    opened by iwoloschin 2
  • Chunked response to DataFrame

    First off, thank you! Great repo with excellent documentation. I use it in a Starlette project I am working on.

    In the project I've implemented a simple way to parse a pandas.DataFrame from a chunked response. It works, I added it to my fork, and I am wondering if you would welcome such a feature.

    Here is the MVP implementation in my fork

    I'll clean up the code, remove the exceptions, move it to serialization/dataframe.py, and add tests if you're OK with it.

    enhancement 
    opened by dasdachs 2
  • Jupyter and Python 3.7 compatibility

    Currently, the blocking mode won't work on Python 3.7 running on Jupyter. The code below:

    import aioinflux
    c = aioinflux.InfluxDBClient(db='mydb', mode='blocking')
    c.show_measurements()
    

    Raises RuntimeError: This event loop is already running

    This is caused by the fact that recent versions of Tornado (which is used by Jupyter/ipykernel) run an asyncio loop on the main thread by default:

    # Python 3.7
    import asyncio
    asyncio.get_event_loop()
    # <_UnixSelectorEventLoop running=True closed=False debug=False>
    
    # Python 3.6 (w/ tornado < 5)
    import asyncio
    asyncio.get_event_loop()
    # <_UnixSelectorEventLoop running=False closed=False debug=False>
    

    This is being discussed on https://github.com/jupyter/notebook/issues/3397

    From an aioinflux perspective, a possible workaround would be to start a new event loop on a background thread, use asyncio.run_coroutine_threadsafe to run the coroutine, and return a concurrent.futures.Future object that wraps the result.
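
    A minimal sketch of that workaround (illustrative; run_blocking is a hypothetical helper, not an aioinflux API):

    import asyncio
    import threading

    # Hedged sketch: run coroutines on a dedicated background event loop
    _loop = asyncio.new_event_loop()
    threading.Thread(target=_loop.run_forever, daemon=True).start()

    def run_blocking(coro):
        # run_coroutine_threadsafe returns a concurrent.futures.Future;
        # .result() blocks the calling thread until the coroutine finishes
        return asyncio.run_coroutine_threadsafe(coro, _loop).result()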

    opened by gusutabopb 2
  • UDP inserts?

    Thanks for creating this library! Does it support UDP inserts via asyncio?

    i.e.:

    udp_socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp_port_tuple = (host, udp_port)
    udp_socket.sendto(data_str, udp_port_tuple)
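
    For reference, a hedged sketch of a UDP write with plain asyncio (not an aioinflux API; 8089 is InfluxDB's default UDP port, and send_udp_line is a hypothetical helper):

    import asyncio

    async def send_udp_line(line: str, host='localhost', port=8089):
        # Fire-and-forget line protocol over UDP
        loop = asyncio.get_event_loop()
        transport, _ = await loop.create_datagram_endpoint(
            asyncio.DatagramProtocol, remote_addr=(host, port))
        transport.sendto(line.encode())
        transport.close()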

    opened by vgoklani 2
  • Do not print warning if I don't want to use pandas

    I have no intention of using pandas and don't want to see the following warning when my application starts, or to worry about suppressing it:

    "Pandas/Numpy is not available. Support for 'dataframe' mode is disabled."

    Please consider removing this warning on import.
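
    One possible middle ground (an illustrative sketch, not the library's actual code) would be to emit the message through the warnings module, so applications can filter it with warnings.filterwarnings():

    import warnings

    try:
        import pandas as pd
        import numpy as np
    except ImportError:
        pd = np = None
        # Unlike a print(), this can be silenced by the application
        warnings.warn("Pandas/Numpy is not available. "
                      "Support for 'dataframe' mode is disabled.")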

    opened by jsonbrooks 1
  • Serialization of pd.NA

    When trying to write an integer64 field, I was getting an error due to the presence of missing values. The missing values were in the form of pd.NA rather than np.nan, and they were not being excluded in the serialization.

    I made an attempt to fix this and it worked, though it might not be the most elegant solution. In the _replace function, I added a new replacement tuple to the list of replacements, very similar to the one that handles the NaNs:

    def _replace(df):
        obj_cols = {k for k, v in dict(df.dtypes).items() if v is np.dtype('O')}
        other_cols = set(df.columns) - obj_cols
        obj_nans = (f'{k}="nan"' for k in obj_cols)
        other_nans = (f'{k}=nani?' for k in other_cols)
        obj_nas = (f'{k}="<NA>"' for k in obj_cols)
        other_nas = (f'{k}=<NA>i?' for k in other_cols)
        replacements = [
            ('|'.join(chain(obj_nans, other_nans)), ''),
            ('|'.join(chain(obj_nas, other_nas)), ''),
            (',{2,}', ','),
            ('|'.join([', ,', ', ', ' ,']), ' '),
        ]
        return replacements
    

    Hope this ends up helping someone

    opened by goncas23 0
  • LICENSE missing in PyPI

    Hi, if you find the time for some maintenance, could you include the LICENSE file in the next PyPI release? This simplifies integration of the package through Yocto/BitBake into embedded Linux applications. Best regards

    opened by HerrMuellerluedenscheid 0
  • serialisation.mapping - bugfix datetime objects

    datetime objects were handled incorrectly, which resulted in a time offset from UTC.

    The corrected implementation assumes UTC if no tzinfo object is attached to the datetime. Furthermore, the offset is now taken from the tzinfo object.
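
    An illustrative sketch of the corrected behavior (simplified; not the PR's actual diff):

    from datetime import datetime, timezone

    def to_nanoseconds(dt: datetime) -> int:
        # Naive datetimes are assumed to be UTC; aware datetimes are
        # converted using the offset from their attached tzinfo
        if dt.tzinfo is None:
            dt = dt.replace(tzinfo=timezone.utc)
        # timestamp() already applies the tzinfo offset; scale to ns
        # (going through microseconds keeps integer precision)
        return int(dt.timestamp() * 1e6) * 1000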

    opened by miili 0
  • Still maintained?

    Just wondering if this library is still actively maintained, since it hasn't had a commit or merged PR since last summer. No judgment, just wondering, since I like the idea of this client vs influx-python.

    opened by benlachman 3
Releases(v0.9.0)
  • v0.9.0(Jul 11, 2019)

    Added

    • Add support for custom path to InfluxDB (#24)
    • Add support for Decimal serialization (812c1a8, 100d931)
    • Add chunk count on chunked response debugging message (b9e85ad)

    Changed

    • Refactor rm_none option implementation (5735b51, 13062ed, 89bae37)
    • Make enum typevars more strict (f177212)
  • v0.8.0(May 10, 2019)

  • v0.7.1(Apr 11, 2019)

    This version is backwards compatible with v0.7.0.

    Fixed

    • Don't cache error responses (be7b87c)

    Docs

    • Minor wording changes

    Internal

    • Minor internal changes
  • v0.7.0(Apr 11, 2019)

    This version is mostly backwards compatible with v0.6.x (with the exception of the query patterns functionality).

    Added

    • Redis-based caching functionality. See the docs for details.
    • Timeout functionality (#21 by @SuminAndrew)

    Changed

    • Move ClientSession creation logic outside __init__. It is now easier to use advanced aiohttp.ClientSession options. See the docs for details.

    Removed

    • Query patterns functionality

    Internal

    • Refactor test suite
    • Various other internal changes
  • v0.6.1(Feb 1, 2019)

    This version is backwards compatible with v0.6.0.

    Fixed

    • Type annotation error in Python 3.6 (febfe47)
    • Suppress the "The object should be created from async function" warning from aiohttp 3.5 (da950e9)
  • v0.6.0(Feb 1, 2019)

    Added

    • Support serializing NaN integers in pandas 0.24+ (See blog post) (1c55217)
    • Support for using namedtuple with iterpoints (be93c53)

    Changed

    • [BREAKING] Changed signature of parser argument of iterpoints from (x, meta) to (*x, meta) (bd93c53)

    Removed

    • [BREAKING] Removed iterable mode and InfluxDBResult / InfluxDBChunkedResult. Use iterpoints instead. (592c5ed)
    • Deprecated set_query_pattern (1d36b07)

    Docs

    • Various improvements (8c6cbd3, ce46596, b7db169, ba3edae)
  • v0.5.1(Jan 24, 2019)

    This version is backwards compatible with v0.5.0.

    Fixed

    • Fix type annotations
    • Fix internal API inconsistencies

    Docs

    • Complete API section
    • Add proper Sphinx links
    • Update/fix various sections
  • v0.5.0(Jan 24, 2019)

    Changed

    • [BREAKING] Removed DataPoint functionality in favor of simpler and more flexible @lineprotocol decorator. See the docs for details.

    Docs

    • Added detailed @lineprotocol usage
  • v0.4.1(Nov 22, 2018)

    Fixed

    • Fixed bug when doing multi-statement queries when using dataframe mode

    Docs

    • Added note regarding handling of multi-statement/multi-series queries when using dataframe mode
  • v0.4.0(Oct 22, 2018)

    Added

    • Added ability to write datapoint objects. See the docs for details.
    • Added bytes output format. This is to facilitate the addition of a caching layer on top of InfluxDB. (cb4e3d1)

    Changed

    • Change write method signature to match the /write endpoint docs
      • Allow writing to non-default retention policy (#14)
      • (precision is not fully implemented yet)
    • Renamed raw output format to json. Most users should be unaffected by this. (cb4e3d1)

    Fixed

    • Improved docs

    Internal

    • Refactored serialization/parsing functionality into a subpackage
    • Fix test warnings (2e42d50)
  • v0.3.4(Sep 3, 2018)

    • Fixed output='dataframe' parsing bug (#15)
    • Removed tag column -> categorical dtype conversion functionality
    • Moved documentation to Read The Docs
    • Added two query patterns (671013b)
    • Added this CHANGELOG
  • v0.3.3(Jul 23, 2018)

  • v0.3.2(May 3, 2018)

    • Fix parsing bug for string ending in a backslash (db8846ec6037752fe4fff8d88aa8fa989bc69452)
    • Add InfluxDBWriteError exception class (d8d0a0181f3e05b6e754cd309015b73a4a0b1fb9)
    • Make InfluxDBClient.db attribute optional (039e0886f3b2469bc2d2edd8b3da34b08b31b1db)
  • v0.3.1(Apr 29, 2018)

    • Fix bug where timezone-unaware datetime input was assumed to be in local time (#11 / a8c81b788a16030a70c8f2a07ebc36b34924f8d5)
    • Minor improvement in dataframe parsing (1e33b92)
  • v0.3.0(Apr 24, 2018)

    Highlights:

    • Drop Pandas/Numpy requirement (#9)
    • Improved iteration support (816a722)
    • Implement tag/key value caching (9a65787)
    • Improve dataframe serialization
      • Speed improvements (ddc9ecc)
      • Memory usage improvements (a2b58bd)
      • Disable concatenating of dataframes of the same measurement when grouping by tag (331a0c9)
      • Queries now return tag columns with pd.Categorical dtype (efdea98)
      • Writes now automatically identify pd.Categorical dtype columns as tag columns (ddc9ecc)

    API changes:

    • mode attribute was "split" into mode and output. Default behavior remains the same (async / raw).
    • Iteration is now made easier through the iterable mode and InfluxDBResult and InfluxDBChunkedResult classes
  • v0.2.0(Mar 6, 2018)

    Highlights:

    • Documentation is now complete
    • Improved iteration support (via iter_resp) (cfffbf5)
    • Allow users to add custom query patterns
    • Add support for positional arguments in query patterns
    • Reimplement __del__ (40d0a69 / #7)
    • Improve/debug dataframe parsing (7beeb53 / 96d78a4)
    • Improve write error message (7972946) (by @miracle2k)

    API changes:

    • Rename AsyncInfluxDBClient to InfluxDBClient (54d98c9)
    • Change return format of chunked responses (related: cfffbf5 / #6)
    • Make some __init__ arguments keyword-only (5d2edf6)
  • v0.1.2(Feb 28, 2018)

    Bug fix release. Highlights:

    • Add __aenter__/__aexit__ support (5736446) (by @Kargathia)
    • Add HTTPS URL support (49b8e89) (by @miracle2k)
    • Add Unix socket support (8a8b069) (by @carlos-jenkins)
    • Fix bug where tags were not being added to DataFrames when querying (a9f1d82)
  • v0.1.1(Nov 10, 2017)

    First bug fix release. Highlights:

    • Add error handling for chunked responses (db93c2034d9f100f13cf08d4c96e88587f2dd9f1)
    • Fix DataFrame tag parsing bug (aa02faa6808d9cef751974943cb36e8d0c18cbf6)
    • Fix boolean field parsing bug (4c2bff966c7c640c5182c39a0316a5b22c9977ea)
    • Increase test coverage
  • v0.1.0(Oct 4, 2017)

Owner

Gustavo Bezerra