Official Python low-level client for Elasticsearch

Overview

Python Elasticsearch Client

Official low-level client for Elasticsearch. Its goal is to provide common ground for all Elasticsearch-related code in Python; because of this it tries to be opinion-free and easily extensible.

Installation

Install the elasticsearch package with pip:

$ python -m pip install elasticsearch

If your application uses async/await in Python, you can install the client with the async extra:

$ python -m pip install elasticsearch[async]

Read more about how to use asyncio with this project.

Compatibility

The library is compatible with all Elasticsearch versions since 0.90.x but you have to use a matching major version:

For Elasticsearch 7.0 and later, use the major version 7 (7.x.y) of the library.

For Elasticsearch 6.0 and later, use the major version 6 (6.x.y) of the library.

For Elasticsearch 5.0 and later, use the major version 5 (5.x.y) of the library.

For Elasticsearch 2.0 and later, use the major version 2 (2.x.y) of the library, and so on.

The recommended way to set your requirements in your setup.py or requirements.txt is:

# Elasticsearch 7.x
elasticsearch>=7.0.0,<8.0.0

# Elasticsearch 6.x
elasticsearch>=6.0.0,<7.0.0

# Elasticsearch 5.x
elasticsearch>=5.0.0,<6.0.0

# Elasticsearch 2.x
elasticsearch>=2.0.0,<3.0.0

If you need multiple versions installed at the same time, older versions are also released as the elasticsearch2 and elasticsearch5 packages.

Example use

Simple use-case:

>>> from datetime import datetime
>>> from elasticsearch import Elasticsearch

# by default we connect to localhost:9200
>>> es = Elasticsearch()

# create an index in elasticsearch, ignore status code 400 (index already exists)
>>> es.indices.create(index='my-index', ignore=400)
{'acknowledged': True, 'shards_acknowledged': True, 'index': 'my-index'}

# datetimes will be serialized
>>> es.index(index="my-index", id=42, body={"any": "data", "timestamp": datetime.now()})
{'_index': 'my-index',
 '_type': '_doc',
 '_id': '42',
 '_version': 1,
 'result': 'created',
 '_shards': {'total': 2, 'successful': 1, 'failed': 0},
 '_seq_no': 0,
 '_primary_term': 1}

# but not deserialized
>>> es.get(index="my-index", id=42)['_source']
{'any': 'data', 'timestamp': '2019-05-17T17:28:10.329598'}
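Since the client does not decode datetimes on the way back, you can parse the returned ISO 8601 string yourself with the standard library. A minimal sketch, reusing the example document above:

```python
from datetime import datetime

# _source as returned by es.get() above; the timestamp comes back as a string
source = {"any": "data", "timestamp": "2019-05-17T17:28:10.329598"}

# Python 3.7+: datetime.fromisoformat() parses it back into a datetime object
ts = datetime.fromisoformat(source["timestamp"])
print(ts.year, ts.month, ts.day)  # → 2019 5 17
```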

Full documentation.

Elastic Cloud (and SSL) use-case:

>>> from elasticsearch import Elasticsearch
>>> es = Elasticsearch(cloud_id="<some_long_cloud_id>", http_auth=('elastic','yourpassword'))
>>> es.info()

Using SSL Context with a self-signed cert use-case:

>>> from elasticsearch import Elasticsearch
>>> from ssl import create_default_context

>>> context = create_default_context(cafile="path/to/cafile.pem")
>>> es = Elasticsearch("https://elasticsearch.url:port", ssl_context=context, http_auth=('elastic','yourpassword'))
>>> es.info()
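Conversely, if you need to skip certificate verification entirely (for example against a throwaway development cluster), a sketch using only the stdlib ssl module; the resulting context could be passed as ssl_context. This disables all certificate checking, so it is only appropriate for development:

```python
import ssl

# Build a context that skips certificate verification entirely.
# check_hostname must be disabled before verify_mode can be set to CERT_NONE.
context = ssl.create_default_context()
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE

# Then: es = Elasticsearch("https://elasticsearch.url:port", ssl_context=context)
```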

Features

The client's features include:

  • translating basic Python data types to and from JSON (datetimes are not decoded for performance reasons)
  • configurable automatic discovery of cluster nodes
  • persistent connections
  • load balancing (with pluggable selection strategy) across all available nodes
  • failed connection penalization (time based - failed connections won't be retried until a timeout is reached)
  • support for ssl and http authentication
  • thread safety
  • pluggable architecture
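As an illustration of the pluggable selection strategy, a round-robin selector can be sketched in pure Python. This is a simplified stand-in that only mimics the select(connections) shape of the client's selectors, not the client's actual implementation:

```python
class RoundRobinSelector:
    """Toy node selector: cycle through live connections in order.

    A selector exposes a select(connections) method that picks which
    node the next request goes to; this sketch keeps a cursor and wraps
    around the list of live nodes.
    """

    def __init__(self):
        self._index = -1

    def select(self, connections):
        # Advance the cursor and wrap around the list of live nodes
        self._index = (self._index + 1) % len(connections)
        return connections[self._index]


selector = RoundRobinSelector()
nodes = ["http://node1:9200", "http://node2:9200", "http://node3:9200"]
picks = [selector.select(nodes) for _ in range(4)]
# picks cycles through node1, node2, node3, then back to node1
```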

Elasticsearch-DSL

For a higher-level client library with a more limited scope, have a look at elasticsearch-dsl, a more Pythonic library sitting on top of elasticsearch-py.

elasticsearch-dsl provides a more convenient and idiomatic way to write and manipulate queries by mirroring the terminology and structure of the Elasticsearch JSON DSL, while exposing the whole range of the DSL from Python, either directly using defined classes or through queryset-like expressions.

It also provides an optional persistence layer for working with documents as Python objects in an ORM-like fashion: defining mappings, retrieving and saving documents, wrapping the document data in user-defined classes.

License

Copyright 2020 Elasticsearch B.V

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Build Status

https://readthedocs.org/projects/elasticsearch-py/badge/?version=latest&style=flat
https://clients-ci.elastic.co/job/elastic+elasticsearch-py+master/badge/icon
Comments
  • urllib3 > 1.10 breaks connection

    When using latest urllib3 (1.11 as of now), http connection breaks

    AttributeError: 'module' object has no attribute 'HTTPMessage'

    WARNING:elasticsearch:GET http:/server:443/es/index/_search [status:N/A request:13.243s]
    Traceback (most recent call last):
      File "/Library/Python/2.7/site-packages/elasticsearch/connection/http_urllib3.py", line 74, in perform_request
        response = self.pool.urlopen(method, url, body, retries=False, headers=self.headers, **kw)
      File "/Library/Python/2.7/site-packages/urllib3/connectionpool.py", line 557, in urlopen
        body=body, headers=headers)
      File "/Library/Python/2.7/site-packages/urllib3/connectionpool.py", line 388, in _make_request
        assert_header_parsing(httplib_response.msg)
      File "/Library/Python/2.7/site-packages/urllib3/util/response.py", line 49, in assert_header_parsing
        if not isinstance(headers, httplib.HTTPMessage):
    AttributeError: 'module' object has no attribute 'HTTPMessage'

    Reverting back to urllib3==1.10.4 fixes the problem

    setup.py specifies:

        install_requires=['urllib3>=1.8, <2.0']

    Perhaps it should be changed to

        install_requires=['urllib3>=1.8, <1.11']

    until this is fixed.

    opened by katrielt 28
  • Search Template Example

    Hi! Could someone put here and example of how to put and use a search template, please? I need one with mustache conditional but I can't make it work

    Thanks!

    opened by Garito 27
  • Proxy settings

    Hi there, I don't know if I'm in the right place, but I haven't found anything about my problem on the web. I have to use Elasticsearch in Python from behind a proxy server. How can I pass the proxy settings down to Elasticsearch? I tried something like this without success:

    es = Elasticsearch([es_url], _proxy = "http://proxyurl:port", _proxy_headers = { 'basic_auth': 'USERNAME:PASSWORD' })
    res = es.search(index=index, body=request, search_type="count")
    

    Any help would be very nice. Thanks!

    opened by svamet 27
  • TransportError(406, 'Content-Type header [] is not supported') - where to find requirements.txt

    Hello guys, could you please help me with how to set this library to use the master version of the Python ES module? Where can I modify that requirements.txt file?

    (I'm now trying to learn how to use ES with Python to create some visualisations in Kibana, so I'm trying to import some data from an online StarWars API :) ) I'm getting transport error 406. I have found this solution but I don't know where that requirements.txt file is located.

    Thank you in advance

    opened by WakeDown-M 26
  • ssl verification fails despite verify_certs=false

    In elasticsearch version 6.6.1 and elasticsearch-dsl version 6.1.0, ssl verification seems to ignore the verify_certs option. When set to True, the cert is still verified and fails on self-signed certs.

    In elasticsearch version 5.5.1 and elasticsearch-dsl version 5.4.0, the verify_certs option works as expected.

    client = Elasticsearch(
        hosts=['localhost'],
        verify_certs=False,
        timeout=60,
    )

    elasticsearch.exceptions.SSLError: ConnectionError([SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:777)) caused by: SSLError([SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:777))

    opened by gnarlyman 26
  • Deprecation warnings in 7.15.0 pre-releases

    If you're seeing this you likely received a deprecation warning from a 7.15.0 pre-release. Thanks for trying out in-development software!

    The v8.0.0 roadmap includes a list of breaking changes that will be implemented to make the Python Elasticsearch client easier to use and more discoverable. To make the upgrade from the 7.x client to the 8.0.0 client as smooth as possible for developers we're deprecating options and usages that will be removed in either 8.0 or 9.0.

    This also means that you'll get an early preview for the great things to come in the 8.0.0 client starting in 7.x, which we're pretty excited about!

    Which APIs are affected?

    All APIs will emit deprecation warnings for positional argument use in 7.15.0.

    The following APIs will start emitting deprecation warnings regarding the body parameters. This list may change leading up to the 7.15.0 release.

    • search
    • index
    • create
    • update
    • scroll
    • clear_scroll
    • search_mvt
    • indices.create

    The following APIs will start emitting deprecation warnings regarding doc_type parameters.

    • nodes.hot_threads
    • license.post_start_trial

    What is being deprecated?

    Starting in 7.15.0 the following features will be deprecated and are scheduled for removal in 9.0.0:

    Positional arguments for APIs are deprecated

    Using keyword arguments has always been recommended when using the client, but starting in 7.15.0 any use of positional arguments will emit a deprecation warning.

    # ✅ Supported usage:
    es.search(index="index", ...)
    
    # ❌ Deprecated usage:
    es.search("index", ...)
    

    The body parameter for APIs is deprecated

    For JSON requests, each field within the top-level JSON object will become its own parameter of the API with full type hinting.

    # ✅ New usage:
    es.search(query={...})
    
    # ❌ Deprecated usage:
    es.search(body={"query": {...}})
    

    For non-JSON requests or requests where the JSON body is itself an addressable object (like a document) each API will have the parameter renamed to a more semantic name:

    # ✅ New usage:
    es.index(document={...})
    
    # ❌ Deprecated usage:
    es.index(body={...})
    

    The doc_type parameter for non-Index APIs

    Using doc_type for APIs that aren't related to indices or search is deprecated. Instead you should use the type parameter. See https://github.com/elastic/elasticsearch-py/pull/1713 for more context for this change.

    For APIs that are related to indices or search the doc_type parameter isn't deprecated client-side, however mapping types are deprecated in Elasticsearch and will be removed in 8.0.

    # ✅ New usage:
    es.nodes.hot_threads(type="cpu")
    
    # ❌ Deprecated usage:
    es.nodes.hot_threads(doc_type="cpu")
    
    opened by sethmlarson 24
  • helpers.scan: TypeError: search() got an unexpected keyword argument 'doc_type'

    I'm using helpers.scan function to retrieve data. I passed in doc_type = log to it (following the online resource here). But I got this error:

    <ipython-input-53-dffeaecb48f3> in export(self, outputFiles, host, indexDbName, docType, queryString, queryJson, size, fields, delimiter)
         41             w.writeheader()
         42             try:
    ---> 43                 for row in scanResponse:
         44                     for key,value in row['_source'].iteritems():
         45                         row['_source'][key] = unicode(value)
    
    
    C:\ProgramData\Anaconda3\lib\site-packages\elasticsearch\helpers\actions.py in scan(client, query, scroll, raise_on_error, preserve_order, size, request_timeout, clear_scroll, scroll_kwargs, **kwargs)
        431     # initial search
        432     resp = client.search(
    --> 433         body=query, scroll=scroll, size=size, request_timeout=request_timeout, **kwargs
        434     )
        435     scroll_id = resp.get("_scroll_id")
    
    C:\ProgramData\Anaconda3\lib\site-packages\elasticsearch\client\utils.py in _wrapped(*args, **kwargs)
         82                 if p in kwargs:
         83                     params[p] = kwargs.pop(p)
    ---> 84             return func(*args, params=params, **kwargs)
         85 
         86         return _wrapped
    
    

    TypeError: search() got an unexpected keyword argument 'doc_type'

    I use 'log' as the doc type and I'm using Elasticsearch server 6.3.0. Did I set the doc type wrong? Thank you!

    opened by qilds123 19
  • fix python 3.x str.decode exception

    Python 3.x doesn't support str.decode(), causing the code to fail with an AttributeError. This code change allows the Python module to continue to work with older 2.x Python while being 3.x friendly as well. Any alternate suggestions?
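    One common way to make such code work on both Python 2 and 3 is to decode only when the value is actually a bytes object. A rough sketch of the idea, not necessarily the exact change in this PR:

```python
def ensure_text(value, encoding="utf-8"):
    # bytes needs decoding; str on Python 3 is already text and has no .decode()
    if isinstance(value, bytes):
        return value.decode(encoding)
    return value

print(ensure_text(b"hello"), ensure_text("world"))  # → hello world
```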

    opened by mmarshallgh 19
  • urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f9e5db5d208>: Failed to establish a new connection: [Errno 113] No route to host

    I often have this error, but script works :

    GET http://x.x.x.x:9200/_nodes/_all/http [status:N/A request:2.992s]
    Traceback (most recent call last):
      File "/usr/lib/python3.6/site-packages/urllib3/connection.py", line 171, in _new_conn
        (self._dns_host, self.port), self.timeout, **extra_kw)
      File "/usr/lib/python3.6/site-packages/urllib3/util/connection.py", line 79, in create_connection
        raise err
      File "/usr/lib/python3.6/site-packages/urllib3/util/connection.py", line 69, in create_connection
        sock.connect(sa)
    OSError: [Errno 113] No route to host

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/usr/lib/python3.6/site-packages/elasticsearch/connection/http_urllib3.py", line 172, in perform_request
        response = self.pool.urlopen(method, url, body, retries=Retry(False), headers=request_headers, **kw)
      File "/usr/lib/python3.6/site-packages/urllib3/connectionpool.py", line 638, in urlopen
        _stacktrace=sys.exc_info()[2])
      File "/usr/lib/python3.6/site-packages/urllib3/util/retry.py", line 343, in increment
        raise six.reraise(type(error), error, _stacktrace)
      File "/usr/lib/python3.6/site-packages/urllib3/packages/six.py", line 686, in reraise
        raise value
      File "/usr/lib/python3.6/site-packages/urllib3/connectionpool.py", line 600, in urlopen
        chunked=chunked)
      File "/usr/lib/python3.6/site-packages/urllib3/connectionpool.py", line 354, in _make_request
        conn.request(method, url, **httplib_request_kw)
      File "/usr/lib64/python3.6/http/client.py", line 1239, in request
        self._send_request(method, url, body, headers, encode_chunked)
      File "/usr/lib64/python3.6/http/client.py", line 1285, in _send_request
        self.endheaders(body, encode_chunked=encode_chunked)
      File "/usr/lib64/python3.6/http/client.py", line 1234, in endheaders
        self._send_output(message_body, encode_chunked=encode_chunked)
      File "/usr/lib64/python3.6/http/client.py", line 1026, in _send_output
        self.send(msg)
      File "/usr/lib64/python3.6/http/client.py", line 964, in send
        self.connect()
      File "/usr/lib/python3.6/site-packages/urllib3/connection.py", line 196, in connect
        conn = self._new_conn()
      File "/usr/lib/python3.6/site-packages/urllib3/connection.py", line 180, in _new_conn
        self, "Failed to establish a new connection: %s" % e)
    urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f9e5db5d208>: Failed to establish a new connection: [Errno 113] No route to host

    opened by laurentvv 19
  • Set the Content-Type header on requests

    Maybe I am misunderstanding the code, but it seems as if the actual HTTP requests (via either urllib3 or requests) do not set the Content-Type. I would expect that, at the very least, if the JSONSerializer was used the Content-Type header would be set to "application/json", and something like "text/plain" if the payload is not JSON.

    If my understanding is incorrect, then my apologies and feel free to close this ticket. If this is a valid concern and you are looking for a PR then I would be able to supply something in the coming weeks.

    As a side note, it looks like you have added support for arbitrary headers in the master branch, I think this could probably be used to set the Content-Type, but it seems to me like something more fundamental that you would always want set.

    opened by bgroff 19
  • Regression in 6.4: Scroll fails with large scroll_id

    Changes to the scroll method in 6.4 submit the scroll ID as part of the URL. This causes:

    elasticsearch.exceptions.RequestError: RequestError(400, 'too_long_frame_exception', 'An HTTP line is larger than 4096 bytes.')

    This happens when a large number of shards is involved, creating a large scroll ID.

    https://github.com/elastic/elasticsearch-py/blob/99effab913c29ce341b3199f042bcb45cf8291a2/elasticsearch/client/init.py#L1341

    opened by ChrisPortman 18
  • Backport aiohttp conditional HEAD bug workaround

    In older versions of aiohttp, HEAD requests don't mark the connection as not reusable, which is why GET was used instead. But this means indices.exists becomes indices.get which can be prohibitively slow on indices with large mappings, so we want to use HEAD for aiohttp versions that don't have this bug.

    This backports the following pull requests to the 7.17 branch to make it easier for Rally to upgrade to elasticsearch-py 8.x:

    • https://github.com/elastic/elastic-transport-python/pull/55
    • https://github.com/elastic/elastic-transport-python/pull/58
    7.x 
    opened by pquentin 0
  •  BadRequestError(400, 'mapper_parsing_exception', 'Root mapping definition has unsupported parameters:

    Elasticsearch version (bin/elasticsearch --version): 8.5.0
    elasticsearch-py version (elasticsearch.__versionstr__): 8.4.3

    Please make sure the major version matches the Elasticsearch server you are running.

    Description of the problem including expected versus actual behavior: I ran Elasticsearch successfully on my local machine, then moved on to creating indices, and it turned out not to work.

    Steps to reproduce:

    request_body={
      "mappings": {
        "properties": {
          "id": {
            "type":  "text"
          },
          "title": {
            "type":  "text"
          },
          "title_vector": {
            "type":  "dense_vector",
            "dims":768
          }
        }
      }
    }
    client.indices.create(index=index_name, mappings=request_body)
    

    Full info of error message

    ~/anaconda3/envs/###/lib/python3.7/site-packages/elasticsearch/_sync/client/_base.py in perform_request(self, method, path, params, headers, body)
        320 
        321             raise HTTP_EXCEPTIONS.get(meta.status, ApiError)(
    --> 322                 message=message, meta=meta, body=resp_body
        323             )
        324 
    
    BadRequestError: BadRequestError(400, 'mapper_parsing_exception', 'Root mapping definition has unsupported parameters:  [mappings : {properties={title_vector={dims=768, type=dense_vector}, id={type=text}, title={type=text}}}]')
    

    Have a nice day. Thank you for viewing my issue and response <3
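    The error text hints that the whole {'mappings': ...} object was passed where only the mapping body belongs: the 8.x client's mappings= parameter expects the inner object, so wrapping it again produces "Root mapping definition has unsupported parameters: [mappings ...]". Assuming that diagnosis, a sketch of the fix (not verified against this cluster) is to unwrap the outer key before calling indices.create:

```python
request_body = {
    "mappings": {
        "properties": {
            "id": {"type": "text"},
            "title": {"type": "text"},
            "title_vector": {"type": "dense_vector", "dims": 768},
        }
    }
}

# mappings= expects the inner mapping object, not the whole request body:
mappings = request_body["mappings"]
# client.indices.create(index=index_name, mappings=mappings)
```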

    opened by lacls 1
  • Error connect host in elasticsearch is automatic

    Hello everyone, I currently have a problem with Elasticsearch: after about a few days, the Elasticsearch gateway is automatically disconnected despite doing nothing, in a Linux (Ubuntu 18.04) environment. I can't solve the problem. Can anyone give me a solution?

    Thank you for watching!!

    opened by lehoangHUST 1
  • Rank Eval templates

    Allows templates to be used in rank_eval, as described in the documentation (https://www.elastic.co/guide/en/elasticsearch/reference/current/search-rank-eval.html#_template_based_ranking_evaluation).

    Example:

    import elasticsearch
    
    es = elasticsearch.Elasticsearch("http://localhost:9200")
    
    templates = [
        {
            "id": "match_one_field_query",
            "template": {
                "inline": {"query": {"match": {"{{field}}": {"query": "{{query_string}}"}}}}
            },
        }
    ]
    
    requests = [
        {
            "id": "amsterdam_query",
            "ratings": [
                {"_index": "my-index-000001", "_id": "doc1", "rating": 0},
                {"_index": "my-index-000001", "_id": "doc2", "rating": 3},
                {"_index": "my-index-000001", "_id": "doc3", "rating": 1},
            ],
            "template_id": "match_one_field_query",
            "params": {"query_string": "amsterdam", "field": "text"},
        }
    ]
    
    metric = {"mean_reciprocal_rank": {"k": 20, "relevant_rating_threshold": 1}}
    
    result = es.rank_eval(
        requests=requests, metric=metric, index="my-index-000001", templates=templates
    )
    
    
    opened by Shadesfear 2
  • Missing some parameters in open_point_in_time

    Elasticsearch version : 8.x

    elasticsearch-py version (elasticsearch.__versionstr__): 8.4.3

    Description of the problem including expected versus actual behavior:

    Steps to reproduce:

    We can still use the routing parameter with open_point_in_time in Elasticsearch 8.x; however, I got a TypeError from elasticsearch-py. I think it's a bug. The traceback is here:

    Traceback (most recent call last):
      File "es-py-sample\main.py", line 20, in <module>
        print_hi('PyCharm')
      File "es-py-sample\main.py", line 11, in print_hi
        res = es.open_point_in_time(index="wikidata",keep_alive="1m",routing="1")
      File "es-py-sample\venv\lib\site-packages\elasticsearch\_sync\client\utils.py", line 414, in wrapped
        return api(*args, **kwargs)
    TypeError: open_point_in_time() got an unexpected keyword argument 'routing'
    

    Up to elasticsearch-py 7.17, the routing parameter and some other parameters were supported in open_point_in_time. https://github.com/elastic/elasticsearch-py/blob/7.17/elasticsearch/client/init.py#L2301-L2325

    After 8.0, routing and some other parameters are missing. https://github.com/elastic/elasticsearch-py/blob/8.0/elasticsearch/_sync/client/init.py#L2768

    Elasticsearch 8.4 still supports the routing, expand_wildcards, and preferences params in the open_point_in_time API. See the rest-api-spec in the Elasticsearch repo. These params are also in the Point in time API reference and the source in ES.

    Unfortunately, only two params are in the elasticsearch-specification repo; I think that is the root cause...

    What do you think?

    opened by johtani 1
Releases: v8.5.1