PostgreSQL database access simplified

Overview

Queries: PostgreSQL Simplified

Queries is a BSD-licensed, opinionated wrapper around the psycopg2 library for interacting with PostgreSQL.

The popular psycopg2 package is a full-featured Python client. Unfortunately, as a developer, you often find yourself repeating the same setup steps in every application that uses it. Queries aims to reduce the complexity of psycopg2 while adding features that make writing PostgreSQL client applications both fast and easy. Check out the Usage section below to see how easy it can be.

Key features include:

  • Simplified API
  • Support for Python 2.7+ and 3.4+
  • PyPy support via psycopg2cffi
  • Asynchronous support for Tornado
  • Connection information provided by URI
  • Query results delivered as generator-based iterators
  • Automatically registered data-type support for UUIDs, Unicode and Unicode Arrays
  • Ability to directly access psycopg2 connection and cursor objects
  • Internal connection pooling


Documentation

Documentation is available at https://queries.readthedocs.org

Installation

Queries is available via PyPI and can be installed with easy_install or pip:

pip install queries

Usage

Queries provides a session based API for interacting with PostgreSQL. Simply pass in the URI of the PostgreSQL server to connect to when creating a session:

session = queries.Session("postgresql://postgres@localhost:5432/postgres")

Queries' built-in connection pooling will reuse connections when possible, lowering the overhead of connecting and reconnecting.

When specifying a URI, if you omit the username and database name, Queries will use the current OS username for both. You can also omit the URI entirely to connect to localhost on port 5432 as the current OS user, against a database named for that user. For example, if your username is fred and you omit the URI when calling queries.query, the constructed URI would be postgresql://fred@localhost:5432/fred.
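As an illustration, the default URI construction described above can be sketched in plain Python. The helper name default_uri is hypothetical and not part of the Queries API; this is only a sketch of the documented behavior.

```python
import getpass


def default_uri(user=None, host="localhost", port=5432):
    """Sketch of the default URI construction: the current OS
    username is used for both the user and the database name."""
    user = user or getpass.getuser()
    return "postgresql://{0}@{1}:{2}/{0}".format(user, host, port)


# With an OS username of "fred", this yields
# "postgresql://fred@localhost:5432/fred"
print(default_uri(user="fred"))
```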

If you'd rather use individual values for the connection, the queries.uri() method provides a quick and easy way to create a URI to pass into the various methods.

>>> queries.uri("server-name", 5432, "dbname", "user", "pass")
'postgresql://user:pass@server-name:5432/dbname'

Environment Variables

Queries currently supports the following environment variables for tweaking configuration values:

  • QUERIES_MAX_POOL_SIZE - Modify the maximum size of the connection pool (default: 1)
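For example, the pool-size override described above can be read the way a library typically would at startup. This is only a minimal sketch of reading that environment variable, not the library's actual code:

```python
import os

# QUERIES_MAX_POOL_SIZE overrides the maximum connection pool size;
# fall back to the documented default of 1 when it is unset
max_pool_size = int(os.environ.get("QUERIES_MAX_POOL_SIZE", 1))
print(max_pool_size)
```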

Using the queries.Session class

To execute queries or call stored procedures, you start by creating an instance of the queries.Session class. It can act as a context manager, meaning you can use it with the with keyword and it will take care of cleaning up after itself. For more information on the with keyword and context managers, see PEP 343.

In addition to the queries.Session.query and queries.Session.callproc methods, which are similar to the simple API methods, the queries.Session class provides access to the underlying psycopg2 connection and cursor objects.

Using queries.Session.query

The following example shows how a queries.Session object can be used as a context manager to query a database table:

>>> import pprint
>>> import queries
>>>
>>> with queries.Session() as session:
...     for row in session.query('SELECT * FROM names'):
...         pprint.pprint(row)
...
{'id': 1, 'name': u'Jacob'}
{'id': 2, 'name': u'Mason'}
{'id': 3, 'name': u'Ethan'}

Using queries.Session.callproc

This example uses queries.Session.callproc to execute a stored procedure and then pretty-prints the single row results as a dictionary:

>>> import pprint
>>> import queries
>>> with queries.Session() as session:
...   results = session.callproc('chr', [65])
...   pprint.pprint(results.as_dict())
...
{'chr': u'A'}

Asynchronous Queries with Tornado

In addition to providing a Pythonic, synchronous client API for PostgreSQL, Queries provides a very similar asynchronous API for use with Tornado. The only major API difference between queries.TornadoSession and queries.Session is that the TornadoSession.query and TornadoSession.callproc methods return the entire result set instead of acting as iterators over the results. The following example uses TornadoSession.query in an asynchronous Tornado web application to send a JSON payload with the query result set.

from tornado import gen, ioloop, web
import queries

class MainHandler(web.RequestHandler):

    def initialize(self):
        self.session = queries.TornadoSession()

    @gen.coroutine
    def get(self):
        results = yield self.session.query('SELECT * FROM names')
        self.finish({'data': results.items()})
        results.free()

application = web.Application([
    (r"/", MainHandler),
])

if __name__ == "__main__":
    application.listen(8888)
    ioloop.IOLoop.instance().start()

Inspiration

Queries is inspired by Kenneth Reitz's awesome work on requests.

History

Queries is a fork and enhancement of pgsql_wrapper, which can be found in the main GitHub repository of Queries as tags prior to version 1.2.0.

Comments
  • Password with %5D login failed

    Hi,

    I found that my password includes %5D and logging in fails. The same password works fine when I use the psycopg2 library directly.

    Is there something wrong in my code, or something I overlooked?

    Thanks.

    opened by lycheng 3
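A likely explanation for the report above: components of a connection URI are percent-decoded, so a password that literally contains %5D is decoded to ]. One workaround, sketched here independently of Queries itself with a hypothetical password value, is to percent-encode the raw password before embedding it in the URI:

```python
from urllib.parse import quote, unquote

# Hypothetical password that literally contains the characters "%5D"
raw_password = "secret%5D"

# Percent-encode the raw password so URI parsing decodes it back
# to the original value; the "%" itself is escaped as "%25"
encoded = quote(raw_password, safe="")
uri = "postgresql://user:{0}@localhost:5432/dbname".format(encoded)

print(uri)
print(unquote(encoded) == raw_password)
```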
  • python script cannot run by windows task scheduler with a simple "import queries"

    The Python script runs fine on its own, but not via Windows Task Scheduler, where it exits with (0x1). To reproduce, try the following script:

    import logging
    import pandas as pd
    import sqlalchemy
    import pprint
    # import queries
    from urllib.request import Request, urlopen
    import datetime

    logging.basicConfig(filename='./logs/mylog.log', filemode='a',
                        format='%(asctime)s.%(msecs)03d %(name)s %(levelname)s %(message)s',
                        datefmt='%Y-%m-%d,%H:%M:%S', level=logging.INFO)

    logging.info("===== process started =====")

    opened by meilao3rd 2
  • psycopg2 as a dependency

    Installing Queries via pip installs psycopg2 as a dependency. However, on many platforms the psycopg2-binary package must be used instead, because PostgreSQL development packages are unavailable (for instance, on AWS Lambda and Azure Functions).

    Therefore I believe that either psycopg2 should be removed as a direct dependency, letting the user manually add the appropriate package, or a queries-binary variant should be offered.

    opened by louis77 2
  • Handle resultless statements when rowcount > 0

    When, for example, an INSERT without a RETURNING clause is executed, attempting to access the queries.Result object's rows raises a psycopg2.ProgrammingError with the message 'no results to fetch'. The existing check in Queries is based on cursor.rowcount and thus fails in this case, as rowcount contains the number of rows affected, not necessarily the number returned.

    This commit adds handling of said exception and matches the behavior as if the rowcount were 0.

    Can be replicated by accessing the queries.Result object for an INSERT that adds one or more rows without returning any results.

    Example:

    >>> result = session.query('insert into some_table (id) values (123)')
    >>> list(result)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/mnt/storage1/queries/queries/results.py", line 48, in __iter__
        for row in self.cursor:
      File "/mnt/storage1/queries/env/lib/python3.6/site-packages/psycopg2-2.7.5-py3.6-linux-x86_64.egg/psycopg2/extras.py", line 117, in __iter__
        first = next(res)
    psycopg2.ProgrammingError: no results to fetch
    

    Handling of the error should probably be abstracted.

    opened by nvllsvm 2
  • Close the connection on a connecting error

    If an error message is returned from the server during the connection phase, the socket remains open after the error has been raised and continues to be polled forever, causing excessive CPU use.

    opened by cknave 2
  • Add easy way to retrieve info about session state

    I am using TornadoSession with hundreds of queries, and if a result somehow isn't freed, tracking it down is a nightmare. I put this together and was wondering if it would be useful to have in the core of the application (if there is a better way, please let me know):

    
    import queries
    
    
    class MySession(queries.TornadoSession):
    
        def query(self, sql, parameters=None):
            self.sql = sql
            self.parameters = parameters
    
            return super(MySession, self).query(sql=sql, parameters=parameters)
    
        def info(self):
            info = []
    
            for pid, pool in self._pool_manager._pools.items():
                i = pool.idle_connections
                b = [pool.connections[k] for k in pool.connections
                     if pool.connections[k].busy]
                u = [pool.connections[k].used_by().sql for k in
                     pool.connections if pool.connections[k].busy]
    
                info.append({
                    'pid': pid,
                    'idle': len(i),
                    'busy': len(b),
                    'running_queries': u
                })
    
            return info
    
    

    If you ever wanted info about the session, you'd just call session.info()

    enhancement 
    opened by emehrkay 2
  • Is there an executemany using the TornadoSession?

    I have seen the concurrent queries example that wraps the SQL statements in a list, but that only works if you know beforehand how many queries you are going to send. Is there a method to executemany, such as I would do below?

    cars = ((1, 'Audi', 52642), (2, 'Mercedes', 57127), (3, 'Skoda', 9000),
            (4, 'Volvo', 29000), (5, 'Bentley', 350000), (6, 'Citroen', 21000),
            (7, 'Hummer', 41400), (8, 'Volkswagen', 21600))

    cur.executemany(query, cars)

    Thanks in advance!

    opened by Knights22 2
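One workaround for the question above, assuming parameters are passed straight through to psycopg2's cursor.execute as they are in a plain psycopg2 session: build a single multi-row INSERT with repeated %s placeholders and flatten the values. The helper build_multirow_insert is hypothetical, not a Queries API:

```python
def build_multirow_insert(table, columns, rows):
    """Build one INSERT statement covering every row, using
    psycopg2-style %s placeholders, plus the flattened parameters."""
    row_template = "({0})".format(", ".join(["%s"] * len(columns)))
    placeholders = ", ".join([row_template] * len(rows))
    sql = "INSERT INTO {0} ({1}) VALUES {2}".format(
        table, ", ".join(columns), placeholders)
    params = [value for row in rows for value in row]
    return sql, params


cars = ((1, 'Audi', 52642), (2, 'Mercedes', 57127))
sql, params = build_multirow_insert('cars', ['id', 'name', 'price'], cars)
# sql    -> "INSERT INTO cars (id, name, price) VALUES (%s, %s, %s), (%s, %s, %s)"
# params -> [1, 'Audi', 52642, 2, 'Mercedes', 57127]
```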
  • Stacking tornado_session.Results.close calls doesn't close any connection

    Maybe I am doing something wrong, but if I simply wait until later in the execution stack to free a Tornado result, the connection never closes.

    conn = queries.queries.TornadoSession(pool_max_size=3)
    ....
    res1 = yield conn.result('some query')
    res2 = yield conn.result('some query')
    res3 = yield conn.result('some query')
    
    res1.free()
    res2.free()
    res3.free()
    
    res4 = yield conn.result('some query')
    # PoolFullError: Pool <queries.pool.Pool object at $ID> is at its maximum capacity
    res4.free()
    

    If I were to free a result before using the connection again, things work fine. (I even tested with a counter by inspecting the conn_pool_manager._pools dict)

    Am I missing something?

    opened by emehrkay 2
  • Allow iterating over empty query results

    Added the ability to iterate over empty query results, and a couple of (probably incorrect) tests for this condition.

    Previously this caused psycopg2.ProgrammingError: scroll destination out of bounds. One could work around it by checking for a truthy result before iterating, but I'm not keen on that.

    opened by den-t 2
  • Requirement psycopg2<2.8 prevents Python 3.8 support

    See https://github.com/psycopg/psycopg2/issues/854: the current requirement of psycopg2>=2.5.1,<2.8 installs psycopg2 2.7.7, but that version is incompatible with Python 3.8 due to the above issue, which was fixed in 2.8.

    opened by dparker2 1
  • Broken link in doc

    In https://queries.readthedocs.io/en/2.0.0/, the link to "Module Index" (https://queries.readthedocs.io/en/2.0.0/py-modindex.html) is broken.

    opened by BFJonk 1
  • Support connection_factory?

    I'll start with the question, as I am not entirely certain it is even possible: is there a way to create a queries.Session using a connection_factory? And if not, would it be possible to add such support?

    Or perhaps there is another way to accomplish the following?

    In cases where I wanted a cursor that supported both NamedTuple results and a LoggingCursor, I would combine the two cursor classes, and I would also pass in a connection_factory that sets up the connection to use logging for the cursor. Is there some way of doing this in Queries?

    from psycopg2.extras import NamedTupleCursor, LoggingCursor, LoggingConnection
    
    
    class MyLoggingCursor(LoggingCursor):
        def execute(self, query, vars=None):
            self.timestamp = time.time()
            return super(MyLoggingCursor, self).execute(query, vars)
    
        def callproc(self, procname, vars=None):
            self.timestamp = time.time()
            return super(MyLoggingCursor, self).callproc(procname, vars)
    
    
    class MyLoggingConnection(LoggingConnection):
        def filter(self, msg, curs):
            duration = int((time.time() - curs.timestamp) * 1000)
            output = f"{msg}  ==> {curs.rowcount} rows, {duration:d} ms"
            return output
    
        def cursor(self, *args, **kwargs):
            kwargs.setdefault('cursor_factory', MixinLoggedNamedTupleCursor)
            return LoggingConnection.cursor(self, *args, **kwargs)
    
    
    class MixinLoggedNamedTupleCursor(MyLoggingCursor, NamedTupleCursor):
        pass
    
    
    db_conn = psycopg2.connect(host=db_host, port=db_port,
                               user=db_user, password=db_pass,
                               database=db_name, 
                               connect_timeout=timeout,
                               connection_factory=MyLoggingConnection
                               )
    db_conn.initialize(logger)
    
    opened by bandophahita 0
  • Simple query fails if no rows found

    The most simple example from the docs:

    for row in session.query('SELECT * FROM foo'):
        print(row)
    

    Fails when there are no rows in the table:

    Traceback (most recent call last):
      File "venv/lib/python3.9/site-packages/queries/results.py", line 45, in __iter__
        raise StopIteration
    StopIteration
    

    Expected result: the for loop should simply complete without errors.

    opened by borislopezaraoz 0
  • TornadoSession as a context manager

    Hello! First of all, thank you for the work you put into your lib!

    We spotted strange behavior when using queries.TornadoSession as a context manager: the connection pool is not freed when exiting the with block. (This isn't the case for queries.Session.)

    It is true that the documentation encourages calling result.free() when working with asynchronous sessions, but IMHO it would be a nice feature if both Session classes behaved the same.

    Cheers!

    enhancement 
    opened by Morreski 3
Releases(2.0.0)
  • 2.0.0(Jan 29, 2018)

    • REMOVED support for Python 2.6
    • FIXED CPU Pegging bug: Cleanup IOLoop and internal stack in TornadoSession on connection error. In the case of a connection error, the failure to do this caused CPU to peg @ 100% utilization looping on a non-existent file descriptor. Thanks to cknave for his work on identifying the issue, proposing a fix, and writing a working test case.
    • Move the integration tests to use a local docker development environment
    • Added new methods queries.pool.Pool.report and queries.pool.PoolManager.report for reporting pool status.
    • Added new methods to queries.pool.Pool for returning a list of busy, closed, executing, and locked connections.
Owner
Gavin M. Roy
CTO @aweber. Technology-minded geek who enjoys Python & Erlang, music and gaming. Author of RabbitMQ in Depth.