aiopg is a library for accessing a PostgreSQL database from the asyncio framework.

Overview

aiopg


aiopg is a library for accessing a PostgreSQL database from the asyncio (PEP-3156/tulip) framework. It wraps asynchronous features of the Psycopg database driver.

Example

import asyncio
import aiopg

dsn = 'dbname=aiopg user=aiopg password=passwd host=127.0.0.1'

async def go():
    pool = await aiopg.create_pool(dsn)
    async with pool.acquire() as conn:
        async with conn.cursor() as cur:
            await cur.execute("SELECT 1")
            ret = []
            async for row in cur:
                ret.append(row)
            assert ret == [(1,)]

loop = asyncio.get_event_loop()
loop.run_until_complete(go())

Example of SQLAlchemy optional integration

import asyncio
from aiopg.sa import create_engine
import sqlalchemy as sa

metadata = sa.MetaData()

tbl = sa.Table('tbl', metadata,
    sa.Column('id', sa.Integer, primary_key=True),
    sa.Column('val', sa.String(255)))

async def create_table(engine):
    async with engine.acquire() as conn:
        await conn.execute('DROP TABLE IF EXISTS tbl')
        await conn.execute('''CREATE TABLE tbl (
                                  id serial PRIMARY KEY,
                                  val varchar(255))''')

async def go():
    async with create_engine(user='aiopg',
                             database='aiopg',
                             host='127.0.0.1',
                             password='passwd') as engine:

        async with engine.acquire() as conn:
            await conn.execute(tbl.insert().values(val='abc'))

            async for row in conn.execute(tbl.select()):
                print(row.id, row.val)

loop = asyncio.get_event_loop()
loop.run_until_complete(go())

Please use:

$ make test

to run the project's unit tests. See https://aiopg.readthedocs.io/en/stable/contributing.html for details on how to set up your environment to run the tests.

Comments
  • close cannot be used while an asynchronous query is underway

    I'm getting this error if I do multiple fetches. I'm not sure what I'm doing wrong:

    
    Exception ignored in: <function ResultProxy.__init__.<locals>.<lambda> at 0x7f979638fe18>
    Traceback (most recent call last):
      File "/home/anthbot/Pyvenv/anthbotv5/lib/python3.5/site-packages/aiopg/sa/result.py", line 235, in <lambda>
        self._weak = weakref.ref(self, lambda wr: cursor.close())
      File "/home/anthbot/Pyvenv/anthbotv5/lib/python3.5/site-packages/aiopg/cursor.py", line 50, in close
        self._impl.close()
    psycopg2.ProgrammingError: close cannot be used while an asynchronous query is underway
    

    This is the query:

    query = sa.select([role_item]).where(role_item.c.auto_role_id == auto_role_id)
    return await fetch_ite(query)
    

    This is the function:

    async def _fetch(query, type=DBType.ONE):
        async with engine.acquire() as conn:
            res = await conn.execute(query)
            if type == DBType.ONE:
                return await res.fetchone()
            elif type == DBType.ALL:
                return await res.fetchall()
            elif type == DBType.ITE:
                return res
            return None
    

    Any help would be appreciated.
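
    One hedged workaround sketch (an assumption about the cause, not confirmed in this thread: returning the ResultProxy after engine.acquire() has released the connection lets the weakref finalizer close the cursor mid-query): fetch the rows while the connection is still held.

    async def _fetch_all(query):
        async with engine.acquire() as conn:
            res = await conn.execute(query)
            # materialize the rows before the connection goes back to the pool
            return await res.fetchall()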

    opened by ingmferrer 52
  • Make engine.acquire() reentrant

    Currently, acquiring a connection from the pool in a nested manner is deadlock-prone. See #113, #126 and #127.

    Sometimes (e.g. when implementing middleware, form validators, or any other feature plugged into an existing framework) it is simply difficult to pass the connection instance down the call stack.

    I propose a new feature where engine.acquire() returns the same already-acquired connection if it is called again inside an engine.acquire() context by the same asyncio task.

    This would mean the following test would pass:

    
    async def test_nested_acquire(event_loop):
        async with engine.acquire() as conn1:
            async with engine.acquire() as conn2:
                assert conn1 == conn2
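
    A minimal sketch of the idea outside aiopg (illustration only, assuming Python 3.7+ contextvars; not the proposed aiopg API):

    import contextvars
    from contextlib import asynccontextmanager

    _task_conn = contextvars.ContextVar("task_conn", default=None)

    @asynccontextmanager
    async def reentrant_acquire(engine):
        conn = _task_conn.get()
        if conn is not None:
            # nested call from the same task: reuse the held connection
            yield conn
            return
        async with engine.acquire() as conn:
            token = _task_conn.set(conn)
            try:
                yield conn
            finally:
                _task_conn.reset(token)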
    
    opened by mpaolini 50
  • connection.cancel can hang

    We noticed this call stack on our prod server; it hung our main thread and broke the server:

    Jul 30 16:48:45.114            File "/usr/local/lib/python3.6/asyncio/base_events.py", line 422 in run_forever
    Jul 30 16:48:45.114            File "/usr/local/lib/python3.6/asyncio/base_events.py", line 1434 in _run_once
    Jul 30 16:48:45.114            File "/usr/local/lib/python3.6/asyncio/events.py", line 145 in _run
    Jul 30 16:48:45.114            File "/usr/local/lib/python3.6/site-packages/aiopg/connection.py", line 226 in cancel
    

    I suggest loop.run_in_executor'ing it, as this method is supposedly thread-safe. Ideally there would be a timeout that applies here as well; I'm not sure whether the DSN connect_timeout applies.
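
    A rough sketch of that suggestion (assumption: psycopg2's blocking connection.cancel() is thread-safe, so it can run in a worker thread; raw_conn stands for the underlying psycopg2 connection):

    import asyncio

    async def cancel_with_timeout(raw_conn, timeout=5.0):
        loop = asyncio.get_event_loop()
        # run the blocking cancel() in the default executor and bound it with a timeout
        await asyncio.wait_for(loop.run_in_executor(None, raw_conn.cancel), timeout)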

    pr-available 
    opened by thehesiod 30
  • aiopg 1.1.0+ is incompatible with SQLAlchemy 1.4

    aiopg will automatically pull in SQLAlchemy 1.4.0 (released March 15), which leads to confusing errors due to the incompatibility. Make sure to pin SQLAlchemy==1.3.23.

    Two specific issues I noticed:

    1. Failure when doing a select. What I found is that the query is absolutely valid and Postgres returns what is expected, but the RowProxy fails to map the columns.
    self = <[InvalidRequestError("Ambiguous column name 'None' in result set! try 'use_labels' option on select statement.") raised in repr()] RowProxy object at 0x7fccab231460>
    key = 'brands_id'
    
         def __getitem__(self, key):
             try:
    >           processor, obj, index = self._keymap[key]
    E           KeyError: 'brands_id'
    
    /usr/local/lib/python3.7/site-packages/aiopg/sa/result.py:27: KeyError
    
    During handling of the above exception, another exception occurred:
    
    tables = None, sa_engine = <aiopg.sa.engine.Engine object at 0x7fccab2813d0>
    
    2. Failure when deleting from a table. Honestly, I'm not sure if aiopg is responsible for that; it may very well be psycopg2.
    E           psycopg2.errors.SyntaxError: syntax error at or near "["
    E           LINE 1: ...ETE FROM materials WHERE materials.product_id IN ([POSTCOMPI...
    E                                                                        ^
    
    /usr/local/lib/python3.7/site-packages/aiopg/connection.py:106: SyntaxError
    
    opened by Velikolay 23
  • ensure connection is released

    I've seen cases where close can throw psycopg2.ProgrammingError: close cannot be used while an asynchronous query is underway, which previously caused the connection not to be returned to the pool, i.e. a connection would "leak" from the pool. See the discussion in: https://github.com/aio-libs/aiopg/issues/364

    This also fixes running the tests on OSX.

    opened by thehesiod 22
  • Add transaction model aiopg

    Todo task

    • [x] three isolation level pg
    • [x] Nested transactions pg
    • [x] examples (new style)
    • [x] examples (old style)
    • [x] finalization warning
    • [x] test cover
    • [x] will be implemented in issues https://github.com/aio-libs/aiopg/issues/407
    • [x] add __slots__
    • [x] fix aiopg/transaction.py:21
    opened by vir-mir 20
  • Remove callbacks from a bad file descriptor immediately

    Previously, the callback removal was postponed until the .close() call, and sometimes this .close() call cleared a callback that another connection object had set on the same file descriptor (if the file descriptor was closed by psycopg2 when an error occurred, and then reused for a new connection).

    The effect of this was, when making tens of parallel connections, an error in one of them made another time out.

    Now, if a psycopg2 error occurs, and the file descriptor has gone bad, the callback is removed immediately, and the aiopg connection object 'forgets' about this file descriptor (self._fileno = None), and doesn't try to remove callbacks in .close().

    If there were no errors, callbacks are still removed during .close() call.
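
    A rough sketch of the described behaviour (simplified illustration, not the PR's actual code):

    def on_psycopg2_error(loop, conn, fileno):
        # the fd has gone bad: drop the reader callback right away so that a later
        # .close() on this connection cannot clear a callback registered by a new
        # connection that happens to reuse the same file descriptor
        loop.remove_reader(fileno)
        conn._fileno = None  # 'forget' the fd; .close() will skip callback removal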

    Fixes #138.

    opened by not-even 19
  • Queries start failing after some time

    I am facing an issue where my DB queries start failing after some time. Here is the code I use to create a pool:

    class PostgresStore:
        _pool = None
        _connection_params = {}
    
        @classmethod
        def connect(cls, database:str, user:str, password:str, host:str, port:int):
            """
            Sets connection parameters
            """
            cls._connection_params['database'] = database
            cls._connection_params['user'] = user
            cls._connection_params['password'] = password
            cls._connection_params['host'] = host
            cls._connection_params['port'] = port
    
        @classmethod
        def use_pool(cls, pool:Pool):
            """
            Sets an existing connection pool instead of using connect() to make one
            """
            cls._pool = pool
    
        @classmethod
        @coroutine
        def get_pool(cls) -> Pool:
            """
            Yields:
                existing db connection pool
            """
            if len(cls._connection_params) < 5:
                raise ConnectionError('Please call SQLStore.connect before calling this method')
            if not cls._pool:
                cls._pool = yield from create_pool(**cls._connection_params)
            return cls._pool
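
    For reference, a typical per-query usage sketch with such a pool (hypothetical; the querying code is not shown in the report):

    async def fetch_one(query, args=()):
        pool = await PostgresStore.get_pool()
        async with pool.acquire() as conn:
            async with conn.cursor() as cur:
                await cur.execute(query, args)
                return await cur.fetchone()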
    

    I use aiohttp to create a web server, and once the server has been up and running for a few hours, DB queries start failing. All other APIs keep working fine. Here are the logs:

    2015-07-14 16:19:18,531 ERROR [base_events:698] Fatal read error on socket transport
    protocol: 
    transport: 
    Traceback (most recent call last):
      File "/usr/lib/python3.4/asyncio/selector_events.py", line 459, in _read_ready
        data = self._sock.recv(self.max_size)
    TimeoutError: [Errno 110] Connection timed out
    2015-07-14 17:26:42,070 ERROR [base_events:698] Fatal error on aiopg connection: bad state in _ready callback
    connection: 
    2015-07-14 17:26:58,017 ERROR [base_events:698] Fatal error on aiopg connection: bad state in _ready callback
    connection: 
    2015-07-14 17:27:02,606 ERROR [base_events:698] Fatal error on aiopg connection: bad state in _ready callback
    connection: 
    2015-07-14 17:27:03,226 ERROR [base_events:698] Fatal error on aiopg connection: bad state in _ready callback
    connection: 
    2015-07-14 17:27:14,691 ERROR [base_events:698] Fatal error on aiopg connection: bad state in _ready callback
    connection: 
    2015-07-14 18:47:51,427 ERROR [base_events:698] Fatal read error on socket transport
    protocol: 
    transport: 
    Traceback (most recent call last):
      File "/usr/lib/python3.4/asyncio/selector_events.py", line 459, in _read_ready
        data = self._sock.recv(self.max_size)
    TimeoutError: [Errno 110] Connection timed out
    2015-07-14 18:50:02,499 ERROR [base_events:698] Fatal read error on socket transport
    protocol: 
    transport: 
    Traceback (most recent call last):
      File "/usr/lib/python3.4/asyncio/selector_events.py", line 459, in _read_ready
        data = self._sock.recv(self.max_size)
    TimeoutError: [Errno 110] Connection timed out
    
    opened by nerandell 19
  • Can't acquire connection from pool

    Hi!

    I have run into a problem with aiopg when acquiring a new connection from the pool. I used this simple code:

    import asyncio
    
    import aiopg.sa
    import sqlalchemy as sa
    
    import tornado.web
    import tornado.platform.asyncio
    
    async def create_engine():
        return await aiopg.sa.create_engine(
            'dbname=dbname user=user password=password host=127.0.0.1',
            echo=True
        )
    
    loop = asyncio.get_event_loop()
    engine = loop.run_until_complete(create_engine())
    metadata = sa.MetaData()
    
    
    t1 = sa.Table('t1', metadata,
                  sa.Column('id', sa.Integer, primary_key=True),
                  sa.Column('name', sa.String(255), nullable=False))
    
    
    t2 = sa.Table('t2', metadata,
                  sa.Column('id', sa.Integer, primary_key=True),
                  sa.Column('name', sa.String(255), nullable=False))
    
    
    async def fetch_t2():
        async with engine.acquire() as conn:
            await conn.execute(t2.select().where(t2.c.id == 4))
    
    
    class ReqHandler(tornado.web.RequestHandler):
        async def post(self):
            async with engine.acquire() as conn:
                async with conn.begin():
                    await conn.execute(t1.select().where(t1.c.id == 1))
                    await fetch_t2()
                    await conn.execute(t1.insert().values(name='some name'))
    
            self.write("Hello world!\n")
    
    
    app = tornado.web.Application([
        (r'/', ReqHandler)
    ])
    
    if __name__ == '__main__':
        tornado.platform.asyncio.AsyncIOMainLoop().install()
        app.listen(8080)
        loop.run_forever()
    

    Then I apply a load of 100 concurrent requests, and after a while the service hangs. It seems to me that aiopg cannot get another connection from the pool, because when I increase the maximum pool size the load test passes.
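
    One way to avoid the nested acquire (a sketch, assuming the hang comes from fetch_t2() taking a second connection from the pool while the first one is still held):

    async def fetch_t2(conn):
        # reuse the caller's connection instead of acquiring a second one
        await conn.execute(t2.select().where(t2.c.id == 4))

    # ...and in ReqHandler.post: await fetch_t2(conn)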

    I have used the following:

    aiopg-0.9.2 psycopg2-2.6.1 SQLAlchemy-1.0.12 tornado-4.3

    invalid 
    opened by serg666 18
  • Migrate from "yield from" to await (TypeError: object Engine can't be used in 'await' expression)

    Hi, I replaced "yield from" with "await" in my code and received this traceback: "TypeError: object Engine can't be used in 'await' expression"

    async def db_psql_middleware(app, handler):
        async def middleware(request):
            db = app.get('db_psql')
            if not db:
                app['db_psql'] = db = await create_engine(app['psql_dsn'], minsize=1, maxsize=5)
            request.app['db_psql'] = db
            return (await handler(request))
        return middleware
    
    
    async def psql_select(request):
        with (await request.app['db_psql']) as conn:
            result = await conn.execute(models.select())
    

    Traceback

    [2015-09-17 14:50:29 +0300] [26045] [ERROR] Error handling request
    Traceback (most recent call last):
      File "/Users/vvv/src/backend-tools/python/asyncio/venv35/lib/python3.5/site-packages/aiohttp/server.py", line 272, in start
        yield from self.handle_request(message, payload)
      File "/Users/vvv/src/backend-tools/python/asyncio/venv35/lib/python3.5/site-packages/aiohttp/web.py", line 85, in handle_request
        resp = yield from handler(request)
      File "/Users/vvv/src/backend-tools/python/asyncio/app.py", line 39, in middleware
        return (await handler(request))
      File "/Users/vvv/src/backend-tools/python/asyncio/app.py", line 46, in psql_select
        with (await request.app['db_psql']) as conn:
    TypeError: object Engine can't be used in 'await' expression
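
    For reference, a sketch of the await-era equivalent (assuming aiopg.sa: the Engine itself is not awaitable, but engine.acquire() can be used with async with):

    async def psql_select(request):
        engine = request.app['db_psql']
        async with engine.acquire() as conn:
            # acquire a connection from the engine instead of awaiting the engine
            result = await conn.execute(models.select())
            return await result.fetchall()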
    
    opened by vvv-v13 18
  • Fix "Unable to detect disconnect when using NOTIFY/LISTEN", Closes #249

    What do these changes do?

    When the connection to the DB is closed, any task pending on await connection.notifies.get() receives an exception instead of hanging forever.

    Are there changes in behavior for the user?

    Now connection.notifies.get() can raise a psycopg2 exception, whereas before this would hang forever.
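
    A usage sketch under the new behaviour (an illustration, assuming a dedicated LISTEN connection; the exact exception type is hedged as psycopg2.Error):

    import psycopg2

    async def listen_forever(conn):
        async with conn.cursor() as cur:
            await cur.execute("LISTEN my_channel")
            while True:
                try:
                    msg = await conn.notifies.get()
                except psycopg2.Error:
                    # the connection was closed; stop instead of waiting forever
                    break
                print("notification:", msg.payload)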

    Related issue number

    #249

    Checklist

    • [x] I think the code is well written
    • [x] Unit tests for the changes exist
    • [x] Documentation reflects the changes
    • [x] If you provide code modification, please add yourself to CONTRIBUTORS.txt
      • The format is <Name> <Surname>.
      • Please keep alphabetical order, the file is sorted by names.
    • [ ] Add a new news fragment into the CHANGES folder
      • name it <issue_id>.<type> (e.g. 588.bugfix)
      • if you don't have an issue_id change it to the pr id after creating the PR
      • ensure type is one of the following:
        • .feature: Signifying a new feature.
        • .bugfix: Signifying a bug fix.
        • .doc: Signifying a documentation improvement.
        • .removal: Signifying a deprecation or removal of public API.
        • .misc: A ticket has been closed, but it is not of interest to users.
      • Make sure to use full sentences with correct case and punctuation, for example: Fix issue with non-ascii contents in doctest text files.
    opened by gjcarneiro 17
  • AttributeError: 'Connection' object has no attribute 'send'

    Describe the bug

    When using _ContextManager.send, an AttributeError is raised.

    To Reproduce

    import aiopg
    import asyncio
    
    
    async def test():
        con = aiopg.connect()
        con.send(None)
    
    asyncio.run(test())
    

    Expected behavior

    It shouldn't crash
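
    For reference, a sketch of the supported way to obtain a connection: await the factory (or use it with async with) rather than driving the internal _ContextManager with .send() (illustration only, assuming default libpq connection parameters):

    import asyncio

    import aiopg

    async def main():
        async with aiopg.connect() as conn:
            async with conn.cursor() as cur:
                await cur.execute("SELECT 1")
                print(await cur.fetchone())

    asyncio.run(main())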

    Logs/tracebacks

    Traceback (most recent call last):
      File "/tmp/t.py", line 9, in <module>
        asyncio.run(test())
      File "/usr/lib/python3.9/asyncio/runners.py", line 44, in run
        return loop.run_until_complete(main)
      File "/usr/lib/python3.9/asyncio/base_events.py", line 642, in run_until_complete
        return future.result()
      File "/tmp/t.py", line 7, in test
        con.send(None)
      File "/home/llist/.local/lib/python3.9/site-packages/aiopg/utils.py", line 61, in send
        return self._coro.send(value)
    AttributeError: 'Connection' object has no attribute 'send'
    

    Python Version

    Python 3.9.2
    

    aiopg Version

    Name: aiopg
    Version: 1.4.0
    Summary: Postgres integration with asyncio.
    Home-page: https://aiopg.readthedocs.io
    Author: Andrew Svetlov
    Author-email: [email protected]
    License: BSD
    Location: /home/llist/.local/lib/python3.9/site-packages
    Requires: psycopg2-binary, async-timeout
    Required-by:
    

    OS

    Linux home 5.10.0-19-amd64 #1 SMP Debian 5.10.149-2 (2022-10-21) x86_64 GNU/Linux

    Additional context

    No response

    Code of Conduct

    • [X] I agree to follow the aio-libs Code of Conduct
    bug 
    opened by llistochek 0
  • Add CodeQL workflow for GitHub code scanning

    Hi aio-libs/aiopg!

    This is a one-off automatically generated pull request from LGTM.com :robot:. You might have heard that we’ve integrated LGTM’s underlying CodeQL analysis engine natively into GitHub. The result is GitHub code scanning!

    With LGTM fully integrated into code scanning, we are focused on improving CodeQL within the native GitHub code scanning experience. In order to take advantage of current and future improvements to our analysis capabilities, we suggest you enable code scanning on your repository. Please take a look at our blog post for more information.

    This pull request enables code scanning by adding an auto-generated codeql.yml workflow file for GitHub Actions to your repository — take a look! We tested it before opening this pull request, so all should be working :heavy_check_mark:. In fact, you might already have seen some alerts appear on this pull request!

    Where needed and if possible, we’ve adjusted the configuration to the needs of your particular repository. But of course, you should feel free to tweak it further! Check this page for detailed documentation.

    Questions? Check out the FAQ below!

    FAQ

    Click here to expand the FAQ section

    How often will the code scanning analysis run?

    By default, code scanning will trigger a scan with the CodeQL engine on the following events:

    • On every pull request — to flag up potential security problems for you to investigate before merging a PR.
    • On every push to your default branch and other protected branches — this keeps the analysis results on your repository’s Security tab up to date.
    • Once a week at a fixed time — to make sure you benefit from the latest updated security analysis even when no code was committed or PRs were opened.

    What will this cost?

    Nothing! The CodeQL engine will run inside GitHub Actions, making use of your unlimited free compute minutes for public repositories.

    What types of problems does CodeQL find?

    The CodeQL engine that powers GitHub code scanning is the exact same engine that powers LGTM.com. The exact set of rules has been tweaked slightly, but you should see almost exactly the same types of alerts as you were used to on LGTM.com: we’ve enabled the security-and-quality query suite for you.

    How do I upgrade my CodeQL engine?

    No need! New versions of the CodeQL analysis are constantly deployed on GitHub.com; your repository will automatically benefit from the most recently released version.

    The analysis doesn’t seem to be working

    If you get an error in GitHub Actions that indicates that CodeQL wasn’t able to analyze your code, please follow the instructions here to debug the analysis.

    How do I disable LGTM.com?

    If you have LGTM’s automatic pull request analysis enabled, then you can follow these steps to disable the LGTM pull request analysis. You don’t actually need to remove your repository from LGTM.com; it will automatically be removed in the next few months as part of the deprecation of LGTM.com (more info here).

    Which source code hosting platforms does code scanning support?

    GitHub code scanning is deeply integrated within GitHub itself. If you’d like to scan source code that is hosted elsewhere, we suggest that you create a mirror of that code on GitHub.

    How do I know this PR is legitimate?

    This PR is filed by the official LGTM.com GitHub App, in line with the deprecation timeline that was announced on the official GitHub Blog. The proposed GitHub Action workflow uses the official open source GitHub CodeQL Action. If you have any other questions or concerns, please join the discussion here in the official GitHub community!

    I have another question / how do I get in touch?

    Please join the discussion here to ask further questions and send us suggestions!

    opened by lgtm-com[bot] 0
  • Close connection when _fill_free_pool is cancelled.

    Cancelling _fill_free_pool before a new connection is added to self._free prevents it from being cleaned up, which causes a lingering connection and eventually makes the PostgreSQL server run out of sockets.

    What do these changes do?

    Are there changes in behavior for the user?

    Related issue number

    Checklist

    • [ ] I think the code is well written
    • [ ] Unit tests for the changes exist
    • [ ] Documentation reflects the changes
    • [ ] Add a new news fragment into the CHANGES folder
      • name it <issue_id>.<type> (e.g. 588.bugfix)
      • if you don't have an issue_id change it to the pr id after creating the PR
      • ensure type is one of the following:
        • .feature: Signifying a new feature.
        • .bugfix: Signifying a bug fix.
        • .doc: Signifying a documentation improvement.
        • .removal: Signifying a deprecation or removal of public API.
        • .misc: A ticket has been closed, but it is not of interest to users.
      • Make sure to use full sentences with correct case and punctuation, for example: Fix issue with non-ascii contents in doctest text files.
    opened by iksteen 1
  • Roadmap after psycopg3 release

    Is your feature request related to a problem?

    With the release of psycopg3 and its async capabilities, isn't this wrapper around the good old synchronous psycopg2 unnecessary now?

    Describe the solution you'd like

    At least I'd expect aiopg to mention its relationship to psycopg3 and maybe announce a sunset plan.

    Describe alternatives you've considered

    For the period during which other libraries that rely on aiopg haven't yet moved to psycopg3, maybe aiopg2 should become a thin wrapper around psycopg3? E.g. SQLAlchemy won't support psycopg3 until its 2.0 release: https://github.com/sqlalchemy/sqlalchemy/milestone/88

    Additional context

    No response

    Code of Conduct

    • [X] I agree to follow the aio-libs Code of Conduct
    enhancement 
    opened by n1ngu 2
  • SAConnection twophase methods are broken

    Describe the bug

    1. begin_twophase() is not a context manager
    2. recover_twophase() is broken
    3. rollback_prepared() is broken

    To Reproduce

    Just use these methods.

    Expected behavior

    1. begin_twophase() behaves as a context manager, using commit_prepared() at the end OR docs changed to reflect current implementation
    2. does not crash on accessing ResultProxy
    3. does not crash on string formatting

    Logs/tracebacks

    1. begin_twophase() is not a context manager

    According to docs: https://aiopg.readthedocs.io/en/stable/sa.html#aiopg.sa.SAConnection.begin_twophase

    coroutine async-with begin_twophase(xid=None)

    but in reality:

        async with conn.begin_twophase() as transaction:
    AttributeError: __aenter__
    

    2. recover_twophase() is broken

    result is not awaited:

      File "/Users/ovmikhaylov/work/aiopg/aiopg/sa/connection.py", line 363, in recover_twophase
        return [row[0] for row in result]
    TypeError: 'ResultProxy' object is not iterable
    

    3. rollback_prepared() is broken

    Due to error in f-string:

      File "/Users/ovmikhaylov/work/aiopg/aiopg/sa/connection.py", line 368, in rollback_prepared
        await self.execute(f"ROLLBACK PREPARED {xid:!r}")
    ValueError: Invalid format specifier
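
    For reference, the repr conversion in an f-string is spelled {xid!r}; {xid:!r} parses "!r" as a format spec and raises ValueError. A guess at the intended line (not the project's actual patch):

        await self.execute(f"ROLLBACK PREPARED {xid!r}")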
    

    Python Version

    3.10
    

    aiopg Version

    1.3.3
    

    OS

    Darwin gmbp.local 20.6.0 Darwin Kernel Version 20.6.0: Tue Oct 12 18:33:42 PDT 2021; root:xnu-7195.141.8~1/RELEASE_X86_64 x86_64

    Additional context

    No response

    Code of Conduct

    • [X] I agree to follow the aio-libs Code of Conduct
    bug 
    opened by gistart 0
Releases(v1.4.0)
  • v1.4.0(Oct 27, 2022)

    What's Changed

    • Add python 3.11 and drop python 3.6 support by @Pliner in https://github.com/aio-libs/aiopg/pull/892

    Full Changelog: https://github.com/aio-libs/aiopg/compare/v1.3.5...v1.4.0

  • v1.2.1(Mar 25, 2021)

    Changes

    • Implement timeout on acquiring connection from pool(#766)

    • Deprecate blocking connection.cancel() method (#570)

    • Fix IsolationLevel.read_committed and introduce IsolationLevel.default (#770)

    • Fix python 3.8 warnings in tests (#771)

    • Don't run ROLLBACK when the connection is closed (#778)

    • Multiple cursors support (#801)

    • Set max supported sqlalchemy version (#805)

    • Pop loop in connection init due to backward compatibility (#808)

  • v1.1.0(Dec 9, 2020)

    Changes

    • Fix on_connect multiple call on acquire(#552)

    • Fix python 3.8 warnings(#622)

    • Bump minimum psycopg version to 2.8.4(#754)

    • Fix Engine.release method to release connection in any way(#756)

    • Added missing slots to context managers (#763)

  • v1.0.0(Sep 23, 2019)

  • v0.16.0(Jan 25, 2019)

    Changes

    • Fix select priority name (#525)

    • Rename psycopg2 to psycopg2-binary to fix deprecation warning (#507)

    • Fix #189 hstore when using ReadDictCursor (#512)

    • close cannot be used while an asynchronous query is underway (#452)

    • sqlalchemy adapter trx begin allow transaction_mode (#498)

  • v0.15.0(Aug 14, 2018)

  • v0.14.0(May 11, 2018)

  • v0.13.2(Jan 3, 2018)

  • v0.13.1(Sep 10, 2017)

  • v0.13.0(Dec 2, 2016)

    Changes

    • Add async with support to .begin_nested() #208
    • Fix connection.cancel() #212 #223
    • Raise informative error on unexpected connection closing #191
    • Added support for python types columns issues #217
    • Added support for default values in SA table issues #206
  • v0.11.0(Sep 12, 2016)

  • v0.10.0(Jul 16, 2016)

    Connection pool is more stable now.

    Changes

    • Refactor tests to use dockerized Postgres server #107
    • Reduce default pool minsize to 1 #106
    • Explicitly enumerate packages in setup.py #85
    • Remove expired connections from pool on acquire #116
    • Don't crash when Connection is GC'ed #124
    • Use loop.create_future() if available
  • v0.9.2(Jan 31, 2016)

  • v0.9.1(Jan 17, 2016)

  • v0.9.0(Jan 14, 2016)

    Added support for async/await syntax in the SQLAlchemy layer

    Changes

    • Add async context managers for transactions #91
    • Support async iterator in ResultProxy #92
    • Add async with for engine #90
  • v0.8.0(Dec 31, 2015)

    Fixed a bug with processing timeouts and added support for async with statements in the core API.

    The SQLAlchemy layer is not converted yet.

    Full list of changes:

    • Add PostgreSQL notification support #58
    • Support pools with unlimited size #59
    • Cancel current DB operation on asyncio timeout #66
    • Add async with support for Pool, Connection, Cursor #88
  • v0.7.0(Apr 22, 2015)

    Major aiopg 0.7.0 release.

    CHANGES

    • Get rid of resource leak on connection failure.
    • Report ResourceWarning on non-closed connections.
    • Deprecate iteration protocol support in cursor and ResultProxy.
    • Release sa connection to pool on connection.close().
  • v0.6.1(Feb 3, 2015)

  • v0.5.2(Dec 8, 2014)

  • v0.5.1(Oct 31, 2014)

  • v0.5.0(Oct 31, 2014)

    Major release 0.5

    Pool reimplemented to never exceed pool max size.

    Added method to close/terminate pool and engine and wait for closing.

    Full list of changes:

    • Add .terminate() to Pool and Engine
    • Reimplement connection pool (now pool size cannot be greater than pool.maxsize)
    • Add .close() and .wait_closed() to Pool and Engine
    • Add minsize, maxsize, size and freesize properties to sa.Engine
    • Support echo parameter for logging executed SQL commands
    • Connection.close() is not a coroutine (but we keep backward compatibility).
  • v0.4.1(Oct 2, 2014)

    Minor release.

    • Documentation updated

    • aiopg.cursor instances are now iterable; you can fetch SELECT results in the following way:

      yield from cur.execute("SELECT * FROM tbl")
      for item in cur:
          process_item(item)

  • v0.4.0(Oct 2, 2014)

    I am proud to announce the new aiopg release 0.4.0. This is a major release; I highly recommend switching to it.

    The main features are:

    • support for JSON and HSTORE PostgreSQL types
    • accepting extended SQLAlchemy types (with proper conversions) as input parameters for SELECT/INSERT/UPDATE etc.
    • added timeouts for SQL queries
  • v0.2.3(Jun 12, 2014)

    This is a bugfix release intended to fix a bug in the connection pool implementation (see #14).

    You can find the documentation here:

    http://aiopg.readthedocs.org/en/0.2/

    The latest version is also available on PyPI:

    https://pypi.python.org/pypi/aiopg/0.2.3
