A wrapper around asyncpg for use with sqlalchemy

Overview

asyncpgsa

A python library wrapper around asyncpg for use with sqlalchemy

Backwards incompatibility notice

Since this library is still pre-1.0, the API might change. I will do my best to minimize changes, and any changes that are added will be mentioned here. You should pin the version for production apps.

  1. 0.9.0 changed the dialect from psycopg2 to pypostgres. This should be mostly backwards compatible, but if you notice weird issues, this is why. You can now plug in your own dialect using pg.init(..., dialect=my_dialect), or by setting the dialect on the pool. See the top of the connection file for an example of creating a dialect. Please let me know if the change from psycopg2 to pypostgres broke you. If this happens enough, I might make psycopg2 the default.

  2. 0.18.0 removes the Record proxy objects that wrapped asyncpg's records. Now asyncpgsa just returns whatever asyncpg would return. This is a HUGE backwards-incompatible change, but most people just used record._data to get the object directly anyway. This means dot notation for columns is no longer possible; you need to access columns by exact name with dictionary notation.

  3. 0.18.0 removed the insert method. We found this method was confusing and unnecessary, as SQLAlchemy can do it for you if you define your table with a primary key.
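
The access change in 0.18.0 can be illustrated with a plain dict standing in for an asyncpg Record (a real Record supports the same mapping-style access, plus `.keys()` and `.values()`):

```python
# A plain dict stands in for an asyncpg Record here; real Records
# support the same bracket/key access shown below.
row = {"id": 1, "name": "alice"}

# pre-0.18.0 (asyncpgsa's Record proxy): row.name
# 0.18.0 and later (plain asyncpg Record): dictionary notation only
assert row["name"] == "alice"
assert list(row.keys()) == ["id", "name"]
```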

sqlalchemy ORM

Currently this repo does not support the SA ORM, only SA Core.

As we at Canopy do not use the ORM, if you would like ORM support, feel free to PR it. You would need to create an "engine" interface, and that should be it. Then you can bind your sessions to the engine.

sqlalchemy Core

This repo supports sqlalchemy core. Go here for examples.

Docs

Go here for docs.

Examples

Go here for examples.

install

pip install asyncpgsa

Note: You should not have asyncpg in your requirements at all. This lib will pull down the correct version of asyncpg for you. If you have asyncpg in your requirements, you could get a newer version than this library supports.

Contributing

To contribute or build this locally, see contributing.md

FAQ

Does SQLAlchemy integration defeat the point of using asyncpg as a backend (performance)?

I don't think so. asyncpgsa is written in a way where any query can be a string instead of an SA object; in that case you get near-asyncpg speeds, as no SA code is run.

However, when running SA queries and comparing this to aiopg, it still seems to be faster. Here is a very basic timeit test comparing the two: https://gist.github.com/nhumrich/3470f075ae1d868f663b162d01a07838

aiopg.sa: 9.541276566000306
asyncpgsa: 6.747777451004367

So it seems it is still faster to use asyncpg; in other words, this library doesn't add any overhead beyond what is in aiopg.sa.

Versioning

This software follows Semantic Versioning.

Comments
  • Improved record type.

    Improved record type.

    Adds support to the asyncpgsa Record type for all the methods allowed in the asyncpg Record type.

    This change allows using the subscript operator, with SQLAlchemy Core table columns as indexes for the subscript.

    On top of:

    row.type_id
    1
    

    You can now:

    row['type_id']
    1
    row[models.t_table.c.type_id]
    1
    

    It calls the asyncpg Record for every other method call not implemented here, so you can use row.keys(), row.values(), or row_list = [dict(p) for p in rows], the last being useful for serialization, for example.

    Related to #32

    opened by skuda 8
  • Add an example involving range types

    Add an example involving range types

    I have a table like the following:

    CREATE TABLE users (
      id SERIAL,
      name VARCHAR(15),
      active DATERANGE,
    
      PRIMARY KEY (id)
    )
    

    that with plain SQLAlchemy+psycopg2 I can query with:

        today = date.today()
        q = select([users.c.id]) \
            .where(users.c.name == 'test') \
            .where(users.c.active.contains(today))
        r = session.execute(q)
    

    With the following asyncpgsa transliteration:

        today = date.today()
        q = sa.select([users.c.id]) \
            .where(users.c.name == 'test') \
            .where(users.c.active.contains(today))
        r = await pg.fetchrow(q)
    

    I get an error:

    /usr/local/lib/python3.5/site-packages/asyncpgsa/pgsingleton.py:65: in fetchrow
        return await conn.fetchrow(query, *args, timeout=timeout)
    /usr/local/lib/python3.5/site-packages/asyncpgsa/connection.py:91: in fetchrow
        result = await self.connection.fetchrow(query, *params, *args, **kwargs)
    /usr/local/lib/python3.5/site-packages/asyncpg/connection.py:259: in fetchrow
        False, timeout)
    asyncpg/protocol/protocol.pyx:157: in bind_execute (asyncpg/protocol/protocol.c:45856)
        ???
    asyncpg/protocol/prepared_stmt.pyx:122: in asyncpg.protocol.protocol.PreparedStatementState._encode_bind_msg (asyncpg/protocol/protocol.c:42239)
        ???
    asyncpg/protocol/codecs/base.pyx:123: in asyncpg.protocol.protocol.Codec.encode (asyncpg/protocol/protocol.c:12276)
        ???
    asyncpg/protocol/codecs/base.pyx:90: in asyncpg.protocol.protocol.Codec.encode_range (asyncpg/protocol/protocol.c:11868)
        ???
    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
    
    >   ???
    E   TypeError: list, tuple or Range object expected (got type <class 'datetime.date'>)
    
    asyncpg/protocol/codecs/range.pyx:53: TypeError
    

    which seems odd to me: what I want is to test whether a single date is within a period...

    I tried the following code, passing a one-item tuple instead:

        today = date.today()
        q = sa.select([users.c.id]) \
            .where(users.c.name == 'test') \
            .where(users.c.active.contains((today,)))
        r = await pg.fetchrow(q)
    

    that works, but it seems to convert the argument to a [today, +infinity] range, which is different from what I'm seeking.

    What am I missing?

    Hacktoberfest 
    opened by lelit 8
  • `ConnectionTransactionContextManager` slowly drains the pool

    `ConnectionTransactionContextManager` slowly drains the pool

    We are using transaction as follows

    async with pool.transaction() as conn:
        ...
    

    and from time to time (in our use case it ranges from once a day to once a week) our service freezes. When we investigated the issue, we noticed that all connections in the pool were marked as used and no free connection was available. We traced the problem to ConnectionTransactionContextManager.

    If a cancellation or timeout is raised during __aenter__ or __aexit__, there is no guarantee that the connection is returned to the pool, and so it slowly drains. We confirmed that this is the issue, because we changed the code to this

    async with pool.acquire() as conn:
        async with conn.transaction():
            ....
    

    and our problem stopped.

    I can create a PR to fix this if you like.
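
The difference between the two forms can be sketched with a toy pool: cancelling a task inside the `async with` body still runs `__aexit__`, so the nested `pool.acquire()` form reliably releases the connection. Names like `FakePool` are purely illustrative, not asyncpgsa's API:

```python
import asyncio

# Illustrative stand-in for a connection pool (not asyncpgsa's API):
# acquire() is an async context manager that always releases in __aexit__.
class FakePool:
    def __init__(self):
        self.in_use = 0

    def acquire(self):
        pool = self

        class _Acquire:
            async def __aenter__(self):
                pool.in_use += 1
                return "conn"

            async def __aexit__(self, *exc):
                pool.in_use -= 1  # runs even when the body is cancelled

        return _Acquire()

async def worker(pool):
    async with pool.acquire():
        await asyncio.sleep(10)  # task gets cancelled while "in transaction"

async def demo():
    pool = FakePool()
    task = asyncio.create_task(worker(pool))
    await asyncio.sleep(0.01)
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        pass
    return pool.in_use

assert asyncio.run(demo()) == 0  # the connection was returned to the pool
```

Cancellation arriving *inside* `__aenter__`/`__aexit__` themselves is the hard case the issue describes; the nested form narrows that window to asyncpg's own, well-tested acquire/release code.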

    opened by villlem 7
  • None result should be None (fetchrow and fetch)

    None result should be None (fetchrow and fetch)

    query = ...  # some SA select query
    row = await conn.fetchrow(query)  # conn is SAConnection
    

    When the underlying asyncpg connection returns None as the "empty" result, row should become None as well. Currently, row is not None but row.row is None. This would confuse users who expect the API semantics to be the same as asyncpg's.

    The same applies to fetch and execute, which return a RecordGenerator, because it does not check for None when instantiating Record objects.

    Q: What's the purpose of having the seemingly redundant Record and RecordGenerator classes? They look like a no-op proxy to asyncpg's return objects.
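
The expected pass-through behaviour can be sketched with a stub connection (`FakeConn` and the wrapper are hypothetical, not the library's code):

```python
import asyncio

class FakeConn:
    """Stub standing in for an asyncpg connection (hypothetical)."""
    async def fetchrow(self, query, *args):
        return None  # simulate an empty result set

async def fetchrow(conn, query, *args):
    record = await conn.fetchrow(query, *args)
    # Pass None through unchanged instead of wrapping it in a proxy
    # whose .row attribute is None.
    return record

assert asyncio.run(fetchrow(FakeConn(), "SELECT 1 WHERE false")) is None
```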

    opened by achimnol 7
  • Fixes #6: Adds a debug param for printing queries

    Fixes #6: Adds a debug param for printing queries

    Fixes #6

    • Adds a debug param to various components to enable printing of some debug statements, currently only the query.
    • Makes the connection protected to maintain library standard

    I intentionally left off the params. I could see a case for someone wanting that at some point, but there is often enough info in there that you may not want it shown.

    As a start it just prints to stdout; I'm not sure if you would like to use the logger, so users can customize it, or print to stderr instead. In either case, I can happily fix it.

    opened by mattrasband 6
  • Cannot compile DropTable and CreateTable queries

    Cannot compile DropTable and CreateTable queries

    Hello!

    I try to create a table via this code:

    import asyncio
    
    from asyncpgsa import pg
    import sqlalchemy as sa
    from sqlalchemy.sql.ddl import CreateTable, DropTable
    
    users = sa.Table(
        'users', sa.MetaData(),
        sa.Column('id', sa.Integer, primary_key=True),
        sa.Column('name', sa.VARCHAR(255)),
    )
    
    
    async def main():
        await pg.init('postgresql://localhost/test')
        await pg.query(DropTable(users))
        await pg.query(CreateTable(users))
    
    
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())
    

    and got this error from asyncpgsa:

    Traceback (most recent call last):
      File "/project/main.py", line 20, in <module>
        loop.run_until_complete(main())
      File "/project/.venv/python3.6/asyncio/base_events.py", line 467, in run_until_complete
        return future.result()
      File "/project/main.py", line 15, in main
        await pg.query(DropTable(users))
      File "/project/.venv/lib/python3.6/site-packages/asyncpgsa/pgsingleton.py", line 65, in query
        compiled_q, compiled_args = compile_query(query)
      File "/project/.venv/lib/python3.6/site-packages/asyncpgsa/connection.py", line 60, in compile_query
        compiled_params = sorted(compiled.params.items())
    AttributeError: 'NoneType' object has no attribute 'items'
    

    The same code using sqlalchemy works fine:

    import sqlalchemy as sa
    from sqlalchemy.sql.ddl import CreateTable, DropTable
    
    users = sa.Table(
        'users', sa.MetaData(),
        sa.Column('id', sa.Integer, primary_key=True),
        sa.Column('name', sa.VARCHAR(255)),
    )
    
    engine = sa.create_engine('postgresql://localhost/test')
    engine.execute(DropTable(users))
    engine.execute(CreateTable(users))
    

    Thanks!

    Hacktoberfest 
    opened by abogushov 5
  • `conn.execute(query)` is not working with params

    `conn.execute(query)` is not working with params

    Since version 0.14.0, conn.execute() has not been working.

    Simple example:

    async with app.pool.acquire() as conn:
        query = table.update().values(some_column=123).where(table.c.id == 1)
        await conn.execute(query)
    

    This goes to asyncpgsa/connection.py:

    
    async def execute(self, script, *args, **kwargs) -> str:
            # script, params = compile_query(script, dialect=self._dialect)
            result = await super().execute(script, *args, **kwargs)
            return RecordGenerator(result)
    
    

    And there *args will be an empty tuple, so when it gets to super().execute(script, *args, **kwargs), which is asyncpg.Connection.execute():

    async def execute(self, query: str, *args, timeout: float=None) -> str:
        ...
        if not args:  # <- WILL ENTER HERE
                return await self._protocol.query(query, timeout)
    
         _, status, _ = await self._execute(query, args, 0, timeout, True)
        return status.decode()
    

    But if I use version 0.13.0, everything is fine, because it passes *params and *args to asyncpg.Connection.execute(), which then skips the `if not args` clause in asyncpg.Connection.execute:

    async def execute(self, script, *args, **kwargs) -> str:
        script, params = compile_query(script, dialect=self._dialect)
        result = await self._connection.execute(script, *params, *args, **kwargs)
        return RecordGenerator(result)
    

    I am certainly grateful for this awesome package, and I am willing to help, but I'm not sure how, because the recent change to omit compile_query in the execute statement was made for a reason, and I don't really understand what that reason is.

    For the moment I have rolled back to 0.13.0 and everything works as expected.
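
The dispatch described above can be reduced to a toy version of asyncpg's `if not args` branch (the returned strings are purely illustrative):

```python
# Toy model of asyncpg.Connection.execute()'s dispatch: with no args it
# takes the simple-query path; with args it binds parameters.
def execute(query, *args):
    if not args:
        return "simple-query path (parameters never bound)"
    return "extended-query path (parameters bound)"

# If the SA query is compiled but its params are dropped (as in 0.14.0),
# a parameterized statement hits the wrong path:
assert execute("UPDATE t SET c = $1") == "simple-query path (parameters never bound)"
# Passing the compiled params through (as 0.13.0 did) takes the right one:
assert execute("UPDATE t SET c = $1", 123) == "extended-query path (parameters bound)"
```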

    opened by vishes-shell 5
  • Asyncpg record objects

    Asyncpg record objects

    This removes the Record and RecordGenerator classes in favor of asyncpg Records. This means dot notation must be replaced with dict notation when accessing values, making this a breaking change.

    It also fixes an issue where the _dialect property on SAConnection was not getting set, causing issues when using the query builder with table definitions containing JSON types. This is also potentially breaking, as it enables automatic type coercion into a JSON string, even if you have already passed the structure through json.dumps.

    It also re-enables the use of compile_query for several of the SAConnection types, so that SA query-builder queries will get properly converted.

    It also proxies the cursor method of the parent connection, allowing SA queries to be used with cursors directly.

    opened by rlittlefield 4
  • support asyncpg 0.22.0

    support asyncpg 0.22.0

    Hi!

    The new release of asyncpg, 0.22.0, breaks compatibility with asyncpgsa, and here is a simple fix for that. Please review and let me know if I need to fix something.

    Thanks!

    opened by Gr1N 3
  • Connect Error When using dsn and password contains '#'

    Connect Error When using dsn and password contains '#'

    When I use a pool and my db password is "dY8*6fN6Z#xSOg$wG9zDATTe", pool = await pg.create_pool(dsn) raises an error:

    hostlist_ports.append(int(hostspec_port))
    ValueError: invalid literal for int() with base 10: 'dY8*6fN6Z'
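
A '#' in a DSN is parsed as a URL fragment delimiter, which is why the parser sees a truncated password where it expects host:port. Percent-encoding the password with the standard library avoids this (user/host/db names here are placeholders):

```python
from urllib.parse import quote

password = "dY8*6fN6Z#xSOg$wG9zDATTe"
# '#' (and other reserved characters) must be percent-encoded, otherwise
# the URL parser treats everything after '#' as a fragment.
dsn = f"postgresql://user:{quote(password, safe='')}@localhost:5432/db"

assert "#" not in dsn
assert "%23" in dsn  # the encoded '#'
```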
    
    opened by wujunkui 3
  • Dependency on old version of asyncpg

    Dependency on old version of asyncpg

    When using asyncpgsa it's hard to use the latest version of asyncpg, because install_requires pins asyncpg~=0.12.0. What's more, I'd like to use asyncpgsa only as a SQL query compiler, without using its context managers (as shown here). In that case asyncpg is not really a dependency of asyncpgsa any more in practice; it's just a query compiler. How about moving asyncpg to extras_require? Or splitting the library into two separate packages (the compiler and the asyncpg adapter (any better name?)).

    opened by bitrut 3
  • Missing releases after 0.25 version on github

    Missing releases after 0.25 version on github

    Today I caught an incompatibility between asyncpgsa==0.26.1 and asyncpg==0.22.0 (but found that the latest release works well!).

    So I opened the releases page to see the diff between 0.26.1 and latest, but found that the latest release described is 0.25. Perhaps you would like to add the other releases to make it easier to see the diffs?

    Of course it is possible to build a diff against the commits, but releases seem a little bit faster/easier (and it is a bit confusing to have only part of the releases on GitHub).

    opened by alvassin 0
  • compile_query fails with sqlalchemy 1.4

    compile_query fails with sqlalchemy 1.4

    This works fine with asyncpgsa version 0.27.1 and sqlalchemy: 1.3.23, but fails when I upgrade to sqlalchemy 1.4.4.

    import asyncpgsa
    import sqlalchemy as sa
    
    Thing = sa.Table("thing", sa.MetaData(), sa.Column("name", sa.Text))
    query = Thing.insert().values(name="name").returning(Thing)
    asyncpgsa.compile_query(query)
    
    

    This prevents me from running any code that uses an INSERT statement.

    The error seems to be here:

    https://github.com/CanopyTax/asyncpgsa/blob/master/asyncpgsa/connection.py#L36

        if isinstance(query.parameters, list):
    

    because sqlalchemy.sql.dml.Insert no longer has a parameters attribute.

    For now I'm dealing with this by pinning to an earlier sqlalchemy version. It's not clear to me what the fix should be, because the sqlalchemy internals seem to have changed considerably between versions.

    opened by trvrmcs 4
  • the right way to autoload/reflect

    the right way to autoload/reflect

    Please explain how to do Table(autoload=True) or metadata.reflect() with asyncpgsa properly.

    I tried passing reflect(bind=asyncpgsa_connection) and it's an AttributeError game: it needs connect, then dialect; I feel like this is going in the wrong direction.

    People use psycopg2 to create a sqlalchemy engine and then an async driver for normal work. That may work if reflection doesn't use driver specifics. I'll use that crutch if nothing else works.

    Can you create sqlalchemy Engine with asyncpgsa?

    opened by temoto 7
  • process_result_value callback for column type is not handled

    process_result_value callback for column type is not handled

    I need a TypeDecorator for my data column; it is handled by SQLAlchemy + psycopg2 correctly. Is it possible to make it work with asyncpgsa?

    import asyncio
    from datetime import datetime
    
    from asyncpgsa import PG
    from pytz import timezone
    from sqlalchemy import (
        Column, DateTime, Integer, MetaData, Table, TypeDecorator, create_engine
    )
    
    
    DB_URL = 'postgresql://user:[email protected]/db'
    
    
    class DateTime_(TypeDecorator):
        impl = DateTime
    
        def __init__(self):
            TypeDecorator.__init__(self, timezone=True)
    
        def process_bind_param(self, value, dialect):
            if value is not None:
                return datetime.fromtimestamp(value, timezone('UTC'))
    
        def process_result_value(self, value, dialect):
            return int(value.timestamp())
    
    
    metadata = MetaData()
    example_table = Table('example', metadata,
                          Column('id', Integer, primary_key=True),
                          Column('some_date', DateTime_))
    
    engine = create_engine(DB_URL)
    
    # Create table & add row
    metadata.create_all(engine)
    engine.execute(example_table.insert().values({
        'some_date': int(datetime.now().timestamp())
    }))
    
    # psycopg2 with sqlalchemy handles process_result_value correctly
    rows = engine.execute(example_table.select()).fetchall()
    assert isinstance(rows[0]['some_date'], int)
    
    
    # asyncpgsa does not handle process_result_value callback
    async def main():
        db = PG()
        await db.init(DB_URL)
        rows = await db.fetch(example_table.select())
        assert isinstance(rows[0]['some_date'], datetime)  # True
        assert isinstance(rows[0]['some_date'], int)  # False!
    
    
    asyncio.run(main())
    

    Perhaps such callbacks should be called in SAConnection.execute?
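
Until the library handles this, one workaround is a post-fetch conversion pass that applies the column's process_result_value manually. This is a stdlib-only sketch of the idea, mirroring the DateTime_ decorator above (the dict row and column name are illustrative):

```python
from datetime import datetime, timezone

def process_result_value(value, dialect=None):
    # Mirror the TypeDecorator above: timezone-aware datetime -> unix timestamp
    return int(value.timestamp())

# A fetched row, with the raw datetime asyncpg would return:
row = {"id": 1, "some_date": datetime(2020, 1, 1, tzinfo=timezone.utc)}

# Apply the column's result processor by hand after fetching:
converted = {k: (process_result_value(v) if k == "some_date" else v)
             for k, v in row.items()}

assert converted["some_date"] == 1577836800  # 2020-01-01T00:00:00Z as epoch
```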

    opened by alvassin 1
  • bindparam does not work

    bindparam does not work

    Perhaps you could advise a quick fix or workaround for this? I need to update many different rows with different values. Previously I used bindparam for that.

    import asyncio
    
    from asyncpgsa import PG
    from sqlalchemy import Table, MetaData, Column, Integer, String, bindparam, \
        create_engine
    
    metadata = MetaData()
    
    table = Table(
        'citizens',
        metadata,
        Column('test_id', Integer, primary_key=True),
        Column('name', String, nullable=False),
    )
    
    DB_URL = 'postgresql://user:[email protected]/db'
    
    
    async def main():
        # create table
        engine = create_engine(DB_URL)
        metadata.create_all(engine)
    
        # connect to db
        pg = PG()
        await pg.init(DB_URL)
        async with pg.transaction() as conn:
            # create
            query = table.insert().values([
                {'name': str(i)} for i in range(10)
            ]).returning(table)
            rows = await conn.fetch(query)
    
            # update
            query = table.update().values(name=bindparam('name'))
            await conn.execute(query, [
                {'test_id': row['test_id'], 'name': row['name'] + '_new'}
                for row in rows
            ])
    
            # check
            # asyncpg.exceptions.NotNullViolationError: null value in column "name" violates not-null constraint
            # DETAIL:  Failing row contains (31, null).
            results = await conn.execute(table.select())
            print(results)
    
    asyncio.run(main())
    
    opened by alvassin 4
Releases(0.25.0)
  • 0.25.0(Feb 11, 2019)

  • 0.24.0(Jun 30, 2018)

  • 0.23.0(May 16, 2018)

  • 0.21.0(Mar 14, 2018)

  • 0.20.0(Mar 12, 2018)

  • 0.19.2(Feb 13, 2018)

  • 0.19.1(Feb 13, 2018)

    Using plain `with` on an async context manager now raises RuntimeError instead of SyntaxError, to be consistent with other aio libs. This is technically not backwards compatible, but since no one should be relying on this behavior, it is not being considered a minor release.

    Source code(tar.gz)
    Source code(zip)
  • 0.19.0(Feb 13, 2018)

    This is mostly a bug fix update, but it also includes bumping asyncpg to 0.14 which could potentially break things.

    Other than documentation fixes, there are only two changes from 0.18.2:

    • Bump asyncpg to 0.14
    • Check the default value if it is callable in sqlalchemy parsing (see #73). Thanks to @kamikaze
    Source code(tar.gz)
    Source code(zip)
  • 0.18.2(Dec 7, 2017)

  • 0.18.0(Oct 3, 2017)

    This version will break everything. It removes the record proxy object and now returns the object returned by asyncpg, meaning that columns are accessed with dictionary bracket notation rather than dot notation. It also removes the insert function and adds automatic json parsing.

    record.my_column becomes record['my_column']

    Full list of changes:

    • Dropped asyncpgsa's Record and RecordGenerator in favor of asyncpg's Records and lists, causing dot notation to be replaced with dict notation when accessing properties (row['id'] instead of row.id)
    • The connection cursor function now uses compile_query, so it can handle the same query objects as the other functions like fetchval
    • Removed the insert function from SAConnection. SA query objects will need to use query.returning(sa.text('*')) or the like to get the values you want explicitly, and all inserts will have to move to one of the other methods like fetchval. Plain-text queries will need to add ' RETURNING id ' or something similar to the query itself instead of relying on it being added by SAConnection. It should be noted that sqlalchemy does this for you as long as your table definition has a primary key.
    • The postgres SA dialect is now loaded into the SAConnection class. This will cause breaking changes when using behaviors that differ based on dialect, such as JSON column types in SA table definitions. In that case, it will actually json dumps and loads automatically for you, which will break if you did it manually in your own code.
    Source code(tar.gz)
    Source code(zip)
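
The JSON caveat in the last bullet is easy to reproduce with the standard library: if the dialect now calls json.dumps for you, pre-dumping the value yourself double-encodes it, and the column round-trips as a JSON string rather than an object:

```python
import json

payload = {"a": 1}

# Manually dumping before the dialect dumps again double-encodes:
stored = json.dumps(json.dumps(payload))

# One decode gives back a *string*, not the original dict...
assert json.loads(stored) == '{"a": 1}'
# ...and only a second decode recovers the object.
assert json.loads(json.loads(stored)) == payload
```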
  • 0.17.0(Aug 29, 2017)

    Found a way to fix an issue with the tests without requiring dynamic subclassing. This fixes the regression in 0.16.0, so SAConnection is now referenceable again.

    Source code(tar.gz)
    Source code(zip)
  • 0.16.0(Aug 28, 2017)

    This version fixes the testing framework, which was broken in 0.14.x.

    Your tests might not be completely backwards compatible. If that is the case, please file an issue so I can make sure we handle all cases.

    Major change: This also breaks asyncpgsa.connection.SAConnection, so any direct references to that class will fail. If you have a use case for referencing it, please let me know.

    Source code(tar.gz)
    Source code(zip)
  • 0.15.0(Aug 14, 2017)

  • 0.14.2(Aug 10, 2017)

  • 0.14.1(Jul 21, 2017)

    Changes from 0.14.0:

    • bugfix, pg.execute() now maintains args when passing a string. See https://github.com/CanopyTax/asyncpgsa/pull/39. Shoutout to @fantix for the fix.
    Source code(tar.gz)
    Source code(zip)
  • v0.9.0(Apr 21, 2017)

    changes

    1. Changed the dialect from psycopg2 to pypostgres. This should be mostly backwards compatible, but if you notice weird issues, this is why.
    2. You can now plug in your own dialect using pg.init(..., dialect=my_dialect), or by setting the dialect on the pool. See the top of the connection file for an example of creating a dialect. Please let me know if the change from psycopg2 to pypostgres broke you. If this happens enough, I might make psycopg2 the default.
    Source code(tar.gz)
    Source code(zip)