Piccolo - A fast, user friendly ORM and query builder which supports asyncio.




Read the docs.


Some of its standout features are:

  • Support for sync and async.
  • A built-in playground, which makes learning a breeze.
  • Tab completion support - works great with iPython and VSCode.
  • Batteries included - a User model, authentication, migrations, an admin GUI, and more.
  • Modern Python - fully type annotated.


The syntax is clean and expressive.

You can use it as a query builder:

# Select:
await Band.select().where(Band.popularity > 100)

# Join:
await Band.select(Band.name, Band.manager.name)

# Delete:
await Band.delete().where(Band.popularity < 1000)

# Update:
await Band.update({Band.popularity: 10000}).where(
    Band.name == 'Pythonistas'
)

Or like a typical ORM:

# To create a new object:
b = Band(name='C-Sharps', popularity=100)
await b.save()

# To fetch an object from the database, and update it:
b = await Band.objects().where(Band.name == 'Pythonistas').first()
b.popularity = 10000
await b.save()

# To delete:
await b.remove()


Installing with PostgreSQL driver:

pip install piccolo[postgres]

Installing with SQLite driver:

pip install piccolo[sqlite]

Building a web app?

Let Piccolo scaffold you an ASGI web app, using Piccolo as the ORM:

piccolo asgi new

Starlette, FastAPI, and BlackSheep are currently supported.

Are you a Django user?

We have a handy page which shows the equivalent of common Django queries in Piccolo.


See Read the docs.

  • First class Cockroach DB support.

    First class Cockroach DB support.

    Friction-free Cockroach DB support in Piccolo ORM.

    PR: https://github.com/piccolo-orm/piccolo/pull/608

    We'd like to see Cockroach Labs promote Piccolo as one of the recommended async ORMs, if not the recommended one. asyncpg has some built-in support to help us get there.

    In the meantime, I'll be adding notes on how to use CockroachDB with Piccolo, and tracking general progress.

    Quick local setup for testing

    • Download/extract (single binary): https://www.cockroachlabs.com/docs/stable/install-cockroachdb-linux.html
    • Start Cockroach: ./cockroach start-single-node --insecure --store=node1 --listen-addr=localhost:26257 --http-addr=localhost:8080 --background
    • Enter SQL client: ./cockroach sql --insecure
      • Can create database ...; use ...; from here.
    • CTRL-C or killall cockroach to send a graceful termination signal.

    The default account is root, with no password. There's a stats dashboard at http://localhost:8080
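    Once the node is running, pointing Piccolo at it is just an engine configuration. A minimal sketch, assuming the CockroachEngine class introduced by this work, and the insecure single-node settings above (the database name is illustrative — create it first in the SQL client):

    ```python
    # piccolo_conf.py - a sketch, assuming CockroachEngine lives in
    # piccolo.engine.cockroach, and accepts the same `config` dict of
    # asyncpg connection kwargs as PostgresEngine.
    from piccolo.engine.cockroach import CockroachEngine

    DB = CockroachEngine(config={
        "host": "localhost",
        "port": 26257,          # CockroachDB's default SQL port
        "user": "root",         # the default insecure account, no password
        "database": "piccolo",  # e.g. created via `create database piccolo;`
    })
    ```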

    Both Piccolo Examples and Piccolo Admin are working.


    What we still need

    • [x] ~~Get build/test suite running on Github Actions for Piccolo QA tooling.~~
    • [x] ~~Some way to identify we're running under Cockroach DB.~~
      • ~~Added CockroachDB Engine.~~
    • [x] ~~Get test suite mostly / fully passing.~~
    • [x] ~~Basic documentation~~
    • [x] ~~Time Travel queries using AS OF SYSTEM TIME~~
      • ~~Some way to do Time Travel SELECT queries (SELECT ... AS OF SYSTEM TIME)~~
      • ~~https://www.cockroachlabs.com/docs/stable/as-of-system-time.html~~
      • :heavy_check_mark: See: https://github.com/piccolo-orm/piccolo/issues/607#issuecomment-1251330590
    • [x] Sharding for Sequential Indexes
      • :warning: Done, but will go into its own PR after this one: https://github.com/piccolo-orm/piccolo/issues/622
      • Some way to do "Hash Sharded" Indexes (INDEX ... USING HASH)
      • https://www.cockroachlabs.com/docs/stable/hash-sharded-indexes.html

    :bug: Known issues in CockroachDB or Asyncpg

    [ ] ARRAY support between asyncpg ↔️ cockroach.

    • Currently affected by: https://github.com/cockroachdb/cockroach/issues/71908
    • See: piccolo/tests/columns/test_array.py
    • M2M works, but as_list=True does not because it uses ARRAY

    cannot drop UNIQUE constraint "..." using ALTER TABLE DROP CONSTRAINT, use DROP INDEX CASCADE instead

    • Tracked: https://github.com/cockroachdb/cockroach/issues/42840
    • Affecting test_alter_column_unique() in test_migrations.py

    column ... is of type int[] and thus is not indexable / column ... is of type varchar[] and thus is not indexable

    • Tracked: https://github.com/cockroachdb/cockroach/issues/35730
    • Affecting both test_array_column_integer() and test_array_column_varchar() in test_migration_manager.py

    :moon: Future nice to haves

    CockroachDB executes ALTER COLUMN queries asynchronously.

    • In the short term this can be mitigated by keeping migrations small.
    • At odds with Piccolo's assumption that the database is altered before the next migration operation begins. CockroachDB will give a warning: table <...> is currently undergoing a schema change if a later operation tries to modify the table before the asynchronous query finishes. CockroachDB may fix this in the future.
    opened by gnat 74
  • support-custom-field-primary-key


    Fixes #107 #32 and maybe #63. Here's my approach to supporting UUID primary keys:

    • Removed the PrimaryKey column, as it only supported Integer value types.
    • Stopped defining id on Table by default; if no primary key is defined for a table, a Serial primary key is added.
    • I think the Serial column is Postgres specific - maybe change it to something like Django's AutoField?

    This works fine in Postgres, but in SQLite, after an insert we return cursor.lastrowid, which gives SQLite's special virtual ROWID column rather than our id. I could come up with two approaches:

    • SQLite has added a RETURNING clause, so we can do a sane INSERT query the same way as Postgres - but then we'd only support SQLite 3.35 onwards.
    • Or we get cursor.lastrowid and do another query to fetch our id based on ROWID, like SELECT id FROM table WHERE ROWID = <cursor.lastrowid>. I wanted to get some feedback on this.
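    The second approach can be demonstrated with the stdlib sqlite3 module (the table and values here are illustrative): even with a non-integer primary key, lastrowid gives us the ROWID, which a second query can turn into the real id:

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE band (id TEXT PRIMARY KEY, name TEXT)")

    cursor = conn.execute(
        "INSERT INTO band (id, name) VALUES (?, ?)",
        ("6ba7b810-9dad-11d1-80b4-00c04fd430c8", "Pythonistas"),
    )

    # cursor.lastrowid is SQLite's internal ROWID, not our TEXT primary key,
    # so a second query is needed to recover the actual id:
    row = conn.execute(
        "SELECT id FROM band WHERE ROWID = ?", (cursor.lastrowid,)
    ).fetchone()
    print(row[0])  # 6ba7b810-9dad-11d1-80b4-00c04fd430c8
    ```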

    Any of these should work now:

    class MyTableDefaultPrimaryKey(Table):
        name = Varchar()

    class MyTablePrimaryKeyInteger(Table):
        id = Integer(null=False, primary_key=True)
        name = Varchar()

    class MyTablePrimaryKeyUUID(Table):
        id = UUID(null=False, primary_key=True)
        name = Varchar()
    opened by aminalaee 38
  • sqlite timeout handling

    sqlite timeout handling

    Hi, I often get sqlite3.OperationalError: database is locked. What would be the best way to avoid this?

    I thought about increasing the lock timeout, but am not sure how to correctly do this when using this library.

    Here https://stackoverflow.com/a/39265148 the pragma statement PRAGMA journal_mode=WAL is recommended. How would I use this with Piccolo? I know I can execute raw SQL, but doesn't this need to be re-executed for each connection?

    Thank you for your help!
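    At the sqlite3 level, the two mitigations mentioned look like this (a sketch using the stdlib driver; wiring the same settings into Piccolo's SQLiteEngine is a separate question). Note that journal_mode=WAL is persistent - it's stored in the database file itself, so it only needs to be set once per database, not once per connection:

    ```python
    import os
    import sqlite3
    import tempfile

    # WAL mode isn't meaningful for ':memory:', so use a real file.
    path = os.path.join(tempfile.mkdtemp(), "demo.sqlite")

    # A larger timeout makes writers wait for the lock to be released,
    # instead of raising 'database is locked' straight away.
    conn = sqlite3.connect(path, timeout=30)

    # WAL lets readers keep working while a writer holds the lock.
    # The setting is persisted in the database file itself.
    mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
    print(mode)  # 'wal'
    conn.close()
    ```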

    opened by powellnorma 24
  • Schema generation enhancements

    Schema generation enhancements

    Piccolo now has a command for generating a schema from an existing database.

    piccolo schema generate

    The current implementation doesn't cover every possible edge case and database feature, so there's room for improvement. For example:

    Column defaults

    Getting the defaults for a column, and reflecting them in the Piccolo column. For example, if the column has a default of 1 in the database, we want that to be reflected in the Piccolo schema as:

    class MyTable(Table):
        my_column = Integer(default=1)

    On Delete / On Update

    The database values aren't currently reflected in the ForeignKey column definitions.

    class MyTable(Table):
        my_column = ForeignKey(on_delete=OnDelete.cascade)

    Decimal precision

    The precision of a decimal column in the database currently isn't reflected in the Decimal column definitions.

    class MyTable(Table):
        my_column = Decimal(digits=(5,2))

    It's a fairly beginner friendly feature to work on, because even though the code is fairly advanced, it's completely self contained, and doesn't require extensive knowledge of the rest of Piccolo.

    enhancement good first issue 
    opened by dantownsend 24
  • add callbacks to queries

    add callbacks to queries

    resolves #344

    Here's my initial work on adding callbacks to queries, as per #344.

    I would appreciate a code review before I take this any further! I also have a couple of questions:

    1. At this stage you only get a callback when success = True. Where and how would you suggest we detect a query failing and invoke the callback with success = False?
    2. In piccolo/query/base.py, at the end of the run function, it branches based on whether there's more than one querystring. I was unable to cover the second branch with a test, as I couldn't figure out how you'd end up with more than one querystring. I'd appreciate some pointers on how that bit works.

    I look forward to hearing your thoughts.

    opened by backwardspy 21
  • Roadmap


    • [x] JSON fields
    • [ ] More aggregation functions - e.g. SUM
    • [x] Aliases in SELECT queries (e.g. SELECT foo AS bar FROM my_table)
    • [ ] Improved support for data migrations
    • [x] Fixtures
    • [x] Schema visualisation tool
    • [x] piccolo sql_shell new command - execute SQL directly in a shell, via the configured engine
    • [x] piccolo shell new command - like the playground, but for your own tables
    • [x] Shorter aliases for commands in CLI, or tab completion
    • [x] Improved documentation for how to test Piccolo apps
    • [ ] Nested transactions
    • [x] Move Pydantic integration from Piccolo API into Piccolo
    • [x] Allow the ForeignKey references argument to accept a string such as 'my_app.SomeTable' when circular imports are an issue.
    • [ ] Support Postgres Enum types.
    • [ ] Row level constraints.
    • [ ] Making foreign key constraints optional on ForeignKey columns.
    • [x] Allow UUIDs as primary keys?
    • [ ] Subqueries
    opened by dantownsend 15
  • add-scripts-folder


    I think it'd be cleaner to move all the scripts to a scripts folder. I've moved all the existing scripts and updated the GitHub workflow, but here are some questions/ideas:

    • Should we move piccolo.sh from the root of the project too?
    • Coverage isn't enforced - should we add a minimum coverage requirement, so the pipeline fails on missing coverage, using something like Codecov?
    • release.sh could be a GitHub workflow which uploads to PyPI when a GitHub release is created.
    opened by aminalaee 14
  • postgres camelCase columns

    postgres camelCase columns

    Hello again, I tried out PostgresEngine but noticed that using camelCase column names leads to problems. Is this something the ORM could potentially "abstract away", or would that be a bad idea?

    opened by powellnorma 13
  • M2M prototype

    M2M prototype

    An initial prototype for M2M (many to many). It still needs a bunch of testing, but it's a start.

    class Band(Table):
        name = Varchar()
        genres = M2M(
            LazyTableReference("GenreToBand", module_path=__name__)
    class Genre(Table):
        name = Varchar()
    # The joining table:
    class GenreToBand(Table):
        band = ForeignKey(Band)
        genre = ForeignKey(Genre)

    We have a new class called M2M, which references a joining table. Explicitly defining the joining table has the following benefits:

    • Simplifies migrations, as we don't have to magically create new tables.
    • Makes it easier to retrofit Piccolo onto an existing schema.
    • Can add additional columns to the joining table if we want (see below).
    class GenreToBand(Table):
        band = ForeignKey(Band)
        genre = ForeignKey(Genre)
        created_on = Timestamp()
        reasoning = Text()

    With M2M you can then do queries like this:

    >>> await Band.select(Band.name, Band.genres(Genre.name))
    [
        {"name": "Pythonistas", "genres": ["Rock", "Folk"]},
        {"name": "Rustaceans", "genres": ["Folk"]},
        {"name": "C-Sharps", "genres": ["Rock", "Classical"]},
    ]
    opened by dantownsend 13
  • BlockingIOError: [Errno 35] write could not complete without blocking

    BlockingIOError: [Errno 35] write could not complete without blocking

    From piccolo shell run, the commands




    work fine, but...


    displays the result and then raises a BlockingIOError exception:

    ~/.venv/lib/python3.9/site-packages/IPython/core/displayhook.py in write_format_data(self, format_dict, md_dict)
        190         try:
    --> 191             print(result_repr)
        192         except UnicodeEncodeError:
        193             # If a character is not supported by the terminal encoding replace
    BlockingIOError: [Errno 35] write could not complete without blocking

    ---------------------------------------------------------------------------
    BlockingIOError                           Traceback (most recent call last)

    I don't know what's going on.

    opened by hipertracker 13
  • get_output_schema exception with table name

    get_output_schema exception with table name

    get_output_schema will raise an exception containing the name of the corresponding table if something goes wrong while generating the tables. Based on #447.

    Another alternative would be to add an errors attribute to the OutputSchema class, and handle it the same way warnings are handled.

    opened by AliSayyah 12
  • Best practices for writing unit tests

    Best practices for writing unit tests

    In Django, each unit test is wrapped in a database transaction. This means that any changes made during the test (e.g. calling an endpoint which adds a new object) are rolled back after the test ends.

    This is very useful because the state of the DB does not change between tests, and assertions are easy to write and very easy to understand (like: I call an endpoint to create an object, then I check that there is exactly 1 object in the database).

    Is there a way of achieving the same transactionality in Piccolo and FastAPI? Are there any examples?
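    The Django behaviour described above boils down to wrapping each test in a transaction that is always rolled back afterwards. Here's the idea sketched with the stdlib sqlite3 module (not Piccolo's own transaction API - with Piccolo you'd use the engine's transaction context manager instead):

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:", isolation_level=None)  # manual transactions
    conn.execute("CREATE TABLE band (name TEXT)")

    def run_test_in_transaction(test):
        # Wrap the test in a transaction, and always roll it back, so the
        # database state is identical before and after each test.
        conn.execute("BEGIN")
        try:
            test()
        finally:
            conn.execute("ROLLBACK")

    def my_test():
        conn.execute("INSERT INTO band (name) VALUES ('Pythonistas')")
        count = conn.execute("SELECT COUNT(*) FROM band").fetchone()[0]
        assert count == 1  # the object exists within the test...

    run_test_in_transaction(my_test)

    # ...but the table is empty again once the test is over:
    remaining = conn.execute("SELECT COUNT(*) FROM band").fetchone()[0]
    print(remaining)  # 0
    ```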

    opened by PawelRoman 3
  • make Readable for boolean fields more user-friendly

    make Readable for boolean fields more user-friendly

    As far as I can see, when using Readable and get_readable, boolean fields are represented as f and t (False and True). It would be good if it was possible to represent these in a more user-friendly way - maybe even make the default similar to Python's string representation of booleans:

    In [1]: str(True)
    Out[1]: 'True'
    In [2]: str(False)
    Out[2]: 'False'
    opened by trondhindenes 5
  • [Enhancement] use Typer and questionary for the CLI

    [Enhancement] use Typer and questionary for the CLI


    Below, I created a mock application with Typer to illustrate my point of view:

    • Better information display
    • Better error display
    • Type checking everywhere
    • Modularity
    • Shell completion
    • All the powerful stuff that Rich provides (e.g. tables, spinners, progress bars, ...)

    Using questionary provides a much more intuitive user experience.

    If any of the maintainers finds this interesting, you can assign it to me and I will work on it 🙂

    opened by kareemmahlees 3
  • [Question] Need help with table joins

    [Question] Need help with table joins

    I have the following tables.

    class Status(Table):
        code = SmallInt()
        name = Varchar()

    class Report(Table):
        id = UUID(primary_key=True)
        form = Varchar()
        # columns

    class Message(Table):
        id = UUID(primary_key=True)
        report_id = ForeignKey(references=Report)
        status = ForeignKey(references=Status, target_column=Status.code)
        # columns

    class Client(Table):
        report_id = ForeignKey(references=Report)
        type = SmallInt()
        # columns

    I want to make this query.

    SELECT message.id, message.report_id, report.form, status.name, client.type
    FROM message
    LEFT JOIN report ON report.id = message.report_id
    LEFT JOIN status ON status.code = message.status
    LEFT JOIN client ON client.report_id = message.report_id
    WHERE message.status != 0

    This variant returns a rather obvious error.

    result = await Message.select(
        Message.report_id.client.type  # AttributeError: 'ForeignKey' object has no attribute 'client'
    ).where(
        Message.status.code != 0
    )

    How can I join a table with client details?

    opened by WoolenSweater 8
  • pydantic error none.not_allowed on select().output(nested=True)

    pydantic error none.not_allowed on select().output(nested=True)

    Hello! I'm using Piccolo with FastAPI and Pydantic. I need to select() db records as dicts and pass them to Pydantic, but it fails on ForeignKey columns, because with the nested=True param Piccolo returns nested dicts with all keys present but None values:

    {'id': 1, 'one': {'desc': None, 'id': None, 'name': None, 'number': None}}

    instead of just

    {'id': 1, 'one': None}

    import asyncio
    from typing import Optional

    import piccolo.columns as cl
    from piccolo.engine.sqlite import SQLiteEngine
    from piccolo.table import Table
    from pydantic import BaseModel

    DB = SQLiteEngine()

    class TableOne(Table, db=DB):
        name = cl.Varchar()
        desc = cl.Varchar()
        number = cl.Integer()

    class TableTwo(Table, db=DB):
        one = cl.ForeignKey(TableOne)

    class TableOneModel(BaseModel):
        id: int
        name: str
        desc: str
        number: int

    class TableTwoModel(BaseModel):
        id: int
        one: Optional[TableOneModel] = None

    async def main():
        await TableOne.alter().drop_table(if_exists=True)
        await TableOne.create_table()
        await TableTwo.alter().drop_table(if_exists=True)
        await TableTwo.create_table()
        await TableTwo.objects().create()
        result = await TableTwo.select(
            TableTwo.all_columns(), TableTwo.one.all_columns()
        ).output(nested=True)
        TableTwoModel(**result[0])  # fails here

    if __name__ == '__main__':
        asyncio.run(main())
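    Until this is resolved, one possible client-side workaround is to collapse a nested dict whose values are all None before handing the row to Pydantic (collapse_empty is a hypothetical helper, not part of Piccolo):

    ```python
    def collapse_empty(row: dict, key: str) -> dict:
        """If the nested dict under `key` contains only None values, replace it with None."""
        nested = row.get(key)
        if isinstance(nested, dict) and all(v is None for v in nested.values()):
            return {**row, key: None}
        return row

    row = {'id': 1, 'one': {'desc': None, 'id': None, 'name': None, 'number': None}}
    print(collapse_empty(row, 'one'))  # {'id': 1, 'one': None}
    ```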
    opened by metakot 7
  • 0.104.0(Jan 2, 2023)

    Major improvements to Piccolo's typing / auto completion support.

    >>> bands = await Band.objects()  # List[Band]
    >>> band = await Band.objects().first()  # Optional[Band]
    >>> bands = await Band.select().output(as_json=True)  # str

    Happy New Year!

    Source code(tar.gz)
    Source code(zip)
  • 0.103.0(Dec 22, 2022)


    This allows you to access features in the database which aren't exposed directly by Piccolo. For example, Postgres functions:

    from piccolo.query import SelectRaw
    >>> await Band.select(
    ...     Band.name,
    ...     SelectRaw("log(popularity) AS log_popularity")
    ... )
    [{'name': 'Pythonistas', 'log_popularity': 3.0}]

    Large fixtures

    Piccolo can now load large fixtures using piccolo fixtures load. The rows are inserted in batches, so the database adapter doesn't raise any errors. Thanks to @lgblkb for reporting this.

    Source code(tar.gz)
    Source code(zip)
  • 0.102.0(Dec 17, 2022)

    Migration file names

    The naming convention for migrations has changed slightly. It used to be just a timestamp.

    By convention, Python files should start with a letter, and should only contain a-z, 0-9 and _, so the new format converts the timestamp into a valid Python module name.

    Note: You can name a migration file anything you want (it's the ID value inside it which is important), so this change doesn't break anything.
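    The rename amounts to turning the timestamp into a legal Python module name. A rough illustrative sketch (the exact naming scheme, including the app-name prefix, is an assumption here - not Piccolo's actual implementation):

    ```python
    import re

    def migration_module_name(app_name: str, timestamp: str) -> str:
        # Lowercase the timestamp and replace any run of characters outside
        # a-z / 0-9 with a single underscore, giving a valid module name.
        slug = re.sub(r"[^a-z0-9]+", "_", timestamp.lower()).strip("_")
        return f"{app_name}_{slug}"

    print(migration_module_name("my_app", "2022-12-17T13:40:07"))
    # my_app_2022_12_17t13_40_07
    ```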

    Enhanced Pydantic configuration

    We now expose all of Pydantic's configuration options to create_pydantic_model:

    class MyPydanticConfig(pydantic.BaseConfig):
        extra = 'forbid'

    model = create_pydantic_model(
        table=MyTable,
        pydantic_config_class=MyPydanticConfig
    )

    Thanks to @waldner for this.

    Other changes

    • Fixed a bug with get_or_create and null columns (thanks to @powellnorma for reporting this issue).
    • Updated the Starlite ASGI template, so it uses the latest syntax for mounting Piccolo Admin (thanks to @sinisaos for this, and the Starlite team).
    Source code(tar.gz)
    Source code(zip)
  • 0.101.0(Dec 8, 2022)

  • 0.100.0(Dec 7, 2022)

    Array columns now support choices.

    class Ticket(Table):
        class Extras(str, enum.Enum):
            drink = "drink"
            snack = "snack"
            program = "program"

        extras = Array(Varchar(), choices=Extras)

    We can then use the Enum in our queries:

    >>> await Ticket.insert(
    ...     Ticket(extras=[Extras.drink, Extras.snack]),
    ...     Ticket(extras=[Extras.program]),
    ... )

    This will also be supported in Piccolo Admin in the next release.

    Source code(tar.gz)
    Source code(zip)
  • 0.99.0(Dec 2, 2022)

    You can now use the returning clause with delete queries.

    For example:

    >>> await Band.delete().where(Band.popularity < 100).returning(Band.name)
    [{'name': 'Terrible Band'}, {'name': 'Awful Band'}]

    This also means you can count the number of deleted rows:

    >>> len(await Band.delete().where(Band.popularity < 100).returning(Band.id))

    Thanks to @waldner for adding this feature.

    Source code(tar.gz)
    Source code(zip)
  • 0.98.0(Dec 2, 2022)

    SQLite TransactionType

    You can now specify the transaction type for SQLite.

    This is useful when using SQLite in production, as it's possible to get database locked errors if you're running lots of transactions concurrently, and don't use the correct transaction type.

    In this example we use an IMMEDIATE transaction:

    from piccolo.engine.sqlite import TransactionType

    async with Band._meta.db.transaction(
        transaction_type=TransactionType.immediate
    ):
        band = await Band.objects().get_or_create(Band.name == 'Pythonistas')

    We've added a new tutorial which explains this in more detail, as well as other tips for using asyncio and SQLite together effectively.

    Thanks to @powellnorma and @sinisaos for their help with this.

    Other changes

    • Fixed a bug with camelCase column names (we recommend using snake_case, but sometimes it's unavoidable when using Piccolo with an existing schema). Thanks to @sinisaos for this.
    • Fixed a typo in the docs with raw queries - thanks to @StitiFatah for this.
    Source code(tar.gz)
    Source code(zip)
  • 0.97.0(Nov 22, 2022)

    Some big improvements to order_by clauses.

    It's now possible to combine ascending and descending:

    await Band.select(
        Band.name,
        Band.popularity
    ).order_by(
        Band.name
    ).order_by(
        Band.popularity,
        ascending=False
    )
    You can also order by anything you want using OrderByRaw:

    from piccolo.query import OrderByRaw

    await Band.select(
        Band.name
    ).order_by(
        OrderByRaw('random()')
    )
    Source code(tar.gz)
    Source code(zip)
  • 0.96.0(Nov 9, 2022)

    Added the auto_update argument to Column. Its main use case is columns like modified_on where we want the value to be updated automatically each time the row is saved.

    class Band(Table):
        name = Varchar()
        popularity = Integer()
        modified_on = Timestamp(
            auto_update=datetime.datetime.now
        )

    # The `modified_on` column will automatically be updated to the current
    # timestamp:
    >>> await Band.update({
    ...     Band.popularity: Band.popularity + 100
    ... }).where(
    ...     Band.name == 'Pythonistas'
    ... )

    It works with MyTable.update and also when using the save method on an existing row.

    Source code(tar.gz)
    Source code(zip)
  • 0.95.0(Oct 28, 2022)

    Made improvements to the Piccolo playground.

    • Syntax highlighting is now enabled.
    • The example queries are now async (iPython supports top level await, so this works fine).
    • You can optionally use your own iPython configuration with piccolo playground run --ipython_profile (for example if you want a specific colour scheme, rather than the one we use by default).

    Thanks to @haffi96 for this. See PR 656.

    Source code(tar.gz)
    Source code(zip)
  • 0.94.0(Oct 23, 2022)

    Fixed a bug with MyTable.objects().create() and columns which are not nullable. Thanks to @metakot for reporting this issue.

    We used to use logging.getLogger(__file__), but as @Drapersniper pointed out, the Python docs recommend logging.getLogger(__name__), so it has been changed.

    Source code(tar.gz)
    Source code(zip)
  • 0.93.0(Oct 21, 2022)

    • Fixed a bug with nullable JSON / JSONB columns and create_pydantic_model - thanks to @eneacosta for this fix.
    • Made the Time column type importable from piccolo.columns.
    • Python 3.11 is now supported.
    • Postgres 9.6 is no longer officially supported, as it's end-of-life, but Piccolo should continue to work with it just fine for now.
    • Improved docs for transactions, added docs for the as_of clause in CockroachDB (thanks to @gnat for this), and added docs for add_raw_backwards.
    Source code(tar.gz)
    Source code(zip)
  • 0.92.0(Oct 11, 2022)

    Added initial support for CockroachDB (thanks to @gnat for this massive contribution).

    Fixed Pylance warnings (thanks to @MiguelGuthridge for this).

    Source code(tar.gz)
    Source code(zip)
  • 0.91.0(Sep 24, 2022)

    Added support for Starlite, another great ASGI framework, like BlackSheep, Starlette, and FastAPI.

    If you use piccolo asgi new you'll see it as an option when selecting a router.

    Thanks to @sinisaos for adding this, and @peterschutt for helping debug ASGI mounting.

    Source code(tar.gz)
    Source code(zip)
  • 0.90.0(Sep 7, 2022)

    Fixed an edge case, where a migration could fail if:

    • 5 or more tables were being created at once.
    • They all contained foreign keys to each other, as shown below.
    class TableA(Table):
        pass

    class TableB(Table):
        fk = ForeignKey(TableA)

    class TableC(Table):
        fk = ForeignKey(TableB)

    class TableD(Table):
        fk = ForeignKey(TableC)

    class TableE(Table):
        fk = ForeignKey(TableD)

    Thanks to @sumitsharansatsangi for reporting this issue.

    Source code(tar.gz)
    Source code(zip)
  • 0.89.0(Sep 1, 2022)

    Made it easier to access the Email columns on a table.

    >>> MyTable._meta.email_columns
    [MyTable.email_column_1, MyTable.email_column_2]

    This was added for Piccolo Admin.

    Source code(tar.gz)
    Source code(zip)
  • 0.88.0(Aug 27, 2022)

    Fixed a bug with migrations - when using db_column_name it wasn't being used in some alter statements. Thanks to @theelderbeever for reporting this issue.

    class Concert(Table):
        # We use `db_column_name` when the column name is problematic - e.g. if
        # it clashes with a Python keyword.
        in_ = Varchar(db_column_name='in')
    Source code(tar.gz)
    Source code(zip)
  • 0.87.0(Aug 25, 2022)

    When using get_or_create with prefetch the behaviour was inconsistent - it worked as expected when the row already existed, but prefetch wasn't working if the row was being created. This now works as expected:

    >>> band = await Band.objects(Band.manager).get_or_create(
    ...     (Band.name == "New Band 2") & (Band.manager == 1)
    ... )
    >>> band.manager
    <Manager: 1>
    >>> band.manager.name
    "Mr Manager"

    Thanks to @backwardspy for reporting this issue.

    Source code(tar.gz)
    Source code(zip)
  • 0.86.0(Aug 20, 2022)

    Added the Email column type. It's basically identical to Varchar, except that when we use create_pydantic_model we add email validation to the generated Pydantic model.

    from piccolo.columns.column_types import Email
    from piccolo.table import Table
    from piccolo.utils.pydantic import create_pydantic_model
    class MyTable(Table):
        email = Email()
    model = create_pydantic_model(MyTable)
    model(email="not a valid email")  # ValidationError!

    Thanks to @sinisaos for implementing this feature.

    Source code(tar.gz)
    Source code(zip)
  • 0.85.1(Aug 18, 2022)

  • 0.85.0(Aug 18, 2022)

    You can now append items to an array in an update query:

    await Ticket.update({
        Ticket.seat_numbers: Ticket.seat_numbers + [1000]
    }).where(Ticket.id == 1)

    Currently Postgres only. Thanks to @sumitsharansatsangi for suggesting this feature.

    Source code(tar.gz)
    Source code(zip)
  • 0.84.0(Aug 12, 2022)

    You can now preview the DDL statements which will be run by Piccolo migrations.

    piccolo migrations forwards my_app --preview

    Thanks to @AliSayyah for adding this feature.

    Source code(tar.gz)
    Source code(zip)
  • 0.83.0(Aug 11, 2022)

    We added support for Postgres read-slaves a few releases ago, but the batch clause didn't support it until now. Thanks to @guruvignesh01 for reporting this issue, and @sinisaos for help implementing it.

    # Returns 100 rows at a time from read_replica_db
    async with await Manager.select().batch(
        batch_size=100,
        node="read_replica_db",
    ) as batch:
        async for _batch in batch:
            print(_batch)
    Source code(tar.gz)
    Source code(zip)
  • 0.82.0(Jul 19, 2022)

    Traditionally, when instantiating a Table, you passed in column values using kwargs:

    >>> await Manager(name='Guido').save()

    You can now pass in a dictionary instead, which makes it easier for static typing analysis tools like Mypy to detect typos.

    >>> await Manager({Manager.name: 'Guido'}).save()

    See PR 565 for more info.

    Source code(tar.gz)
    Source code(zip)
  • 0.81.0(Jul 19, 2022)

    Added the returning clause to insert and update queries.

    This can be used to retrieve data from the inserted / modified rows.

    Here's an example, where we update the unpopular bands, and retrieve their names, in a single query:

    >>> await Band.update({
    ...     Band.popularity: Band.popularity + 5
    ... }).where(
    ...     Band.popularity < 10
    ... ).returning(
    ...     Band.name
    ... )
    [{'name': 'Bad sound band'}, {'name': 'Tone deaf band'}]
    Source code(tar.gz)
    Source code(zip)
  • 0.80.2(Jul 6, 2022)

    Fixed a bug with Combination.__str__, which meant that when printing out a query for debugging purposes it wasn't showing correctly (courtesy @destos).

    Source code(tar.gz)
    Source code(zip)
  • 0.80.1(Jul 3, 2022)

    Fixed a bug with Piccolo Admin and _get_related_readable, which is used to show a human friendly identifier for a row, rather than just the ID.

    Thanks to @ethagnawl and @sinisaos for their help with this.

    Source code(tar.gz)
    Source code(zip)
  • 0.80.0(Jun 21, 2022)

    There was a bug when doing joins with a JSONB column with as_alias.

    class User(Table, tablename="my_user"):
        name = Varchar(length=120)
        config = JSONB(default={})

    class Subscriber(Table, tablename="subscriber"):
        name = Varchar(length=120)
        user = ForeignKey(references=User)

    async def main():
        # This was failing:
        await Subscriber.select(
            Subscriber.name,
            Subscriber.user.config.as_alias("config")
        )

    Thanks to @Anton-Karpenko for reporting this issue.

    Even though this is a bug fix, the minor version number has been bumped, because the fix resulted in some refactoring of Piccolo's internals, so it's a fairly big change.

    Source code(tar.gz)
    Source code(zip)
  • 0.79.0(Jun 17, 2022)

  • 0.78.0(Jun 16, 2022)

    Added the callback clause to select and objects queries (courtesy @backwardspy). For example:

    >>> await Band.select().callback(my_callback)

    The callback can be a normal function or async function, which is called when the query is successful. The callback can be used to modify the query's output.

    It allows for some interesting and powerful code. Here's a very simple example where we modify the query's output:

    >>> def get_uppercase_names() -> Select:
    ...     def make_uppercase(response):
    ...         return [{'name': i['name'].upper()} for i in response]
    ...     return Band.select(Band.name).callback(make_uppercase)
    >>> await get_uppercase_names().where(Band.manager.name == 'Guido')
    [{'name': 'PYTHONISTAS'}]

    Here's another example, where we perform validation on the query's output:

    >>> def get_concerts() -> Select:
    ...     def check_length(response):
    ...         if len(response) == 0:
    ...             raise ValueError('No concerts!')
    ...         return response
    ...     return Concert.select().callback(check_length)
    >>> await get_concerts().where(Concert.band_1.name == 'Terrible Band')
    ValueError: No concerts!

    At the moment, callbacks are just triggered when a query is successful, but in the future other callbacks will be added, to hook into more of Piccolo's internals.
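    Under the hood, the idea is simply that the query object stores a chain of callbacks and threads the response through each one, awaiting any that are async. A minimal standalone sketch of this pattern (illustrative only, with made-up names, not Piccolo's actual internals):

    ```python
    # Toy demonstration of the callback-chaining pattern described above.
    # The Query class and its methods are invented for illustration.
    import asyncio
    import inspect
    from typing import Any, Callable


    class Query:
        def __init__(self, rows):
            self._rows = rows
            self._callbacks: list[Callable] = []

        def callback(self, fn: Callable) -> "Query":
            # Callbacks are chained -- each receives the previous output.
            self._callbacks.append(fn)
            return self

        async def run(self) -> Any:
            response = self._rows
            for fn in self._callbacks:
                result = fn(response)
                # Support both normal and async callbacks.
                if inspect.isawaitable(result):
                    result = await result
                response = result
            return response


    def make_uppercase(response):
        return [{"name": row["name"].upper()} for row in response]


    result = asyncio.run(
        Query([{"name": "Pythonistas"}]).callback(make_uppercase).run()
    )
    print(result)  # [{'name': 'PYTHONISTAS'}]
    ```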
