Piccolo - A fast, user-friendly ORM and query builder which supports asyncio.

Overview

Piccolo


A fast, user-friendly ORM and query builder which supports asyncio. Read the docs.

Features

Some of its standout features are:

  • Support for sync and async.
  • A builtin playground, which makes learning a breeze.
  • Tab completion support - works great with iPython and VSCode.
  • Batteries included - a User model, authentication, migrations, an admin GUI, and more.
  • Modern Python - fully type annotated.

Syntax

The syntax is clean and expressive.

You can use it as a query builder:

# Select:
await Band.select(
    Band.name
).where(
    Band.popularity > 100
).run()

# Join:
await Band.select(
    Band.name,
    Band.manager.name
).run()

# Delete:
await Band.delete().where(
    Band.popularity < 1000
).run()

# Update:
await Band.update({Band.popularity: 10000}).where(
    Band.name == 'Pythonistas'
).run()

Or like a typical ORM:

# To create a new object:
b = Band(name='C-Sharps', popularity=100)
await b.save().run()

# To fetch an object from the database, and update it:
b = await Band.objects().where(Band.name == 'Pythonistas').first().run()
b.popularity = 10000
await b.save().run()

# To delete:
await b.remove().run()

Installation

Installing with PostgreSQL driver:

pip install piccolo[postgres]

Installing with SQLite driver:

pip install piccolo[sqlite]

Building a web app?

Let Piccolo scaffold you an ASGI web app, using Piccolo as the ORM:

piccolo asgi new

Starlette, FastAPI, and BlackSheep are currently supported.

Are you a Django user?

We have a handy page which shows the equivalent of common Django queries in Piccolo.

Documentation

See Read the docs.

Issues
  • support-custom-field-primary-key

    support-custom-field-primary-key

    Fixes #107 and #32, and maybe #63. Here's my approach to supporting UUID primary keys:

    • Removed the PrimaryKey column, as it only supported Integer value types.
    • Stopped defining id on Table by default; if no primary key is defined for a table, a Serial primary key is added automatically.
    • I think the Serial column is Postgres specific - maybe change it to something like Django's AutoField?

    This works fine in Postgres, but in SQLite, after an insert we return cursor.lastrowid, which is not our id but SQLite's special virtual ROWID column. I could come up with two approaches:

    • SQLite has added a RETURNING clause, so we can do a sane INSERT query the same way as in Postgres - but then we'd only support SQLite 3.35 onwards: here
    • Or we get cursor.lastrowid and do another query to fetch our id based on ROWID, like SELECT id FROM table WHERE ROWID = <cursor.lastrowid>. I wanted to get some feedback on this (a sketch of both options follows).
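    For illustration, here's a rough sketch of both options using the stdlib sqlite3 module (the table and values are made up; option 1 needs an SQLite build >= 3.35):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE band (id TEXT PRIMARY KEY, name TEXT)")

    # Option 1: RETURNING clause (requires SQLite 3.35+)
    row = conn.execute(
        "INSERT INTO band (id, name) VALUES (?, ?) RETURNING id",
        ("uuid-1", "Pythonistas"),
    ).fetchone()

    # Option 2: take cursor.lastrowid, then map ROWID back to our primary key
    cursor = conn.execute(
        "INSERT INTO band (id, name) VALUES (?, ?)", ("uuid-2", "C-Sharps")
    )
    row = conn.execute(
        "SELECT id FROM band WHERE ROWID = ?", (cursor.lastrowid,)
    ).fetchone()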

    Any of these should work now:

    class MyTableDefaultPrimaryKey(Table):
        name = Varchar()
    
    
    class MyTablePrimaryKeyInteger(Table):
        id = Integer(null=False, primary_key=True)
        name = Varchar()
    
    
    class MyTablePrimaryKeyUUID(Table):
        id = UUID(null=False, primary_key=True)
        name = Varchar()
    
    opened by aminalaee 38
  • Schema generation enhancements

    Schema generation enhancements

    Piccolo now has a command for generating a schema from an existing database.

    piccolo schema generate
    

    The current implementation doesn't cover every possible edge case and database feature, so there's room for improvement. For example:

    Column defaults

    Getting the defaults for a column, and reflecting them in the Piccolo column. For example, if the column has a default of 1 in the database, we want that to be reflected in the Piccolo schema as:

    class MyTable(Table):
        my_column = Integer(default=1)
    

    On Delete / On Update

    The database values aren't currently reflected in the ForeignKey column definitions.

    class MyTable(Table):
        my_column = ForeignKey(on_delete=OnDelete.cascade)
    

    Decimal precision

    The precision of a decimal column in the database currently isn't reflected in the Decimal column definitions.

    class MyTable(Table):
        my_column = Decimal(digits=(5,2))
    

    It's a fairly beginner friendly feature to work on, because even though the code is fairly advanced, it's completely self contained, and doesn't require extensive knowledge of the rest of Piccolo.

    enhancement good first issue 
    opened by dantownsend 24
  • Roadmap

    Roadmap

    • [x] JSON fields
    • [ ] More aggregation functions - e.g. SUM
    • [x] Aliases in SELECT queries (e.g. SELECT foo AS bar FROM my_table)
    • [ ] Improved support for data migrations
    • [x] Fixtures
    • [x] Schema visualisation tool
    • [x] piccolo sql_shell new command - execute SQL directly in a shell, via the configured engine
    • [x] piccolo shell new command - like the playground, but for your own tables
    • [x] Shorter aliases for commands in CLI, or tab completion
    • [ ] Improved documentation for how to test Piccolo apps
    • [ ] Nested transactions
    • [x] Move Pydantic integration from Piccolo API into Piccolo
    • [x] Allow the ForeignKey references argument to accept a string such as 'my_app.SomeTable' when circular imports are an issue.
    • [ ] Support Postgres Enum types.
    • [ ] Row level constraints.
    • [ ] Making foreign key constraints optional on ForeignKey columns.
    • [x] Allow UUIDs as primary keys?
    • [ ] Subqueries
    enhancement 
    opened by dantownsend 15
  • add-scripts-folder

    add-scripts-folder

    I think it'd be cleaner to move all the scripts into a scripts folder, following this. I've moved all the existing scripts and updated the GitHub workflow. But here are some questions/ideas:

    • Should we move piccolo.sh from the root of the project too?
    • Coverage is not enforced - should we add a minimum coverage, so the pipeline fails on missing coverage, using something like Codecov?
    • release.sh could become a GitHub workflow that uploads to PyPI when a GitHub release is created.
    opened by aminalaee 14
  • sample basic migrations ui updates

    sample basic migrations ui updates

    opened by wmshort 13
  • BlockingIOError: [Errno 35] write could not complete without blocking

    BlockingIOError: [Errno 35] write could not complete without blocking

    From piccolo shell run, the commands

    Book.select().limit(4).run_sync()

    and

    Book.select().offset(1).limit(4).run_sync()

    work fine, but...

    Book.select().limit(5).run_sync()

    displays the result and then raises a BlockingIOError exception:

    ~/.venv/lib/python3.9/site-packages/IPython/core/displayhook.py in write_format_data(self, format_dict, md_dict)
        189
        190         try:
    --> 191             print(result_repr)
        192         except UnicodeEncodeError:
        193             # If a character is not supported by the terminal encoding replace
    
    BlockingIOError: [Errno 35] write could not complete without blocking
    ---------------------------------------------------------------------------
    BlockingIOError                           Traceback (most recent call last)
    

    I don't know what's going on.

    opened by hipertracker 13
  • create multiple tables at once

    create multiple tables at once

    This PR adds the function described in issue #233, along with a unit test to exercise it.

    opened by AliSayyah 12
  • Reflection

    Reflection

    A simple implementation of reflection. It's obviously far from done. I just wanted to keep you updated on the progress, and use your guidance along the way. TableStorage is a singleton object, so it can be accessed globally. ImmutableDict is implemented to restrict write/delete operations on the tables attribute of TableStorage.

    >>> from piccolo.tablestorage import TableStorage
    >>> storage = TableStorage()
    >>> await storage.reflect(schema_name="music")
    >>> storage.tables  # `Table` objects are accessible from `storage.tables`.
    
    {"music.Band": <class 'Band'>, ... }
    
    >>> Band = storage.tables["music.Band"]
    >>> Band.select().run_sync()
    
    

    to-do:

    • ~~single table reflection ( needs changes in schema app.)~~ Done
    • caching
    • ~~performance improvements~~ Done

    Known problems:

    • ~~schema name doesn't get included in tablename.~~ fixed
    opened by AliSayyah 12
  • Better schema generation

    Better schema generation

    1. References from other schemas are now supported.

    2. schema generate will now recursively create referenced tables. For example: if table Schema1.A references table Schema2.B, and table Schema2.B references table Schema2.C, then table Schema2.C will be created too.

    I'm going to add the tests later.

    opened by AliSayyah 12
  • Add defaults in automatic schema generation

    Add defaults in automatic schema generation

    Partially addresses #193.

    Initial attempt at default parsing and generation. This PR is mainly intended to kick off review, and changes will almost certainly need to be made. But I need another set of eyeballs on it, to see if I am even on the right track!

    opened by wmshort 11
  • allow additional fields to be added to the Pydantic model's schema

    allow additional fields to be added to the Pydantic model's schema

    Related to https://github.com/piccolo-orm/piccolo_admin/issues/73

    opened by dantownsend 0
  • Fix auto migration import conflict2

    Fix auto migration import conflict2

    This PR attempts to fix conflicting/clashing globals in migration auto-generation (#306).

    To achieve this, the UniqueGlobalNames class has been added to serialisation.py. This class is meant to store global names as class attributes. The serialisation logic has been rewritten to check global names against the class, and raise errors if there is a conflict.

    This approach should help future developers write conflict-free serialisation code. It does, however, come at the cost of readability in some places (a rough sketch of the idea is shown after the lists below):

    Pros:

    • It is much harder to add conflicts to serialisation logic
    • Globals for column types are automatically discovered (in UniqueGlobalNamesMeta) and added to UniqueGlobalNames, meaning that future conflicts may be found without any developer intervention.

    Cons:

    • Developers must add new globals to the UniqueGlobalNames class
    • Accessing UniqueGlobalNames.COLUMN_VARCHAR is less readable than simply using the string "Varchar"
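    To make the idea concrete, here's a very rough sketch (not the actual implementation) of how globals can be stored as class attributes and checked for clashes:

    class UniqueGlobalNames:
        # Each global that may appear in a generated migration is named once here.
        COLUMN_VARCHAR = "Varchar"
        COLUMN_DECIMAL = "Decimal"
        STDLIB_DECIMAL = "Decimal"  # deliberately clashes, to show the check below

        @classmethod
        def check_for_conflicts(cls) -> None:
            names = [value for key, value in vars(cls).items() if key.isupper()]
            duplicates = {name for name in names if names.count(name) > 1}
            if duplicates:
                raise ValueError(f"Conflicting global names: {duplicates}")

    Calling UniqueGlobalNames.check_for_conflicts() would then flag the kind of Decimal clash reported in #306.
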
    opened by 0scarB 3
  • [BUG]: Decimal import conflict in auto generated migration

    [BUG]: Decimal import conflict in auto generated migration

    Auto-generating migrations for tables with columns of type Decimal causes conflicting imports

    from piccolo.columns.column_types import Decimal
    

    and

    from decimal import Decimal
    

    This means that the following code in the migration fails

    manager.add_column(
            ...
            column_class_name="Decimal",
            column_class=Decimal,  # Wrong `Decimal` - is `decimal.Decimal`, should be `piccolo.columns.column_types.Decimal`!!!
            params={
                "default": Decimal("0"),  # Correct `Decimal` - is `decimal.Decimal`
                "digits": None,
                "null": False,
                "primary_key": False,
                "unique": False,
                "index": False,
                "index_method": IndexMethod.btree,
                "choices": None,
                "db_column_name": None,
            },
        )
    

    because Decimal has type decimal.Decimal, not piccolo.columns.column_types.Decimal.

    This probably hasn't been discovered yet because most people use Numeric which does not have this problem.

    Quick Fix

    Use an absolute import or import alias in generation.
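    For example, the generated migration could disambiguate the two imports with an alias (the alias name below is just illustrative):

    from decimal import Decimal                                   # stdlib value type
    from piccolo.columns.column_types import Decimal as Decimal_  # Piccolo column type

    The migration would then pass Decimal_ as column_class, while the default stays Decimal("0"), so the two names no longer clash.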

    Extensive Fix

    Although this certainly is an edge case, similar import conflicts may occur in the future. To future-proof new code one could, for example, check for conflicts during generation and fall back on absolute imports.

    Pro: Future-proof
    Con: Extra code complexity


    piccolo version == 0.57.0

    Full example

    broken_decimal/tables.py

    from piccolo.table import Table
    from piccolo.columns import Decimal, Numeric
    
    
    class BrokenDecimalAutoMigration(Table):
        # This breaks migration auto generation because of `Decimal` import conflict.
        dec_col = Decimal()
    
        # This works
        # dec_col = Numeric()
    

    piccolo migrations new broken_decimal --auto generates broken_decimal/piccolo_migrations/2021-10-15T19-10-02-685456.py

    from piccolo.apps.migrations.auto import MigrationManager
    from decimal import Decimal
    from piccolo.columns.column_types import Decimal
    from piccolo.columns.indexes import IndexMethod
    
    
    ID = "2021-10-15T19:10:02:685456"
    VERSION = "0.57.0"
    DESCRIPTION = ""
    
    
    async def forwards():
        manager = MigrationManager(
            migration_id=ID, app_name="broken_decimal", description=DESCRIPTION
        )
    
        manager.add_table(
            "BrokenDecimalAutoMigration", tablename="broken_decimal_auto_migration"
        )
    
        manager.add_column(
            table_class_name="BrokenDecimalAutoMigration",
            tablename="broken_decimal_auto_migration",
            column_name="dec_col",
            db_column_name="dec_col",
            column_class_name="Decimal",
            column_class=Decimal,
            params={
                "default": Decimal("0"),
                "digits": None,
                "null": False,
                "primary_key": False,
                "unique": False,
                "index": False,
                "index_method": IndexMethod.btree,
                "choices": None,
                "db_column_name": None,
            },
        )
    
        return manager
    

    piccolo migrations forwards broken_decimal then fails with the error message: The digits argument should be a tuple.

    opened by 0scarB 4
  • Reflection on a table with self-referencing ForeignKey does not complete

    Reflection on a table with self-referencing ForeignKey does not complete

    When running a reflection on a table that includes a self-referencing ForeignKey, such as the example below, the process hangs or never completes:

    class Musician(Table):
        name = Varchar(length=100)
        instructor = ForeignKey(references='self')
    

    After removing the foreign key, or replacing it with a reference to another table, the reflection completes as expected.

    opened by knguyen5 7
  • Explore Pydantic usage instead of dataclasses

    Explore Pydantic usage instead of dataclasses

    Piccolo uses dataclasses extensively, all over the code base.

    It's worth investigating if any noticeable performance improvement can be achieved by using Pydantic models instead.

    good first issue research 
    opened by dantownsend 2
  • suggestion: piccolo ecosystem in README

    suggestion: piccolo ecosystem in README

    Hi there. I think it would be a good idea to have a whole section in the README introducing the different tools in the ecosystem. It would make it much easier for newcomers to discover other parts of the Piccolo ecosystem.

    opened by AliSayyah 1
  • Upserting

    Upserting

    Attempts to create an upsert method (#252). I have tried using the update, insert and exists methods. Please review, @dantownsend.

    opened by AbhijithGanesh 9
  • Enhanced object creation, using column references

    Enhanced object creation, using column references

    When instantiating a Table object, we do this (like in 99% of ORMs):

    Band(name="Pythonistas", popularity=1000)
    

    Some Piccolo queries allow you to pass in a dictionary mapping column references to values, instead of using kwargs. It's nice for tab completion, and also for catching errors.

    Band.update({
        Band.popularity: 2000
    }).run_sync()
    

    It would be good to have this ability for instantiating objects too. Something like:

    Band(_data={Band.name: "Pythonistas", Band.popularity: 2000})
    
    enhancement good first issue 
    opened by dantownsend 0
  • After inserting a fixture, update the primary key sequences

    After inserting a fixture, update the primary key sequences

    After inserting a fixture:

    https://github.com/piccolo-orm/piccolo/blob/8ec9d10313c9ac44916f01e27c9ef58eb22ae0d9/piccolo/apps/fixtures/commands/load.py#L57-L62

    For each primary key column we should run something like:

    SELECT setval('my_table_id_seq', (SELECT MAX(id) FROM my_table));
    

    To make sure subsequent inserts don't fail due to unique constraint errors.
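    As a rough sketch (not the implemented fix), the fixtures load command could run something like the following for each table after inserting the rows, assuming the default Postgres "<table>_<column>_seq" sequence naming (MyTable is a placeholder):

    await MyTable.raw(
        "SELECT setval('my_table_id_seq', (SELECT MAX(id) FROM my_table))"
    ).run()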

    enhancement good first issue 
    opened by dantownsend 0
  • Add a `from_fixture` method to `BaseUser`

    Add a `from_fixture` method to `BaseUser`

    If you pass a password into the BaseUser constructor, it hashes it.

    user = BaseUser(username="bob", password="bob123")
    >>> user.password
    pbkdf2_sha256$10000$bb353d7fc92...
    

    This is the expected behaviour, but it's problematic if you're passing in an already hashed password (it will get hashed again).

    An example use case is fixtures - you dump the users table, and end up with data like this:

    {
        "user": {
            "BaseUser": [
                {
                    "id": 1,
                    "username": "bob",
                    "password": "pbkdf2_sha256$10000$abc123...",
                    "first_name": "",
                    "last_name": "",
                    "email": "[email protected]",
                    "active": true,
                    "admin": true,
                    "superuser": true,
                    "last_login": null
                },
            ]
        },
    

    We need a class method on BaseUser, called something like from_fixture (or some other, better name).

    class BaseUser(Table):
        @classmethod
        def from_fixture(cls, data: t.Dict[str, t.Any]) -> "BaseUser":
            # Create a BaseUser instance, but bypass the password hashing, as we can assume the password is already hashed.
            ...
    

    We should also raise a ValueError in the BaseUser constructor if someone tries to pass in an already hashed password.
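    As a rough illustration of that constructor check (a hypothetical helper, assuming hashed passwords carry the pbkdf2_sha256$ prefix shown in the fixture above):

    def validate_password(password: str) -> None:
        # Refuse passwords that already look hashed.
        if password.startswith("pbkdf2_sha256$"):
            raise ValueError(
                "The password appears to be hashed already - "
                "use from_fixture instead of the constructor."
            )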

    enhancement good first issue 
    opened by dantownsend 0
Releases (latest: 0.57.0)
  • 0.57.0(Oct 13, 2021)

  • 0.56.0(Oct 11, 2021)

    Fixed schema generation bug

    When using piccolo schema generate to auto generate Piccolo Table classes from an existing database, it would fail in this situation:

    • A table has a column with an index.
    • The column name clashed with a Postgres type.

    For example, we couldn't auto generate this Table class:

    class MyTable(Table):
        time = Timestamp(index=True)
    

    This is because time is a builtin Postgres type, and the CREATE INDEX statement being inspected in the database wrapped the column name in quotes, which broke our regex.

    Thanks to @knguyen5 for fixing this.

    Improved testing docs

    A convenience method called get_table_classes was added to Finder.

    Finder is the main class in Piccolo for dynamically importing projects / apps / tables / migrations etc.

    get_table_classes lets us easily get the Table classes for a project. This makes writing unit tests easier, when we need to set up a schema.

    from unittest import TestCase
    
    from piccolo.table import create_tables, drop_tables
    from piccolo.conf.apps import Finder
    
    TABLES = Finder().get_table_classes()
    
    class TestApp(TestCase):
        def setUp(self):
            create_tables(*TABLES)
    
        def tearDown(self):
            drop_tables(*TABLES)
    
        def test_app(self):
            # Do some testing ...
            pass
    

    The docs were updated to reflect this.

    When dropping tables in a unit test, remember to use piccolo tester run, to make sure the test database is used.

    get_output_schema

    get_output_schema is the main entrypoint for database reflection in Piccolo. It has been modified to accept an optional Engine argument, which makes it more flexible.

    Source code(tar.gz)
    Source code(zip)
  • 0.55.0(Oct 6, 2021)

    Table._meta.refresh_db

    Added the ability to refresh the database engine.

    MyTable._meta.refresh_db()
    

    This causes the Table to fetch the Engine again from your piccolo_conf.py file. The reason this is useful is that you might change the PICCOLO_CONF environment variable after some Table classes have already imported an engine. This is now used by the piccolo tester run command, to ensure all Table classes have the correct engine.
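    For example (a small sketch; the piccolo_conf_test module name is made up):

    import os

    # Point Piccolo at a different configuration module ...
    os.environ["PICCOLO_CONF"] = "piccolo_conf_test"

    # ... and make Table classes which already imported an engine pick up the new one.
    MyTable._meta.refresh_db()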

    ColumnMeta edge cases

    Fixed an edge case where ColumnMeta couldn't be copied if it had extra attributes added to it.

    Improved column type conversion

    When running migrations which change column types, Piccolo now provides the USING clause to the ALTER COLUMN DDL statement, which makes it more likely that type conversion will be successful.

    For example, if there is an Integer column, and it's converted to a Varchar column, the migration will run fine. In the past, running this in reverse would fail. Now Postgres will try and cast the values back to integers, which makes reversing migrations more likely to succeed.

    Added drop_tables

    There is now a convenience function for dropping several tables in one go. If the database doesn't support CASCADE, then the tables are sorted based on their ForeignKey columns, so they're dropped in the correct order. It all runs inside a transaction.

    from piccolo.table import drop_tables
    
    drop_tables(Band, Manager)
    

    This is a useful tool in unit tests.

    Index support in schema generation

    When using piccolo schema generate, Piccolo will now reflect the indexes from the database into the generated Table classes. Thanks to @wmshort for this.

    Source code(tar.gz)
    Source code(zip)
  • 0.54.0(Oct 5, 2021)

    Added the db_column_name option to columns. This is for edge cases where a legacy database is being used, with problematic column names. For example, if a column is called class, this clashes with a Python keyword, so the following isn't possible:

    class MyTable(Table):
        class = Varchar()  # Syntax error!
    

    You can now do the following:

    class MyTable(Table):
        class_ = Varchar(db_column_name='class')
    

    Here are some example queries using it:

    # Create - both work as expected
    MyTable(class_='Test').save().run_sync()
    MyTable.objects().create(class_='Test').run_sync()
    
    # Objects
    row = MyTable.objects().first().where(MyTable.class_ == 'Test').run_sync()
    >>> row.class_
    'Test'
    
    # Select
    >>> MyTable.select().first().where(MyTable.class_ == 'Test').run_sync()
    {'id': 1, 'class': 'Test'}
    
    Source code(tar.gz)
    Source code(zip)
  • 0.53.0(Sep 30, 2021)

    An internal code clean up (courtesy @yezz123).

    Dramatically improved CLI appearance when running migrations (courtesy @wmshort).

    Added a runtime reflection feature, where Table classes can be generated on the fly from existing database tables (courtesy @AliSayyah). This is useful when dealing with very dynamic databases, where tables are frequently being added / modified, so hard coding them in a tables.py file is impractical. Also, for exploring databases on the command line. It currently just supports Postgres.

    Here's an example:

    from piccolo.table_reflection import TableStorage
    
    storage = TableStorage()
    Band = await storage.get_table('band')
    >>> await Band.select().run()
    [{'id': 1, 'name': 'Pythonistas', 'manager': 1}, ...]
    
    Source code(tar.gz)
    Source code(zip)
  • 0.52.0(Sep 26, 2021)

    Lots of improvements to piccolo schema generate:

    • Dramatically improved performance, by executing more queries in parallel (courtesy @AliSayyah).
    • If a table in the database has a foreign key to a table in another schema, this will now work (courtesy @AliSayyah).
    • The column defaults are now extracted from the database (courtesy @wmshort).
    • The scale and precision values for Numeric / Decimal column types are extracted from the database (courtesy @wmshort).
    • The ON DELETE and ON UPDATE values for ForeignKey columns are now extracted from the database (courtesy @wmshort).

    Added BigSerial column type (courtesy @aliereno).

    Added GitHub issue templates (courtesy @AbhijithGanesh).

    Source code(tar.gz)
    Source code(zip)
  • 0.51.1(Sep 25, 2021)

  • 0.51.0(Sep 21, 2021)

    Modified create_pydantic_model, so JSON and JSONB columns have a format attribute of 'json'. This will be used by Piccolo Admin for improved JSON support. Courtesy @sinisaos.

    Fixed a bug where the piccolo fixtures load command wasn't registered with the Piccolo CLI.

    Source code(tar.gz)
    Source code(zip)
  • 0.50.0(Sep 20, 2021)

    There are lots of great improvements in this release:

    where clause changes

    The where clause can now accept multiple arguments (courtesy @AliSayyah):

    Concert.select().where(
        Concert.venue.name == 'Royal Albert Hall',
        Concert.band_1.name == 'Pythonistas'
    ).run_sync()
    

    It's another way of expressing AND. It's equivalent to both of these:

    Concert.select().where(
        Concert.venue.name == 'Royal Albert Hall'
    ).where(
        Concert.band_1.name == 'Pythonistas'
    ).run_sync()
    
    Concert.select().where(
        (Concert.venue.name == 'Royal Albert Hall') & (Concert.band_1.name == 'Pythonistas')
    ).run_sync()
    

    create method

    Added a create method, which is an easier way of creating objects (courtesy @AliSayyah).

    # This still works:
    band = Band(name="C-Sharps", popularity=100)
    band.save().run_sync()
    
    # But now we can do it in a single line using `create`:
    band = Band.objects().create(name="C-Sharps", popularity=100).run_sync()
    

    piccolo schema generate bug fix

    Fixed a bug with piccolo schema generate where columns with unrecognised column types were omitted from the output (courtesy @AliSayyah).

    --trace docs

    Added docs for the --trace argument, which can be used with Piccolo commands to get a traceback if the command fails (courtesy @hipertracker).

    DoublePrecision column type

    Added DoublePrecision column type, which is similar to Real in that it stores float values. However, those values are stored with greater precision (courtesy @AliSayyah).
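    For example (a minimal sketch; the table and column names are made up):

    from piccolo.columns import DoublePrecision, Real
    from piccolo.table import Table


    class Measurement(Table):
        approximate = Real()         # single precision float
        precise = DoublePrecision()  # double precision float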

    AppRegistry improvements

    Improved AppRegistry, so if a user only adds the app name (e.g. blog), instead of blog.piccolo_app, it will now emit a warning, and will try to import blog.piccolo_app (courtesy @aliereno).
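    For example, in piccolo_conf.py (a sketch; blog is a made-up app name):

    from piccolo.conf.apps import AppRegistry

    # Passing just "blog" now emits a warning and falls back to
    # importing "blog.piccolo_app" automatically.
    APP_REGISTRY = AppRegistry(apps=["blog.piccolo_app"])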

    Source code(tar.gz)
    Source code(zip)
  • 0.49.0(Sep 16, 2021)

    Fixed a bug with create_pydantic_model when used with a Decimal / Numeric column when no digits argument was set (courtesy @AliSayyah).

    Added the create_tables function, which accepts a sequence of Table subclasses, then sorts them based on their ForeignKey columns, and creates them. This is really useful for people who aren't using migrations (for example, when using Piccolo in a simple data science script). Courtesy @AliSayyah.

    from piccolo.table import create_tables
    
    create_tables(Band, Manager, if_not_exists=True)
    
    # Equivalent to:
    Manager.create_table(if_not_exists=True).run_sync()
    Band.create_table(if_not_exists=True).run_sync()
    

    Fixed typos with the new fixtures app - sometimes it was referred to as fixture and other times fixtures. It's now standardised as fixtures (courtesy @hipertracker).

    Source code(tar.gz)
    Source code(zip)
  • 0.48.0(Sep 15, 2021)

    The piccolo user create command can now be used without using the interactive prompt, by passing in command line arguments instead (courtesy @AliSayyah).

    For example piccolo user create --username=bob ....

    This is useful when you want to create users in a script.

    Source code(tar.gz)
    Source code(zip)
  • 0.47.0(Sep 14, 2021)

  • 0.46.0(Sep 14, 2021)

    Added the fixtures app. This is used to dump data from a database to a JSON file, and then reload it again. It's useful for seeding a database with essential data, whether that's a colleague setting up their local environment, or deploying to production.

    To create a fixture:

    piccolo fixtures dump --apps=blog > fixture.json
    

    To load a fixture:

    piccolo fixtures load fixture.json
    

    As part of this change, Piccolo's Pydantic support was brought into this library (prior to this it only existed within the piccolo_api library). At a later date, the piccolo_api library will be updated, so its Pydantic code just proxies to what's within the main piccolo library.

    Source code(tar.gz)
    Source code(zip)
  • 0.45.1(Sep 10, 2021)

    Improvements to piccolo schema generate. It's now smarter about which imports to include. Also, the generated Table classes will now be sorted based on their ForeignKey columns. Internally, the sorting algorithm has been changed to use the graphlib module, which was added in Python 3.9.
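    For illustration, the sorting boils down to a topological sort over ForeignKey dependencies, something like the following (the table names are made up):

    from graphlib import TopologicalSorter

    # Map each table to the tables it references via ForeignKey columns.
    dependencies = {
        "Band": {"Manager"},
        "Concert": {"Band", "Venue"},
        "Manager": set(),
        "Venue": set(),
    }

    # Referenced tables come out before the tables which depend on them,
    # e.g. ['Manager', 'Venue', 'Band', 'Concert'].
    print(list(TopologicalSorter(dependencies).static_order()))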

    Source code(tar.gz)
    Source code(zip)
  • 0.45.0(Sep 9, 2021)

    Added the piccolo schema graph command for visualising your database structure, which outputs a Graphviz file. It can then be turned into an image, for example:

    piccolo schema graph | dot -Tpdf -o graph.pdf
    

    Also made some minor changes to the ASGI templates, to reduce MyPy errors.

    Source code(tar.gz)
    Source code(zip)
  • 0.44.1(Sep 8, 2021)

    Updated to_dict so it works with nested objects, as introduced by the prefetch functionality in v0.44.0

    For example:

    band = Band.objects(Band.manager).first().run_sync()
    
    >>> band.to_dict()
    {'id': 1, 'name': 'Pythonistas', 'manager': {'id': 1, 'name': 'Guido'}}
    

    It also works with filtering:

    >>> band.to_dict(Band.name, Band.manager.name)
    {'name': 'Pythonistas', 'manager': {'name': 'Guido'}}
    
    Source code(tar.gz)
    Source code(zip)
  • 0.44.0(Sep 7, 2021)

    Added the ability to prefetch related objects. Here's an example:

    band = await Band.objects(Band.manager).run()
    >>> band.manager
    <Manager: 1>
    

    If a table has a lot of ForeignKey columns, there's a useful shortcut, which will return all of the related rows as objects.

    concert = await Concert.objects(Concert.all_related()).run()
    >>> concert.band_1
    <Band: 1>
    >>> concert.band_2
    <Band: 2>
    >>> concert.venue
    <Venue: 1>
    

    Thanks to @wmshort for all the input.

    Source code(tar.gz)
    Source code(zip)
  • 0.43.0(Sep 2, 2021)

  • 0.42.0(Sep 1, 2021)

    You can now use all_columns at the root. For example:

    await Band.select(
        Band.all_columns(),
        Band.manager.all_columns()
    ).run()
    

    You can also exclude certain columns if you like:

    await Band.select(
        Band.all_columns(exclude=[Band.id]),
        Band.manager.all_columns(exclude=[Band.manager.id])
    ).run()
    
    Source code(tar.gz)
    Source code(zip)
  • 0.41.1(Aug 31, 2021)

    Fixes a regression where if multiple tables are created in a single migration file, it could potentially fail by applying them in the wrong order.

    Source code(tar.gz)
    Source code(zip)
  • 0.41.0(Aug 31, 2021)

    Fixed a bug where if all_columns was used two or more levels deep, it would fail. Thanks to @wmshort for reporting this issue.

    Here's an example:

    Concert.select(
        Concert.venue.name,
        *Concert.band_1.manager.all_columns()
    ).run_sync()
    

    Also, the ColumnsDelegate has now been tweaked, so unpacking of all_columns is optional.

    # This now works the same as the code above (we have omitted the *)
    Concert.select(
        Concert.venue.name,
        Concert.band_1.manager.all_columns()
    ).run_sync()
    
    Source code(tar.gz)
    Source code(zip)
  • 0.40.1(Aug 30, 2021)

  • 0.40.0(Aug 29, 2021)

    Added nested output option, which makes the response from a select query use nested dictionaries:

    >>> await Band.select(Band.name, *Band.manager.all_columns()).output(nested=True).run()
    [{'name': 'Pythonistas', 'manager': {'id': 1, 'name': 'Guido'}}]
    

    Thanks to @wmshort for the input.

    Source code(tar.gz)
    Source code(zip)
  • 0.39.0(Aug 28, 2021)

    Added to_dict method to Table.

    If you just use __dict__ on a Table instance, you get some non-column values. By using to_dict it's just the column values. Here's an example:

    class MyTable(Table):
        name = Varchar()
    
    instance = MyTable.objects().first().run_sync()
    
    >>> instance.__dict__
    {'_exists_in_db': True, 'id': 1, 'name': 'foo'}
    
    >>> instance.to_dict()
    {'id': 1, 'name': 'foo'}
    

    Thanks to @wmshort for the idea, and @aminalaee and @sinisaos for investigating edge cases.

    Source code(tar.gz)
    Source code(zip)
  • 0.38.2(Aug 26, 2021)

  • 0.38.1(Aug 26, 2021)

    Minor changes to get_or_create to make sure it handles joins correctly.

    instance = (
        Band.objects()
        .get_or_create(
            (Band.name == "My new band")
            & (Band.manager.name == "Excellent manager")
        )
        .run_sync()
    )
    

    In this situation, there are two columns called 'name' - we need to make sure the correct value is applied if the row doesn't exist.

    Source code(tar.gz)
    Source code(zip)
  • 0.38.0(Aug 25, 2021)

    get_or_create now supports more complex where clauses. For example:

      row = await Band.objects().get_or_create(
          (Band.name == 'Pythonistas') & (Band.popularity == 1000)
      ).run()
    

    And you can find out whether the row was created or not using row._was_created.
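    For instance, continuing the query above:

    if row._was_created:
        print("A new band was inserted")
    else:
        print("This band already existed")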

    Thanks to @wmshort for reporting this issue.

    Source code(tar.gz)
    Source code(zip)
  • 0.37.0(Aug 25, 2021)

  • 0.36.0(Aug 25, 2021)

    Fixed an issue where like and ilike clauses required a wildcard (%). For example:

    await Manager.select().where(Manager.name.ilike('Guido%')).run()
    

    You can now omit wildcards if you like:

    await Manager.select().where(Manager.name.ilike('Guido')).run()
    

    Which would match on 'guido' and 'Guido', but not 'Guidoxyz'.

    Thanks to @wmshort for reporting this issue.

    Source code(tar.gz)
    Source code(zip)
  • 0.35.0(Aug 25, 2021)

    • Improved PrimaryKey deprecation warning (courtesy @tonybaloney).
    • Added piccolo schema generate which creates a Piccolo schema from an existing database.
    • Added piccolo tester run which is a wrapper around pytest, and temporarily sets PICCOLO_CONF, so a test database is used.
    • Added the get convenience method (courtesy @aminalaee). It returns the first matching record, or None if there's no match. For example:
    manager = await Manager.objects().get(Manager.name == 'Guido').run()
    
    # This is equivalent to:
    manager = await Manager.objects().where(Manager.name == 'Guido').first().run()
    
    Source code(tar.gz)
    Source code(zip)