A small, expressive ORM -- supports PostgreSQL, MySQL and SQLite

Overview


peewee

Peewee is a simple and small ORM. It has few (but expressive) concepts, making it easy to learn and intuitive to use.

  • a small, expressive ORM
  • Python 2.7+ and 3.4+ (developed with 3.6)
  • supports SQLite, MySQL, PostgreSQL and CockroachDB
  • tons of extensions

New to peewee? The quickstart guide and the example twitter app in the documentation are good places to start.

Examples

Defining models is similar to Django or SQLAlchemy:

from peewee import *
import datetime


db = SqliteDatabase('my_database.db')

class BaseModel(Model):
    class Meta:
        database = db

class User(BaseModel):
    username = CharField(unique=True)

class Tweet(BaseModel):
    user = ForeignKeyField(User, backref='tweets')
    message = TextField()
    created_date = DateTimeField(default=datetime.datetime.now)
    is_published = BooleanField(default=True)

Connect to the database and create tables:

db.connect()
db.create_tables([User, Tweet])

Create a few rows:

charlie = User.create(username='charlie')
huey = User(username='huey')
huey.save()

# No need to set `is_published` or `created_date` since they
# will just use the default values we specified.
Tweet.create(user=charlie, message='My first tweet')

Queries are expressive and composable:

# A simple query selecting a user.
User.get(User.username == 'charlie')

# Get tweets created by one of several users.
usernames = ['charlie', 'huey', 'mickey']
users = User.select().where(User.username.in_(usernames))
tweets = Tweet.select().where(Tweet.user.in_(users))

# We could accomplish the same using a JOIN:
tweets = (Tweet
          .select()
          .join(User)
          .where(User.username.in_(usernames)))

# How many tweets were published today?
tweets_today = (Tweet
                .select()
                .where(
                    (Tweet.created_date >= datetime.date.today()) &
                    (Tweet.is_published == True))
                .count())

# Paginate the user table and show me page 3 (users 41-60).
User.select().order_by(User.username).paginate(3, 20)

# Order users by the number of tweets they've created:
tweet_ct = fn.Count(Tweet.id)
users = (User
         .select(User, tweet_ct.alias('ct'))
         .join(Tweet, JOIN.LEFT_OUTER)
         .group_by(User)
         .order_by(tweet_ct.desc()))

# Do an atomic update (update queries are not run until execute() is called).
Counter.update(count=Counter.count + 1).where(Counter.url == request.url).execute()

Check out the example twitter app.

Learning more

Check the documentation for more examples.

Specific question? Come hang out in the #peewee channel on irc.freenode.net, or post to the mailing list at http://groups.google.com/group/peewee-orm. If you would like to report a bug, create a new issue on GitHub.

Still want more info?


I've written a number of blog posts about building applications and web-services with peewee (and usually Flask). If you'd like to see some real-life applications that use peewee, those posts are a good place to look.

Issues
  • Sqlite database locked - regression introduced in 2.4.5?


    Below an excerpt from my CherryPy app.

    The app receives bursts of ajax calls to cncprogram_done. Sometimes just 1 or 2 calls, sometimes up to 10 or 20 at a time (I will eventually optimize it, but for now I'm OK with it).

    I just upgraded to Peewee 2.8.3 and only the first call of each burst works; the other calls fail with peewee.OperationalError: database is locked while executing part.save().

    I tried reverting to older versions and found that Peewee 2.4.4 works well while 2.4.5 fails.

    I reverted back to 2.4.4 on the production server and everything seems to work fine.

    Any idea whether I am doing something wrong, or is this really a regression? Where should I start investigating?

    Here is (part of) the code:

    db = peewee.SqliteDatabase(path_name + '/doc.db', threadlocals=True)
    
    class PeeweeModel(peewee.Model):
        class Meta:
            database = db
    
    class CncProgramPart(PeeweeModel):
        sheet = peewee.ForeignKeyField(CncProgramSheet, related_name='parts')
        part_number = peewee.CharField()
        time_finished = peewee.DateTimeField(default=0)
        remake = peewee.BooleanField(default=False)
        comment = peewee.CharField(default='')
    
        def render(self, settings):
            return render('cncprogram_edit_part_row.html',
                          {'part': self, 'settings': settings})
    
    class DocFinder:
    
        @cherrypy.expose
        def cncprogram_done(self, rowid, value):
    
            with db.transaction():
                checked = value == 'true'
                now = time.time()
                part = CncProgramPart.get(CncProgramPart.id == rowid[9:])
    
                if checked and not part.remake and not part.comment:
                    part.time_finished = now
                    part.comment = ''
                    part.remake = False
                    part.save()
                elif part.time_finished:
                    part.time_finished = 0
                    part.save()
    
                return part.render(Settings())
    
    opened by stenci 40
  • Documentation Improvement - Master Issue


    The purpose of this issue is to collect suggestions for how the documentation might be improved.

    Some of the things I'm wondering:

    • Is it easy to find the information you're looking for?
    • Does the organization of the sections and subsections make it easy to find what you need?
    • Are some sections too verbose? Too terse?
    • Do we need more examples?
    • Is there something you frequently find that you have to look-up or are confused by?
    opened by coleifer 31
  • Losing MySQL connections even though using Pool etc.


    Hi, I'm writing a Python Thrift service and using Peewee as my ORM. Our DB is on a MySQL server. I've had many different problems with "MySQL server going away", and googling turned up two tips: a) use the playhouse connection pool; b) after connecting, call database.get_conn().ping(True) (though I feel like I'd need to add that inside the pool code?).

    So right now I'm using a pool both for write-connections as well as the read-slaves. I experimented with a bunch of different values for the pool parameters, but at the moment I am at:

    max_connections=100, connect_timeout=20, stale_timeout=60, threadlocals=True
    

    The problem is that I still regularly see the connection go away during an operation, or between operations. That would be fine, except that Peewee then completely fails to reconnect, even though that is exactly what the pool should handle. E.g. this morning I got "OperationalError: (2006, "MySQL server has gone away (error(32, 'Broken pipe'))")" when trying to run something on my service, and, alarmingly, subsequent requests kept failing too, i.e. Peewee did not even try to reconnect. The MySQL server was up and running the whole time, and simply restarting the service fixed it.

    From time to time I also get "OperationalError: (2013, "Lost connection to MySQL server during query (timeout('timed out',))")", which is probably a server issue, but I'd really like Peewee to reconnect automatically so that I don't have to wrap every single Peewee call in a manual "try/except/retry".

    Can you help me make sure my queries don't fail, and, if they do, how to reconnect? Also, how could that "server gone away" happen even after resending the query? I've attached two example stack traces below.

    Thanks, Max

      File "/var/www/model.py", line 216, in find_all_general
        total = query.select(Resource).count()
      File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 2366, in count
        return self.wrapped_count(clear_limit=clear_limit)
      File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 2379, in wrapped_count
        return rq.scalar() or 0
      File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 2135, in scalar
        row = self._execute().fetchone()
      File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 2126, in _execute
        return self.database.execute_sql(sql, params, self.require_commit)
      File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 2728, in execute_sql
        self.commit()
      File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 2597, in __exit__
        reraise(new_type, new_type(*exc_value.args), traceback)
      File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 2720, in execute_sql
        cursor.execute(sql, params or ())
      File "/usr/local/lib/python2.7/dist-packages/pymysql/cursors.py", line 132, in execute
        result = self._query(query)
      File "/usr/local/lib/python2.7/dist-packages/pymysql/cursors.py", line 271, in _query
        conn.query(q)
      File "/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 725, in query
        self._execute_command(COM_QUERY, sql)
      File "/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 888, in _execute_command
        self._write_bytes(prelude + sql[:chunk_size-1])
      File "/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 848, in _write_bytes
        raise OperationalError(2006, "MySQL server has gone away (%r)" % (e,))
    OperationalError: (2006, "MySQL server has gone away (error(32, 'Broken pipe'))")
    
    
      File "/Users/max/code/content-service/venv-cs/lib/python2.7/site-packages/peewee.py", line 3385, in get
        return sq.get()
      File "/Users/max/code/content-service/venv-cs/lib/python2.7/site-packages/peewee.py", line 2389, in get
        return clone.execute().next()
      File "/Users/max/code/content-service/venv-cs/lib/python2.7/site-packages/peewee.py", line 2430, in execute
        self._qr = ResultWrapper(model_class, self._execute(), query_meta)
      File "/Users/max/code/content-service/venv-cs/lib/python2.7/site-packages/peewee.py", line 2126, in _execute
        return self.database.execute_sql(sql, params, self.require_commit)
      File "/Users/max/code/content-service/venv-cs/lib/python2.7/site-packages/peewee.py", line 2728, in execute_sql
        self.commit()
      File "/Users/max/code/content-service/venv-cs/lib/python2.7/site-packages/peewee.py", line 2597, in __exit__
        reraise(new_type, new_type(*exc_value.args), traceback)
      File "/Users/max/code/content-service/venv-cs/lib/python2.7/site-packages/peewee.py", line 2720, in execute_sql
        cursor.execute(sql, params or ())
      File "/Users/max/code/content-service/venv-cs/lib/python2.7/site-packages/pymysql/cursors.py", line 132, in execute
        result = self._query(query)
      File "/Users/max/code/content-service/venv-cs/lib/python2.7/site-packages/pymysql/cursors.py", line 271, in _query
        conn.query(q)
      File "/Users/max/code/content-service/venv-cs/lib/python2.7/site-packages/pymysql/connections.py", line 726, in query
        self._affected_rows = self._read_query_result(unbuffered=unbuffered)
      File "/Users/max/code/content-service/venv-cs/lib/python2.7/site-packages/pymysql/connections.py", line 861, in _read_query_result
        result.read()
      File "/Users/max/code/content-service/venv-cs/lib/python2.7/site-packages/pymysql/connections.py", line 1064, in read
        first_packet = self.connection._read_packet()
      File "/Users/max/code/content-service/venv-cs/lib/python2.7/site-packages/pymysql/connections.py", line 825, in _read_packet
        packet = packet_type(self)
      File "/Users/max/code/content-service/venv-cs/lib/python2.7/site-packages/pymysql/connections.py", line 242, in __init__
        self._recv_packet(connection)
      File "/Users/max/code/content-service/venv-cs/lib/python2.7/site-packages/pymysql/connections.py", line 248, in _recv_packet
        packet_header = connection._read_bytes(4)
      File "/Users/max/code/content-service/venv-cs/lib/python2.7/site-packages/pymysql/connections.py", line 838, in _read_bytes
        2013, "Lost connection to MySQL server during query (%r)" % (e,))
    OperationalError: (2013, "Lost connection to MySQL server during query (timeout('timed out',))")
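
    One approach that peewee itself later shipped for this class of problem is to combine the playhouse connection pool with playhouse.shortcuts.ReconnectMixin (added in peewee 3.8.2). The sketch below is illustrative only; the subclass name and connection parameters are assumptions, not taken from the issue.

    # Hedged sketch: a pooled MySQL database that retries once when the server
    # reports "has gone away". Requires peewee 3.8.2+ for ReconnectMixin.
    from playhouse.pool import PooledMySQLDatabase
    from playhouse.shortcuts import ReconnectMixin

    class ReconnectPooledMySQLDatabase(ReconnectMixin, PooledMySQLDatabase):
        pass

    db = ReconnectPooledMySQLDatabase(
        'my_database',         # hypothetical database name
        user='app', password='secret', host='localhost',
        max_connections=8,     # a small pool is usually enough
        stale_timeout=300)     # recycle connections idle for 5+ minutes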
    
    opened by cpury 31
  • Proposal for plugins support


    There is currently no easy way to build third party plugins within the same namespace as peewee, aside from monkeypatching.

    Projects such as pytest allow plugins to be registered into the pytest namespace, giving a much cleaner syntax, for example:

    from pytest_raisesregexp import raisesregexp # without plugins
    from pytest import raises_regexp # with plugins
    

    This is apparently achieved via entry_points, which allows the plugin to interact with pytest via hooks. In particular, a plugin can access pytest_namespace, which allows manipulation of the namespace.

    The use case in this situation would be for providing third party plugins which are not suitable for the core or playhouse, whilst keeping everything in a single namespace.

    from peewee_dbmanager import DBManager # without plugins
    from peewee.dbmanager import DBManager # with plugins OR
    from peewee import DBManager # with plugins
    

    This could also open the door to cleaning up Playhouse, allowing each of the extensions to be segregated into its own plugin and keeping the core repo clean and lean.

    Thoughts @coleifer ?
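
    A sketch of what registration could look like with setuptools entry points, assuming a hypothetical "peewee.plugins" group (peewee does not define such a group; the names below are illustrative):

    # setup.py for a hypothetical third-party plugin
    from setuptools import setup

    setup(
        name='peewee-dbmanager',
        version='0.1.0',
        py_modules=['peewee_dbmanager'],
        entry_points={
            # hypothetical group that peewee would scan at import time and
            # expose as peewee.dbmanager / peewee.DBManager
            'peewee.plugins': [
                'dbmanager = peewee_dbmanager:DBManager',
            ],
        },
    )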

    opened by foxx 28
  • Allow to define database dynamically


    Peewee is awesome :+1: :)

    I'm using Peewee in my Tornado-powered project and I need to define the database connection dynamically, like this:

    import tornado
    from peewee import *
    
    class Application(tornado.web.Application):
    
        def __init__(self, **kwargs):
            # Some init stuff ...
    
            # Setup DB and Models
    
            self.database = PostgresqlDatabase('mydb', user='postgres')
    
            import myapp.users.models as users
            self.User = self._get_model(users.User, self.database)
    
            # etc...
    
        def _get_model(self, model, db):
            model._meta.database = db
            model.create_table(True)
            return model
    

    So I'm using the private, undocumented _meta property, which I don't feel OK about, and I'm not sure what the best design decision is here.

    Anybody have ideas?
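
    One documented way to avoid touching _meta is to point the models at a peewee Proxy and initialize it once the real settings are known. A minimal sketch (the model and connection parameters are illustrative):

    from peewee import CharField, Model, PostgresqlDatabase, Proxy

    database_proxy = Proxy()  # placeholder, initialized at start-up

    class BaseModel(Model):
        class Meta:
            database = database_proxy

    class User(BaseModel):
        username = CharField(unique=True)

    # Later, e.g. in Application.__init__, once the settings are available:
    database_proxy.initialize(PostgresqlDatabase('mydb', user='postgres'))
    database_proxy.connect()
    User.create_table(True)  # safe / fail_silently flag, as in the snippet above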

    opened by rudyryk 26
  • ImproperlyConfigured: MySQLdb must be installed.


    I am getting this any time I try to create a table or do anything with a connected DB, although I installed the MySQL-python lib and it appears to work.
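
    If MySQLdb (the MySQL-python package) cannot be imported, peewee will also try pymysql, so installing that is often the quickest fix. A hedged sketch (the connection parameters are illustrative):

    # pip install pymysql
    from peewee import MySQLDatabase

    db = MySQLDatabase('my_database', user='root', password='secret',
                       host='localhost', port=3306)
    db.connect()  # ImproperlyConfigured is raised only if no MySQL driver can be imported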

    opened by Casyfill 24
  • Allow asynchronous queries


    A hard one, as peewee has been built on top of synchronous DB drivers until now, but with Python 3.4 coming and shipping with asyncio + yield from, this could be interesting for Python 3 users.

    opened by sametmax 23
  • get_or_create failing


    Hello,

    So get_or_create works fine millions of times, but sometimes it crashes on what is a legal select statement! I'm using the newest peewee ...

    As a test I would like to try the get and the create manually, but I am unsure how to write that.

    Here is some of what I have found. The table (aliased "t1" in the query below) has a primary key named "code"; there is no "id" defined on the table. If I run the select below in psql it works fine, but for some reason peewee crashes on it ...

    Naturally I have a dictionary with the value {'code': 'LR091816'}.
    How would I do a select manually in peewee, passing in this dictionary? I figure it would be a good test. I tried model.select().where(d) but that did not seem to work.
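
    A minimal sketch of doing the "get" half by hand from a dict of filters (the model name Game is assumed from the generated SQL in the log below; the rest is standard peewee):

    filters = {'code': 'LR091816'}

    query = Game.select()
    for field_name, value in filters.items():
        query = query.where(getattr(Game, field_name) == value)

    try:
        game = query.get()               # the "get" half of get_or_create
    except Game.DoesNotExist:
        game = Game.create(**filters)    # the "create" half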

    Here is the log ...

    ('SELECT "t1"."code" FROM "game" AS "t1" WHERE ("t1"."code" = %s) LIMIT %s OFFSET %s', [u'LR091816', 1, 0])
    * get create failed!
    Traceback (most recent call last):
      File "/home/anon/Desktop/test/precast_ingest_util.py", line 146, in create_or_increment
        model.get_or_create(**kwargs)  # the select fails
      File "/home/anon/miniconda2/lib/python2.7/site-packages/peewee.py", line 5344, in get_or_create
        return query.get(), False
      File "/home/anon/miniconda2/lib/python2.7/site-packages/peewee.py", line 5705, in get
        return clone.execute()[0]
      File "/home/anon/miniconda2/lib/python2.7/site-packages/peewee.py", line 1522, in inner
        return method(self, database, *args, **kwargs)
      File "/home/anon/miniconda2/lib/python2.7/site-packages/peewee.py", line 1593, in execute
        return self._execute(database)
      File "/home/anon/miniconda2/lib/python2.7/site-packages/peewee.py", line 1747, in _execute
        cursor = database.execute(self)
      File "/home/anon/miniconda2/lib/python2.7/site-packages/peewee.py", line 2590, in execute
        return self.execute_sql(sql, params, commit=commit)
      File "/home/anon/miniconda2/lib/python2.7/site-packages/peewee.py", line 2584, in execute_sql
        self.commit()
      File "/home/anon/miniconda2/lib/python2.7/site-packages/peewee.py", line 2377, in __exit__
        reraise(new_type, new_type(*exc_args), traceback)
      File "/home/anon/miniconda2/lib/python2.7/site-packages/peewee.py", line 2577, in execute_sql
        cursor.execute(sql, params or ())
    InternalError: current transaction is aborted, commands ignored until end of transaction block

    current transaction is aborted, commands ignored until end of transaction block

    {'code': 'LR091816'}

    opened by ra-esmith 23
  • pwiz not generating foreign key fields for postgres


    Hi @coleifer

    I have the same issue as https://github.com/coleifer/peewee/issues/207, but with postgres.

    I used the command python -m pwiz -e postgresql -u USERNAME -P DATABASE -H HOST -p PORT -s SCHEMA > model.py

    My model did not contain any "ForeignKeyField"

    Any thoughts? Does pwiz create foreign keys for postgres?

    opened by Mak-NOAA 22
  • I want to change order of table creation for create_tables()


    Basically, I need to make Peewee know that some of my tables are inherited from other tables, and this means that the «parents» should be created before «children».

    Can you please move some logic out of sort_models_topologically(), so that I could override it in my classes and customize the list of a table's dependencies (which currently includes only the tables it references via foreign keys)? Or could you support this kind of inheritance in Peewee natively?
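
    A workaround sketch, not a peewee feature: skip create_tables() and create the tables one at a time in whatever order the inheritance scheme requires (the model names are illustrative; safe= is the peewee 3.x keyword, older versions use fail_silently=):

    parents_first = [Organisation, Department, Employee]  # hypothetical models, parents before children

    with db.atomic():
        for model in parents_first:
            model.create_table(safe=True)  # no-op if the table already exists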

    opened by maaaks 21
  • Support CTE in ModelInsert?


    I have the following use case. I have a table tic_v8 from which I want to get the columns ra and dec and insert them into a table catalog, also assigning a serial primary key (catalogid). In the same query I want to retrieve the newly assigned catalogids and tic_v8.id and add them to a relational table catalog_to_tic_v8. My real query is a bit more complicated but this should illustrate.

    The only way I've found to do this (I'd be interested to know if there is a better way) is to first populate the relational table with a sequential catalogid and then populate catalog. I can do this in SQL with a couple of CTEs:

    WITH x AS (SELECT * FROM tic_v8 ORDER BY id ASC), 
    	y AS (INSERT INTO catalog_to_tic_v8 (catalogid, target_id) SELECT ROW_NUMBER() OVER(), id FROM x ORDER BY x.id RETURNING catalogid, target_id) 
    INSERT INTO catalog (catalogid, ra, dec) SELECT catalogid, t.ra, t.dec FROM y join tic_v8 t ON y.target_id = t.id;
    

    (I understand that for this one-to-one relationship I would not need a relational table, but in a different query I'll need to populate this table with one-to-many cases).

    In Peewee, if I try to do something like this

    x = TIC.select().cte('x')
    y = CatalogToTIC_v8.insert_from(x.select(fn.row_number().over(), x.c.id).order_by(x.c.id.asc()), [CatalogToTIC_v8.catalogid, CatalogToTIC_v8.target_id]).returning(CatalogToTIC_v8).cte('y')
    q = Catalog.insert_from(y.select(y.c.catalogid, TIC.ra, TIC.dec).join(TIC, on=y.c.target_id==TIC.id), [Catalog.catalogid, Catalog.ra, Catalog.dec]).returning().with_cte(x, y)
    

    that would fail because a ModelInsert cannot be a CTE. PostgreSQL allows it and actually I can do

    y = peewee.CTE('y', CatalogToTIC_v8.insert_from(x.select(fn.row_number().over(), x.c.id).order_by(x.c.id.asc()), [CatalogToTIC_v8.catalogid, CatalogToTIC_v8.target_id]).returning(CatalogToTIC_v8))
    q = Catalog.insert_from(y.select(y.c.catalogid, TIC.ra, TIC.dec).join(TIC, on=y.c.target_id==TIC.id), [Catalog.catalogid, Catalog.ra, Catalog.dec]).returning().with_cte(x, y)
    

    Would it make sense to add the .cte() method to the ModelInsert?

    opened by albireox 0
Releases (latest: 3.14.4)
  • 3.14.4(Mar 19, 2021)

    This release contains an important fix for a regression introduced by commit ebe3ad5, which affected the way model instances are converted to parameters for use in expressions within a query. The bug could manifest when code uses model instances as parameters in expressions against fields that are not foreign-keys.

    The issue is described in #2376.

  • 3.14.3(Mar 11, 2021)

    This release contains a single fix for ensuring NULL values are inserted when issuing a bulk-insert of heterogeneous dictionaries which may be missing explicit NULL values. Fixes issue #2638.

  • 3.14.2(Mar 4, 2021)

    This is a small release mainly to get some fixes out.

    • Support for named Check and foreign-key constraints.
    • Better foreign-key introspection for CockroachDB (and Postgres).
    • Register UUID adapter for Postgres.
    • Add fn.array_agg() to blacklist for automatic value coercion.

  • 3.14.1(Feb 7, 2021)

    This release contains primarily bugfixes.

    • Properly delegate to a foreign-key field's db_value() function when converting model instances. #2304.
    • Strip quote marks and parentheses from column names returned by sqlite cursor when a function-call is projected without an alias. #2305.
    • Fix DataSet.create_index() method, #2319.
    • Fix column-to-model mapping in model-select from subquery with joins, #2320.
    • Improvements to foreign-key lazy-loading thanks @conqp, #2328.
    • Preserve and handle CHECK() constraints in Sqlite migrator, #2343.
    • Add stddev aggregate function to collection of sqlite user-defined funcs.

  • 3.14.0(Nov 7, 2020)

    This release has been a bit overdue and there are numerous small improvements and bug-fixes. The bugfix that prompted this release is #2293, which is a regression in the Django-inspired .filter() APIs that could cause some filter expressions to be discarded from the generated SQL. Many thanks for the excellent bug report, Jakub.

    • Add an experimental helper, shortcuts.resolve_multimodel_query(), for resolving multiple models used in a compound select query.
    • Add a lateral() method to select query for use with lateral joins, refs issue #2205.
    • Added support for nested transactions (savepoints) in cockroach-db (requires 20.1 or newer).
    • Automatically escape wildcards passed to string-matching methods, refs #2224.
    • Allow index-type to be specified on MySQL, refs #2242.
    • Added a new API, converter() to be used for specifying a function to use to convert a row-value pulled off the cursor, refs #2248.
    • Add set() and clear() method to the bitfield flag descriptor, refs #2257.
    • Add support for range types with IN and other expressions.
    • Support CTEs bound to compound select queries, refs #2289.

    Bug-fixes

    • Fix to return related object id when accessing via the object-id descriptor, when the related object is not populated, refs #2162.
    • Fix to ensure we do not insert a NULL value for a primary key.
    • Fix to conditionally set the field/column on an added column in a migration, refs #2171.
    • Apply field conversion logic to model-class values. Relocates the logic from issue #2131 and fixes #2185.
    • Clone node before modifying it to be flat in an enclosed nodelist expr, fixes issue #2200.
    • Fix an invalid item assignment in nodelist, refs #2220.
    • Fix an incorrect truthiness check used with save() and only=, refs #2269.
    • Fix regression in filter() where using both *args and **kwargs caused the expressions passed as args to be discarded. See #2293.

  • 3.13.3(Apr 24, 2020)

    • Allow arbitrary keyword arguments to be passed to the DataSet constructor, which are then passed to the introspector.
    • Allow scalar subqueries to be compared using numeric operands.
    • Fix bulk_create() when model being inserted uses FK identifiers.
    • Fix bulk_update() so that PK values are properly coerced to the right data-type (e.g. UUIDs to strings for Sqlite).
    • Allow array indices to be used as dict keys, e.g. for the purposes of updating a single array index value.

  • 3.13.2(Mar 27, 2020)

    • Allow aggregate functions to support an ORDER BY clause, via the addition of an order_by() method to the function (fn) instance. Refs #2094.
    • Fix prefetch() bug, where related "backref" instances were marked as dirty, even though they had no changes. Fixes #2091.
    • Support LIMIT 0. Previously a limit of 0 would be translated into effectively an unlimited query on MySQL. References #2084.
    • Support indexing into arrays using expressions with Postgres array fields. References #2085.
    • Ensure postgres introspection methods return the columns for multi-column indexes in the correct order. Fixes #2104.
    • Add support for arrays of UUIDs to postgres introspection.
    • Fix introspection of columns w/capitalized table names in postgres (#2110).
    • Fix to ensure correct exception is raised in SqliteQueueDatabase when iterating over cursor/result-set.
    • Fix bug comparing subquery against a scalar value. Fixes #2118.
    • Fix issue resolving composite primary-keys that include foreign-keys when building the model-graph. Fixes #2115.
    • Allow model-classes to be passed as arguments, e.g., to a table function. Refs #2131.
    • Ensure postgres JSONField.concat() accepts expressions as arguments.

  • 3.13.1(Dec 6, 2019)

    Fix a regression when specifying keyword arguments to the atomic() or transaction() helper methods. Note: this only occurs if you were using Sqlite and were explicitly setting the lock_type= parameter.

  • 3.13.0(Dec 6, 2019)

    CockroachDB support added

    This will be a notable release as it adds support for CockroachDB, a distributed, horizontally-scalable SQL database.

    Other features and fixes

    • Allow FOR UPDATE clause to specify one or more tables (FOR UPDATE OF...).
    • Support for Postgres LATERAL join.
    • Properly wrap exceptions raised during explicit commit/rollback in the appropriate peewee-specific exception class.
    • Capture original exception object and expose it as exc.orig on the wrapped exception.
    • Properly introspect SMALLINT columns in Postgres schema reflection.
    • More flexible handling of passing database-specific arguments to atomic() and transaction() context-manager/decorator.
    • Fix non-deterministic join ordering issue when using the filter() API across several tables (#2063).

  • 3.12.0(Nov 24, 2019)

    • Bulk insert (insert_many() and insert_from()) will now return the row count instead of the last insert ID. On Postgres, peewee will by default continue to return a cursor that iterates over the newly-inserted primary-key values, for compatibility; Postgres users can specify an empty returning() call to disable the cursor and get the row count instead (see the sketch after this list).
    • Migration extension now supports altering a column's data-type, via the new alter_column_type() method.
    • Added Database.is_connection_usable() method, which attempts to look at the status of the underlying DB-API connection to determine whether the connection is usable.
    • Common table expressions include a materialized parameter, which can be used to control Postgres' optimization fencing around CTEs.
    • Added BloomFilter.from_buffer() method for populating a bloom-filter from the output of a previous call to the to_buffer() method.
    • Fixed APSW extension's commit() and rollback() methods to no-op if the database is in auto-commit mode.
    • Added generate_always= option to the IdentityField (defaults to False).
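
    A short sketch of the bulk-insert change mentioned in the first item above (the model and data are illustrative):

    data = [{'username': 'charlie'}, {'username': 'huey'}]

    # On most databases this now returns the number of rows inserted.
    rowcount = User.insert_many(data).execute()

    # On Postgres the default is still a cursor over the new primary keys;
    # calling returning() with no arguments opts in to the row count instead.
    rowcount = User.insert_many(data).returning().execute()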

  • 3.11.2(Sep 24, 2019)

  • 3.11.1(Sep 23, 2019)

  • 3.11.0(Sep 19, 2019)

    • Fixes #1991. This particular issue involves joining 3 models together in a chain, where the outer two models are empty. Previously peewee would make the middle model an empty model instance (since a link might be needed from the source model to the outermost model). But since both were empty, it is more correct to make the intervening model a NULL value on the foreign-key field rather than an empty instance.
    • An unrelated fix came out of the work on #1991 where hashing a model whose primary-key happened to be a foreign-key could trigger the FK resolution query. This patch fixes the Model._pk and get_id() interfaces so they no longer introduce the possibility of accidentally resolving the FK.
    • Allow Field.contains(), startswith() and endswith() to compare against another column-like object or expression.
    • Workaround for MySQL prior to 8 and MariaDB handling of union queries inside of parenthesized expressions (like IN).
    • Be more permissive in letting invalid values be stored in a field whose type is INTEGER or REAL, since Sqlite allows this.
    • TimestampField resolution cleanup. Now values 0 and 1 will resolve to a timestamp resolution of 1 second. Values 2-6 specify the number of decimal places (hundredths to microsecond), or alternatively the resolution can still be provided as a power of 10, e.g. 10, 1000 (millisecond), 1e6 (microsecond).
    • When self-referential foreign-keys are inherited, the foreign-key on the subclass will also be self-referential (rather than pointing to the parent model).
    • Add TSV import/export option to the dataset extension.
    • Add item interface to the dataset.Table class for doing primary-key lookup, assignment, or deletion.
    • Extend the mysql ReconnectMixin helper to work with mysql-connector.
    • Fix mapping of double-precision float in postgres schema reflection. Previously it mapped to single-precision, now it correctly uses a double.
    • Fix issue where PostgresqlExtDatabase and MySQLConnectorDatabase did not respect the autoconnect setting.

  • 3.10.0(Aug 3, 2019)

    • Add a helper to playhouse.mysql_ext for creating Match full-text search expressions.
    • Added date-part properties to TimestampField for accessing the year, month, day, etc., within a SQL expression.
    • Added to_timestamp() helper for DateField and DateTimeField that produces an expression returning a unix timestamp.
    • Add autoconnect parameter to Database classes. It defaults to True, matching previous versions of Peewee, in which executing a query on a closed database would open a connection automatically. To make inconsistent use of the database connection easier to catch, this behavior can now be disabled with autoconnect=False, which requires an explicit call to Database.connect() before executing a query (see the sketch after this list).
    • Added database-agnostic interface for obtaining a random value.
    • Allow isolation_level to be specified when initializing a Postgres db.
    • Allow hybrid properties to be used on model aliases. Refs #1969.
    • Support aggregates with FILTER predicates on the latest Sqlite.
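
    A sketch of the new autoconnect flag described above (the database name is illustrative):

    from peewee import SqliteDatabase

    db = SqliteDatabase('app.db', autoconnect=False)

    db.connect()                  # now required before running any query
    db.execute_sql('SELECT 1')    # would raise if the database were still closed
    db.close()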

    Changes

    • More aggressively slot row values into the appropriate field when building objects from the database cursor (rather than using whatever cursor.description tells us, which is buggy in older Sqlite).
    • Be more permissive in what we accept in the insert_many() and insert() methods.
    • When implicitly joining a model with multiple foreign-keys, choose the foreign-key whose name matches that of the related model. Previously, this would have raised a ValueError stating that multiple FKs existed.
    • Improved date truncation logic for Sqlite and MySQL to make more compatible with Postgres' date_trunc() behavior. Previously, truncating a datetime to month resolution would return '2019-08' for example. As of 3.10.0, the Sqlite and MySQL date_trunc implementation returns a full datetime, e.g. '2019-08-01 00:00:00'.
    • Apply slightly different logic for casting JSON values with Postgres. Previously, Peewee just wrapped the value in the psycopg2 Json() helper. In this version, Peewee now dumps the json to a string and applies an explicit cast to the underlying JSON data-type (e.g. json or jsonb).

    Bug fixes

    • Save hooks can now be called for models without a primary key.
    • Fixed bug in the conversion of Python values to JSON when using Postgres.
    • Fix for differentiating empty values from NULL values in model_to_dict.
    • Fixed a bug referencing primary-key values that required some kind of conversion (e.g., a UUID). See #1979 for details.
    • Add small jitter to the pool connection timestamp to avoid issues when multiple connections are checked-out at the same exact time.

  • 3.9.6(Jun 3, 2019)

    • Support nesting the Database instance as a context-manager. The outermost block will handle opening and closing the connection and wrapping everything in a transaction; nested blocks use savepoints (see the sketch after this list).
    • Add new session_start(), session_commit() and session_rollback() interfaces to the Database object to support using transactional controls in situations where a context-manager or decorator is awkward.
    • Fix error that would arise when attempting to do an empty bulk-insert.
    • Set isolation_level=None in SQLite connection constructor rather than afterwards using the setter.
    • Add create_table() method to Select query to implement CREATE TABLE AS.
    • Cleanup some declarations in the Sqlite C extension.
    • Add new example showing how to implement Reddit's ranking algorithm in SQL.
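
    A sketch of nesting the database instance as a context manager, per the first item above (reusing the db and User model from the README examples):

    with db:        # outermost block: opens the connection, wraps a transaction
        User.create(username='mickey')
        with db:    # nested block: runs inside a savepoint
            User.create(username='zaizee')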

  • 3.9.5(Apr 26, 2019)

    • Added small helper for setting timezone when using Postgres.
    • Improved SQL generation for VALUES clause.
    • Support passing resolution to TimestampField as a power-of-10.
    • Small improvements to INSERT queries when the primary-key is not an auto-incrementing integer, but is generated by the database server (eg uuid).
    • Cleanups to virtual table implementation and python-to-sqlite value conversions.
    • Fixed bug related to binding previously-unbound models to a database using a context manager, #1913.

  • 3.9.4(Apr 14, 2019)

    • Add Model.bulk_update() method for bulk-updating fields across multiple model instances; see the documentation and the sketch after this list.
    • Add lazy_load parameter to ForeignKeyField. When initialized with lazy_load=False, the foreign-key will not use an additional query to resolve the related model instance. Instead, if the related model instance is not available, the underlying FK column value is returned (behaving like the "_id" descriptor).
    • Added Model.truncate_table() method.
    • The reflection and pwiz extensions now attempt to be smarter about converting database table and column names into snake-case. To disable this, you can set snake_case=False when calling the Introspector.introspect() method or use the -L (legacy naming) option with the pwiz script.
    • Bulk insert via insert_many() no longer require specification of the fields argument when the inserted rows are lists/tuples. In that case, the fields will be inferred to be all model fields except any auto-increment id.
    • Add DatabaseProxy, which implements several of the Database class context managers. This allows you to reference some of the special features of the database object without directly needing to initialize the proxy first.
    • Add support for window function frame exclusion and added built-in support for the GROUPS frame type.
    • Add support for chaining window functions by extending a previously-declared window function.
    • Playhouse Postgresql extension TSVectorField.match() method supports an additional argument plain, which can be used to control the parsing of the TS query.
    • Added very minimal JSONField to the playhouse MySQL extension.
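
    A sketch of the new Model.bulk_update() mentioned above, reusing the User model from the README examples:

    users = list(User.select())
    for user in users:
        user.username = user.username.lower()

    # Update only the username column, 50 rows per UPDATE statement.
    User.bulk_update(users, fields=[User.username], batch_size=50)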

  • 3.9.3(Mar 23, 2019)

    • Added cross-database support for NULLS FIRST/LAST when specifying the ordering for a query. Previously this was only supported for Postgres. Peewee will now generate an equivalent CASE statement for Sqlite and MySQL.
    • Added EXCLUDED helper for referring to the EXCLUDED namespace used with INSERT...ON CONFLICT queries, when referencing values in the conflicting row data.
    • Added helper method to the model Metadata class for setting the table name at run-time. Setting the Model._meta.table_name directly may have appeared to work in some situations, but could lead to subtle bugs. The new API is Model._meta.set_table_name().
    • Enhanced helpers for working with Peewee interactively; see the documentation.
    • Fix cache invalidation bug in DataSet that was originally reported on the sqlite-web project.
    • New example script implementing a hexastore.

  • 3.9.2(Mar 6, 2019)

  • 3.9.1(Mar 6, 2019)

  • 3.9.0(Mar 6, 2019)

    New and improved stuff

    • Added new document describing how to use peewee interactively.
    • Added convenience functions for generating model classes from a pre-existing database, printing model definitions and printing CREATE TABLE sql for a model. See the "use peewee interactively" section for details.
    • Added a __str__ implementation to all Query subclasses which converts the query to a string and interpolates the parameters.
    • Improvements to sqlite_ext.JSONField regarding the serialization of data, as well as the addition of options to override the JSON serialization and de-serialization functions.
    • Added index_type parameter to Field
    • Added DatabaseProxy, which allows one to use database-specific decorators with an uninitialized Proxy object. See #1842 for discussion. Recommend that you update any usage of Proxy for deferring database initialization to use the new DatabaseProxy class instead.
    • Added support for INSERT ... ON CONFLICT when the conflict target is a partial index (e.g., contains a WHERE clause). The OnConflict and on_conflict() APIs now take an additional conflict_where parameter to represent the WHERE clause of the partial index in question. See #1860.
    • Enhanced the playhouse.kv extension to use efficient upsert for all database engines. Previously upsert was only supported for sqlite and mysql.
    • Re-added the orwhere() query filtering method, which will append the given expressions using OR instead of AND. See #391 for old discussion.
    • Added some new examples to the examples/ directory
    • Added select_from() API for wrapping a query and selecting one or more columns from the wrapped subquery; see the documentation.
    • Added documentation on using row values.
    • Removed the (defunct) "speedups" C extension, which as of 3.8.2 only contained a barely-faster function for quoting entities.

    Bugfixes

    • Fix bug in SQL generation when there was a subquery that used a common table expression.
    • Enhanced prefetch() and fixed bug that could occur when mixing self-referential foreign-keys and model aliases.
    • MariaDB 10.3.3 introduces backwards-incompatible changes to the SQL used for upsert. Peewee now introspects the MySQL server version at connection time to ensure proper handling of version-specific features. See #1834 for details.
    • Fixed bug where TimestampField would treat zero values as None when reading from the database.

  • 3.8.2(Jan 17, 2019)

    Backwards-incompatible changes

    • The default row-type for INSERT queries executed with a non-default RETURNING clause has changed from tuple to Model instances. This makes INSERT behavior consistent with UPDATE and DELETE queries that specify a RETURNING clause. To revert back to the old behavior, just append a call to .tuples() to your INSERT ... RETURNING query.
    • Removing support for the table_alias model Meta option. Previously, this attribute could be used to specify a "vanity" alias for a model class in the generated SQL. As a result of some changes to support more robust UPDATE and DELETE queries, supporting this feature will require some re-working. As of the 3.8.0 release, it was broken and resulted in incorrect SQL for UPDATE queries, so now it is removed.

    New features

    • Added playhouse.shortcuts.ReconnectMixin, which can be used to implement automatic reconnect under certain error conditions (notably the MySQL error 2006 - server has gone away).

    Bugfixes

    • Fix SQL generation bug when using an inline window function in the ORDER BY clause of a query.
    • Fix possible zero-division in user-defined implementation of BM25 ranking algorithm for SQLite full-text search.

  • 3.8.1(Jan 7, 2019)

    Changes

    • Remove minimum passphrase restrictions in SQLCipher integration.

    Bugfixes

    • Support inheritance of ManyToManyField instances.
    • Ensure operator overloads are invoked when generating filter expressions.
    • Fix incorrect scoring in Sqlite BM25, BM25f and Lucene ranking algorithms.
    • Support string field-names in data dictionary when performing an ON CONFLICT ... UPDATE query, which allows field-specific conversions to be applied. References #1815.

  • 3.8.0(Dec 16, 2018)

    New features

    • Postgres BinaryJSONField now supports has_key(), concat() and remove() methods (though remove may require pg10+).
    • Add python_value() method to the SQL-function helper fn, to allow specifying a custom function for mapping database values to Python values.

    Changes

    • Better support for UPDATE ... FROM queries, and more generally, more robust support for UPDATE and RETURNING clauses. This means that the QualifiedNames helper is no longer needed for certain types of queries.
    • The SqlCipherDatabase no longer accepts a kdf_iter parameter. To configure the various SQLCipher encryption settings, specify the setting values as pragmas when initializing the database.
    • Introspection will now, by default, only strip "_id" from introspected column names if those columns are foreign-keys. See #1799 for discussion.
    • Allow UUIDField and BinaryUUIDField to accept hexadecimal UUID strings as well as raw binary UUID bytestrings (in addition to UUID instances, which are already supported).
    • Allow ForeignKeyField to be created without an index.
    • Allow multiple calls to cast() to be chained (#1795).
    • Add logic to ensure foreign-key constraint names that exceed 64 characters are truncated using the same logic as is currently in place for long indexes.
    • ManyToManyField supports foreign-keys to fields other than primary-keys.
    • When linked against SQLite 3.26 or newer, support SQLITE_CONSTRAINT to designate invalid queries against virtual tables.
    • SQL-generation changes to aid in supporting using queries within expressions following the SELECT statement.

    Bugfixes

    • Fixed bug in order_by_extend(), thanks @nhatHero.
    • Fixed bug where the DataSet CSV import/export did not support non-ASCII characters in Python 3.x.
    • Fixed bug where model_to_dict would attempt to traverse explicitly disabled foreign-key backrefs (#1785).
    • Fixed bug when attempting to migrate SQLite tables that have a field whose column-name begins with "primary_".
    • Fixed bug with inheriting deferred foreign-keys.

  • 3.7.1(Oct 5, 2018)

    New features

    • Added table_settings model Meta option, which should be a list of strings specifying additional options for CREATE TABLE, which are placed after the closing parentheses.
    • Allow specification of on_update and on_delete behavior for many-to-many relationships when using ManyToManyField.

    Bugfixes

    • Fixed incorrect SQL generation for Postgresql ON CONFLICT clause when the conflict_target is a named constraint (rather than an index expression). This introduces a new keyword-argument to the on_conflict() method: conflict_constraint, which is currently only supported by Postgresql. Refs issue #1737.
    • Fixed incorrect SQL for sub-selects used on the right side of IN expressions. Previously the query would be assigned an alias, even though an alias was not needed.
    • Fixed incorrect SQL generation for Model indexes which contain SQL functions as indexed columns.
    • Fixed bug in the generation of special queries used to perform operations on SQLite FTS5 virtual tables.
    • Allow frozenset to be correctly parameterized as a list of values.
    • Allow multi-value INSERT queries to specify columns as a list of strings.
    • Support CROSS JOIN for model select queries.

  • 3.7.0(Sep 6, 2018)

    Backwards-incompatible changes

    • Pool database close_all() method renamed to close_idle() to better reflect the actual behavior.
    • Databases will now raise InterfaceError when connect() or close() are called on an uninitialized, deferred database object.

    New features

    • Add methods to the migrations extension to support adding and dropping table constraints.
    • Add Model.bulk_create() method for bulk-inserting unsaved model instances.
    • Add close_stale() method to the connection pool to support closing stale connections.
    • The FlaskDB class in playhouse.flask_utils now accepts a model_class parameter, which can be used to specify a custom base-class for models.

    Bugfixes

    • Parentheses were not added to subqueries used in function calls with more than one argument.
    • Fixed bug when attempting to serialize many-to-many fields which were created initially with a DeferredThroughModel, see #1708.
    • Fixed bug when using the Postgres ArrayField with an array of BlobField.
    • Allow Proxy databases to be used as a context-manager.
    • Fixed bug where the APSW driver was referring to the SQLite version from the standard library sqlite3 driver, rather than from apsw.
    • Reflection library attempts to wrap server-side column defaults in quotation marks if the column data-type is text/varchar.
    • Missing import in migrations library, which would cause errors when attempting to add indexes whose name exceeded 64 chars.
    • When using the Postgres connection pool, ensure any open/pending transactions are rolled-back when the connection is recycled.
    • Even more changes to the setup.py script. In this case I've added a helper function which will reliably determine if the SQLite3 extensions can be built. This follows the approach taken by the Python YAML package.

  • 3.6.4(Aug 7, 2018)

    Take a whole new approach, following what simplejson does. Allow the build_ext command class to fail, and retry without extensions in the event we run into issues building extensions. References #1676.

  • 3.6.3(Jul 18, 2018)

  • 3.6.2(Jul 18, 2018)

  • 3.6.1(Jul 17, 2018)

Related projects

  • Piccolo: a fast, user-friendly ORM and query builder which supports asyncio. (566 stars)
  • Awesome SQLAlchemy: a curated list of awesome extra libraries and resources for SQLAlchemy, by Hong Minhee. (2.3k stars)
  • ormar: an async mini ORM for Python with support for Postgres, MySQL and SQLite, designed with FastAPI in mind and using pydantic validation. (674 stars)
  • Orator: a simple yet beautiful ActiveRecord implementation, inspired by the database part of the Laravel framework, by Sébastien Eustace. (1.3k stars)
  • Pony: an advanced object-relational mapper. (2.7k stars)
  • Base SQLModel: a very simple CRUD class for SQLModel, by Marcelo Trylesinski. (12 stars)
  • SQLModel: a library for interacting with SQL databases from Python code, with Python objects; designed to be intuitive, easy to use, highly compatible and robust, by Sebastián Ramírez. (5.5k stars)
  • pyDAL: a pure Python Database Abstraction Layer that dynamically generates SQL/noSQL in real time for the specified dialect. (397 stars)
  • MongoEngine: a Python Object-Document-Mapper for working with MongoDB, built on top of PyMongo. (3.6k stars)
  • Pydantic model support for Django ORM, by Jordan Eremieff. (189 stars)
  • Flask-SQLAlchemy: a Flask extension that adds support for SQLAlchemy, by The Pallets Projects. (3.6k stars)
  • PynamoDB: a Pythonic interface to Amazon's DynamoDB. (1.7k stars)
  • Redisco: simple models and containers persisted in Redis, inspired by the Ruby library Ohm. (434 stars)
  • HOT Redis: rich Python data types for Redis, a wrapper library for the redis-py client, by Stephen McDonald. (273 stars)
  • Beanie: an asynchronous Python object-document mapper (ODM) for MongoDB, based on Motor and Pydantic. (277 stars)
  • dataset: easy-to-use data handling for SQL data stores, with support for implicit table creation, bulk loading and transactions, by Friedrich Lindenberg. (4.1k stars)
  • Flask-MongoEngine: MongoEngine for Flask web applications, with WTForms model forms support. (788 stars)