Overview

tornado-sqlalchemy

Python helpers for using SQLAlchemy with Tornado.

Installation

$ pip install tornado-sqlalchemy

In case you prefer installing from the GitHub repository, please note that master is the development branch; stable is the branch you should install from.
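
For example, installing directly from the stable branch could look like this (standard pip Git-URL syntax):

$ pip install git+https://github.com/siddhantgoel/tornado-sqlalchemy@stable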

Usage

from tornado.gen import coroutine
from tornado.web import Application, RequestHandler
from tornado_sqlalchemy import as_future, SessionMixin, SQLAlchemy

class NativeCoroutinesRequestHandler(SessionMixin, RequestHandler):
    async def get(self):
        with self.make_session() as session:
            count = await as_future(session.query(User).count)

        self.write('{} users so far!'.format(count))

class GenCoroutinesRequestHandler(SessionMixin, RequestHandler):
    @coroutine
    def get(self):
        with self.make_session() as session:
            count = yield as_future(session.query(User).count)

        self.write('{} users so far!'.format(count))

class SynchronousRequestHandler(SessionMixin, RequestHandler):
    def get(self):
        with self.make_session() as session:
            count = session.query(User).count()

        self.write('{} users so far!'.format(count))

handlers = (
    (r'/native-coroutines', NativeCoroutinesRequestHandler),
    (r'/gen-coroutines', GenCoroutinesRequestHandler),
    (r'/sync', SynchronousRequestHandler),
)

app = Application(
    handlers,
    db=SQLAlchemy('postgres://user:password@host/database')
)
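
The handlers above assume a User model that is defined elsewhere in the application. A minimal sketch of such a model, with table and column names chosen here purely for illustration, might look like this (on SQLAlchemy older than 1.4, import declarative_base from sqlalchemy.ext.declarative instead):

from sqlalchemy import BigInteger, Column, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class User(Base):
    # illustrative model backing the session.query(User).count calls above
    __tablename__ = 'users'

    id = Column(BigInteger, primary_key=True)
    username = Column(String(64), unique=True)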

Documentation

Documentation is available at Read The Docs.

Development

Please make sure you have Python 3.5+ and Poetry installed.

Since we run tests against multiple databases (currently MySQL, PostgreSQL, and SQLite), we use docker-compose to make our lives easier.

  1. Git clone the repository - git clone https://github.com/siddhantgoel/tornado-sqlalchemy
  2. Install the packages required for development - poetry install
  3. Ensure that the MySQL and PostgreSQL services (containers) are up - docker-compose up -d
  4. That should basically be it. You should now be able to run the test suite - poetry run py.test tests/.
Comments
  • get session gets idle (by timeout) when using tornado_sqlalchemy make_session_factory

    get session gets idle (by timeout) when using tornado_sqlalchemy make_session_factory

    When creating a connection pool using tornado_sqlalchemy.make_session_factory there is a default timeout of 30 minutes. This timeout waits for a thread notification on the pool, which, when async code is used, blocks execution for that time (and in extreme scenarios creates a deadlock) and breaks the async model.

    We probably need to create some kind of async session factory.

    To reproduce, define pool_size as 1 and run several concurrent requests.
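
    A minimal sketch of that reproduction, assuming make_session_factory accepts a pool_size keyword that it forwards to the connection pool (that parameter name is an assumption here, not verified against the current API):

    from tornado_sqlalchemy import make_session_factory

    # assumption: pool_size is forwarded to the underlying connection pool
    factory = make_session_factory('postgresql://user:password@host/database', pool_size=1)
    # with a single pooled connection, several concurrent requests that each
    # hold a session should then block in the 30-minute pool wait described above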

    needs clarification 
    opened by itamarf84 10
  • Sharing session context / Repository pattern

    Sharing session context / Repository pattern

    Hi,

    For my project we heavily use the repository pattern to abstract database queries. These repositories use the session to connect to the database, and are usually injected into the handlers.

    The easiest solution to make sure the repositories use the correct session is to create them within the context of the request. So something like this.

        def get(self, testId):
            with self.make_session() as session:
                self.repository = TestRepository(session)
    

    However, I think it would be a better separation of concerns if the handlers aren't tasked with object creation themselves. In order to do this it would be necessary to somehow share the current context between the handlers and other objects (e.g. the repository). I think this might be possible using e.g. Tornado's stack context (see Context local for an example).
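
    One possible sketch of that separation, using only the make_session context manager from this library plus Tornado's per-request prepare/on_finish hooks (RepositoryHandler is an illustrative name, and TestRepository is the repository from the example above):

    from tornado.web import RequestHandler
    from tornado_sqlalchemy import SessionMixin

    class RepositoryHandler(SessionMixin, RequestHandler):
        def prepare(self):
            # enter the session context once per request and build the
            # repository here, so concrete handlers never construct it
            self._session_ctx = self.make_session()
            self.repository = TestRepository(self._session_ctx.__enter__())

        def on_finish(self):
            # leave the session context when the request ends
            self._session_ctx.__exit__(None, None, None)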

    What are your thoughts on this?

    opened by bouke-nederstigt 10
  • Propagate kwargs from SessionFactory constructor to create_engine

    Propagate kwargs from SessionFactory constructor to create_engine

    kwargs is created empty when setting up the SQLAlchemy create_engine call, which makes it impossible to add specific settings for engine creation. An alternative would be to propagate the kwargs from the session factory. Doing that would allow users to set all of the parameters described in: https://docs.sqlalchemy.org/en/latest/core/engines.html#engine-creation-api

    Issue reference: https://github.com/siddhantgoel/tornado-sqlalchemy/blob/6a3f82aa687b848a518c7b3d5475cb34e1a0b08f/tornado_sqlalchemy/__init__.py#L135

    opened by lincrosenbach 8
  • avoid commit on select queries

    avoid commit on select queries

    On services that only fetch from the database, a commit is not necessary. Can you add some flag on the context manager to avoid the commit for that kind of session? I think that would be an optimization.

    opened by WebDevelopNemanja 7
  • Tornado V5.0

    Tornado V5.0

    Hi,

    I've been having a great time using this package.

    I notice Tornado v5.0 was recently released and it is moving towards asyncio compatibility.

    Specifically:

    "On Python 3, the IOLoop is always a wrapper around the asyncio event loop, and asyncio.Future and asyncio.Task are used instead of their Tornado counterparts. This means that libraries based on asyncio can be mixed relatively seamlessly with those using Tornado. While care has been taken to minimize the disruption from this change, code changes may be required for compatibility with Tornado 5.0, as detailed in the following section."

    I am wondering if this will conflict with this package's async behavior, per the docs:

    "...please note that currently we don’t support the native asyncio.Future objects included in Python 3. So if your request handlers are async def-ed, then calling await on the future that as_future returns would likely not work. But the good news is that a later version of this library should support this use case as well."

    http://www.tornadoweb.org/en/stable/releases/v5.0.0.html

    opened by mgcdanny 5
  • Should we always set engine_options.pool_timeout to zero?

    Should we always set engine_options.pool_timeout to zero?

    engine_options = {
           'pool_size': 2,
           'pool_timeout': 1,
           'max_overflow': 0
     },
    

    Once the concurrency exceeds pool_size + max_overflow, the program will deadlock.

    I think the reason is that the incoming request blocks while waiting for a connection.

    Since Tornado uses a single-threaded model, this kind of blocking will cause a deadlock (until the timeout).

    Below is what I got from gdb; the program is waiting for a connection.

    Traceback (most recent call first):
      File "/usr/lib/python3.7/threading.py", line 300, in wait
        gotit = waiter.acquire(True, timeout)
      File "/usr/lib/python3.7/site-packages/sqlalchemy/util/queue.py", line 162, in get
        self.not_empty.wait(remaining)
      File "/usr/lib/python3.7/site-packages/sqlalchemy/pool/impl.py", line 117, in _do_get
        return self._pool.get(wait, self._timeout)
      File "/usr/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 492, in checkout
        rec = pool._do_get()
      File "/usr/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 760, in _checkout
        fairy = _ConnectionRecord.checkout(pool)
      File "/usr/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 363, in connect
        return _ConnectionFairy._checkout(self)
      File "/usr/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 2276, in _wrap_pool_connect
        return fn()
      File "/usr/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 2242, in _contextual_connect
        self._wrap_pool_connect(self.pool.connect, None),
      File "/usr/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 432, in _connection_for_bind
        conn = bind._contextual_connect()
      File "/usr/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 420, in _connection_for_bind
        conn = self._parent._connection_for_bind(bind, execution_options)
      File "/usr/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 313, in connection
        return self._connection_for_bind(bind, execution_options)
      File "/usr/lib/python3.7/site-packages/sqlalchemy/orm/persistence.py", line 1604, in _connections_for_states
        connection = uowtransaction.transaction.connection(base_mapper)
      File "/usr/lib/python3.7/site-packages/sqlalchemy/orm/persistence.py", line 374, in _organize_states_for_save
        base_mapper, uowtransaction, states
      File "/usr/lib/python3.7/site-packages/sqlalchemy/orm/persistence.py", line 213, in save_obj
        ) in _organize_states_for_save(base_mapper, states, uowtransaction):
      File "/usr/lib/python3.7/site-packages/sqlalchemy/orm/unitofwork.py", line 589, in execute
        uow,
      File "/usr/lib/python3.7/site-packages/sqlalchemy/orm/unitofwork.py", line 422, in execute
        rec.execute(self)
      File "/usr/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 2577, in _flush
        flush_context.execute()
      File "/usr/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 2479, in flush
        self._flush(objects)
      File "/usr/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 482, in _prepare_impl
        self.session.flush()
      File "/usr/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 503, in commit
        self._prepare_impl()
      File "/usr/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 1036, in commit
        self.transaction.commit()
      File "examples/multi_databases.py", line 63, in get
        session.commit()
      File "/usr/lib/python3.7/site-packages/tornado/web.py", line 1699, in _execute
        result = await result
    
    
      <built-in method run of Context object at remote 0x7fc857aeb910>
      File "/usr/lib/python3.7/asyncio/events.py", line 88, in _run
        self._context.run(self._callback, *self._args)
      File "/usr/lib/python3.7/asyncio/base_events.py", line 1771, in _run_once
        handle._run()
      File "/usr/lib/python3.7/asyncio/base_events.py", line 534, in run_forever
        self._run_once()
      File "/usr/lib/python3.7/site-packages/tornado/platform/asyncio.py", line 148, in start
        self.asyncio_loop.run_forever()
      File "examples/multi_databases.py", line 106, in <module>
        IOLoop.current().start()
    
    opened by dingyaguang117 4
  • TypeError: 'list' object is not callable

    TypeError: 'list' object is not callable

    When I use

     yield as_future(self.session.query(Prop.title, Prop.image_path, Prop.discount_price).all())
    

    I get this error

    Traceback (most recent call last):
      File "/home/m2/anaconda2/envs/py36/lib/python3.6/site-packages/tornado/web.py", line 1592, in _execute
        result = yield result
      File "/home/m2/anaconda2/envs/py36/lib/python3.6/site-packages/tornado/gen.py", line 1133, in run
        value = future.result()
      File "/home/m2/anaconda2/envs/py36/lib/python3.6/site-packages/tornado/gen.py", line 1141, in run
        yielded = self.gen.throw(*exc_info)
      File "/home/m2/gitdoc/server/handlers/property.py", line 44, in get
        .query(Prop.title, Prop.image_path, Prop.discount_price)
      File "/home/m2/anaconda2/envs/py36/lib/python3.6/site-packages/tornado/gen.py", line 1133, in run
        value = future.result()
      File "/home/m2/anaconda2/envs/py36/lib/python3.6/concurrent/futures/thread.py", line 56, in run
        result = self.fn(*self.args, **self.kwargs)
    TypeError: 'list' object is not callable
    
    

    but self.session.query(Prop.title, Prop.image_path, Prop.discount_price).all() is fine
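
    A likely explanation, going by how the README uses as_future (e.g. as_future(session.query(User).count)): as_future expects a callable that it then runs on a worker thread, so passing the already-evaluated list means the executor tries to call the list itself. Under that assumption, the working form would be:

    # pass the bound .all method itself (no parentheses) so as_future can call it
    results = yield as_future(
        self.session.query(Prop.title, Prop.image_path, Prop.discount_price).all
    )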

    opened by M2shad0w 4
  • TypeError: 'NoneType' object is not callable when doing session.add

    TypeError: 'NoneType' object is not callable when doing session.add

    From a Tornado handler defined with @coroutine it works fine with a simple count query as in the example, but when I only change it to do a session.add (an insert) instead, I get a TypeError on the session.add line. MyOrder is a vanilla SQLAlchemy model.

    new_obj = MyOrder(id=123, sym="test", type="blabl")

    with self.make_session() as session:
        yield as_future(session.add(new_obj))
    

    Am I supposed to add objects in some other way or is it a bug in tornado-sqlalchemy?
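
    For what it's worth, a plausible reading based on how as_future is used in the README: session.add() returns None, and as_future expects a callable to run on the executor, so it ends up trying to call None. A sketch of the handler body under that assumption:

    new_obj = MyOrder(id=123, sym="test", type="blabl")

    with self.make_session() as session:
        # add() is an in-memory operation and returns None; only the
        # blocking commit needs to go through as_future
        session.add(new_obj)
        yield as_future(session.commit)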

    opened by gbgmian 3
  • Upgrade Twine?

    Upgrade Twine?

    opened by brainwane 3
  • Optional use_native_unicode

    Optional use_native_unicode

    Currently the option use_native_unicode is always added in the SessionFactory _setup method. However, this config option isn't supported by MySQL engines. It results in

    TypeError: Invalid argument(s) 'use_native_unicode' sent to create_engine(), using configuration MySQLDialect_mysqldb/QueuePool/Engine

    I was able to solve the problem by creating my own factory that doesn't use the use_native_unicode option. I think it might be nice to only make this option available to pgsql engines to support other engines out of the box.
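
    A rough sketch of that suggestion, assuming the factory knows the database URL when it builds the engine (variable names here are illustrative, not the library's actual internals):

    from sqlalchemy import create_engine

    # database_url and use_native_unicode are assumed to come from the factory's constructor
    # hypothetical: only pass the psycopg2-specific flag for PostgreSQL URLs
    engine_kwargs = {}
    if database_url.startswith('postgres'):
        engine_kwargs['use_native_unicode'] = use_native_unicode
    engine = create_engine(database_url, **engine_kwargs)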

    Let me know what you think. I don't mind implementing it.

    opened by bouke-nederstigt 3
  • session insert record error

    session insert record error

    When inserting a record into the database I got "'NoneType' object is not callable".

    import tornado.web
    from tornado.gen import coroutine
    from tornado_sqlalchemy import as_future
    from database.tables import *

    class UploadFileHandler(BaseHandler):

        @coroutine
        def post(self):
            _file = self.request.files['upload'][0]
            _filename = _file["filename"].split(".")[0]
            _extension = _file["filename"].split(".")[1]
            _mobile = '12345678901'
            upload_file = UploadFile(mobile=_mobile, body=_file['body'], filename=_filename, extension=_extension)
            with self.make_session() as session:
                session.add(upload_file)
                yield as_future(session.commit())

    and the error log is

      File "/usr/local/lib/python3.5/dist-packages/tornado/web.py", line 1592, in _execute
        result = yield result
      File "/usr/local/lib/python3.5/dist-packages/tornado/gen.py", line 1133, in run
        value = future.result()
      File "/usr/lib/python3.5/asyncio/futures.py", line 293, in result
        raise self._exception
      File "/usr/local/lib/python3.5/dist-packages/tornado/gen.py", line 1141, in run
        yielded = self.gen.throw(*exc_info)
      File "/srv/www/api/upload_file.py", line 20, in post
        yield as_future(session.commit())
      File "/usr/local/lib/python3.5/dist-packages/tornado/gen.py", line 1133, in run
        value = future.result()
      File "/usr/lib/python3.5/asyncio/futures.py", line 293, in result
        raise self._exception
      File "/usr/lib/python3.5/concurrent/futures/thread.py", line 55, in run
        result = self.fn(*self.args, **self.kwargs)
    TypeError: 'NoneType' object is not callable

    opened by tonyshare 2
  • Is it OK that Session creation/cleanup is done in main thread?

    Is it OK that Session creation/cleanup is done in main thread?

    Currently session creation/commit/close is done in the Tornado main thread. Although I am not sure how SQLAlchemy works exactly, I feel that there can be blocking IO operations in the session create/commit/close functions. At least in the commit function, the framework has to communicate with the DB over a TCP connection if there is anything dirty.

    I think it might be better to do these operations in a thread pool. Is there any reason why you do them in the main thread? Is it guaranteed that SQLAlchemy does not do any IO during session creation and close, as long as users commit actively?
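
    Concretely, the suggestion would amount to something like the following in a handler, assuming the commit is the expensive part and using only as_future and make_session from the README (some_object is illustrative):

    with self.make_session() as session:
        session.add(some_object)         # in-memory, cheap
        yield as_future(session.commit)  # network round-trip pushed to the thread pool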

    needs clarification 
    opened by funcodeio 11
  • add scoped_session support

    add scoped_session support

    I'm trying to solve the problem below

    db = SQLAlchemy()
    
    class User(db.Model):
        __tablename__ = 'users'
    
        id = Column(BigInteger, primary_key=True)
        username = Column(String(64), unique=True)
    
        def __init__(self, username):
            self.username = username
    
        # for every method, the session argument is necessary but tedious;
        # if there are more nested calls, we have to pass the session argument again and again
        @classmethod
        def count(cls, session):  
            return session.query(cls).count()
    
    

    So I added scoped_session (learned from flask_sqlalchemy), which shortens the statement.

    This is the improved way to write it

    class User(db.Model):
        __tablename__ = 'users'
    
        id = Column(BigInteger, primary_key=True)
        username = Column(String(64), unique=True)
    
        def __init__(self, username):
            self.username = username
    
        @classmethod
        def count(cls):
            return db.session.query(cls).count()
    
    

    The problem is that this enhancement only supports Python 3.7+, because contextvars is used.
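
    For reference, the Python 3.7+ requirement comes from using contextvars as the scope; a minimal sketch of that idea in plain SQLAlchemy terms (names are illustrative):

    from contextvars import ContextVar
    from sqlalchemy import create_engine
    from sqlalchemy.orm import scoped_session, sessionmaker

    # hypothetical: each request/task sets its own token here, so the scoped
    # session resolves to a distinct Session per task
    _scope = ContextVar('tornado_sqlalchemy_scope', default=None)

    engine = create_engine('postgresql://user:password@host/database')
    Session = scoped_session(sessionmaker(bind=engine), scopefunc=_scope.get)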

    opened by dingyaguang117 12
  • A convenient way to get a session from a member method of Model

    A convenient way to get a session from a member method of Model

    like:

    class User(Base):
        __tablename__ = 'users'
    
        id = Column(BigInteger, primary_key=True)
        username = Column(String(64), unique=True)
    
        def __init__(self, username):
            self.username = username
      
        @classmethod
        def get_jack(cls):
            return cls.session.query(cls).filter(cls.name=='jack').first()
    
    
    enhancement 
    opened by dingyaguang117 5
Owner
Siddhant Goel