
Overview

Tornado Web Server

Tornado is a Python web framework and asynchronous networking library, originally developed at FriendFeed. By using non-blocking network I/O, Tornado can scale to tens of thousands of open connections, making it ideal for long polling, WebSockets, and other applications that require a long-lived connection to each user.

Hello, world

Here is a simple "Hello, world" example web app for Tornado:

import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello, world")

def make_app():
    return tornado.web.Application([
        (r"/", MainHandler),
    ])

if __name__ == "__main__":
    app = make_app()
    app.listen(8888)
    tornado.ioloop.IOLoop.current().start()

This example does not use any of Tornado's asynchronous features; for that, see the chat room example in the demos directory of the Tornado repository.
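
For a taste of the asynchronous style, here is a minimal sketch (assuming a recent Tornado release with native-coroutine handler support) in which the handler awaits a non-blocking HTTP fetch:

import tornado.ioloop
import tornado.web
from tornado.httpclient import AsyncHTTPClient

class FetchHandler(tornado.web.RequestHandler):
    async def get(self):
        # The fetch is awaited without blocking the IOLoop, so other
        # requests can be served while this one waits on the network.
        response = await AsyncHTTPClient().fetch("https://www.tornadoweb.org/")
        self.write("Fetched %d bytes without blocking" % len(response.body))

def make_app():
    return tornado.web.Application([
        (r"/", FetchHandler),
    ])

if __name__ == "__main__":
    app = make_app()
    app.listen(8889)
    tornado.ioloop.IOLoop.current().start()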

Documentation

Documentation and links to additional resources are available at https://www.tornadoweb.org

Comments
  • iostream: Fix unreleased memoryview

    A memoryview should be released explicitly. Otherwise, the bytearray held by the memoryview is not resizable until the view is garbage-collected.

    See also: https://bugs.python.org/issue29178
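
    A minimal illustration of the underlying Python behavior (plain Python, independent of Tornado):

    buf = bytearray(b"abcdef")
    view = memoryview(buf)
    try:
        buf.extend(b"XYZ")        # resizing is blocked while the view is alive
    except BufferError as exc:
        print("resize blocked:", exc)
    view.release()                # release explicitly instead of waiting for GC
    buf.extend(b"XYZ")            # now the bytearray can grow again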

    opened by methane 41
  • Make SSLIOStream and HTTPServerRequest configurable

    This PR just makes SSLIOStream and HTTPServerRequest inherit from Configurable to address https://github.com/tornadoweb/tornado/issues/2364

    As far as I can tell, all the tests are still passing.

    One drawback, though, is that some monkey patching is still needed despite Configurable, because TCPServer uses netutil.ssl_wrap_socket directly.
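
    For context, a rough sketch of how Tornado's Configurable hook is typically used (the class names here are hypothetical, not the PR's code):

    from tornado.util import Configurable

    class MyStream(Configurable):
        @classmethod
        def configurable_base(cls):
            return MyStream              # the base class that .configure() is called on

        @classmethod
        def configurable_default(cls):
            return DefaultStream         # implementation used when nothing is configured

        def initialize(self, **kwargs):
            self.options = kwargs        # Configurable subclasses use initialize(), not __init__

    class DefaultStream(MyStream):
        pass

    class CustomStream(MyStream):
        pass

    MyStream.configure(CustomStream)     # swap in the custom implementation globally
    assert isinstance(MyStream(), CustomStream)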

    iostream 
    opened by chaen 31
  • Memory leak when process big file upload

    Something is wrong in Tornado v3.0.1: when I upload a large file (about 101 MB, larger than the default max_buffer_size), the Tornado server raises the following exception:

    ERROR:tornado.application:Error in connection callback
    Traceback (most recent call last):
      File "/usr/local/lib/python2.7/dist-packages/tornado-3.0.1-py2.7.egg/tornado/tcpserver.py", line 228, in _handle_connection
        self.handle_stream(stream, address)
      File "/usr/local/lib/python2.7/dist-packages/tornado-3.0.1-py2.7.egg/tornado/httpserver.py", line 157, in handle_stream
        self.no_keep_alive, self.xheaders, self.protocol)
      File "/usr/local/lib/python2.7/dist-packages/tornado-3.0.1-py2.7.egg/tornado/httpserver.py", line 190, in __init__
        self.stream.read_until(b"\r\n\r\n", self._header_callback)
      File "/usr/local/lib/python2.7/dist-packages/tornado-3.0.1-py2.7.egg/tornado/iostream.py", line 148, in read_until
        self._try_inline_read()
      File "/usr/local/lib/python2.7/dist-packages/tornado-3.0.1-py2.7.egg/tornado/iostream.py", line 398, in _try_inline_read
        if self._read_to_buffer() == 0:
      File "/usr/local/lib/python2.7/dist-packages/tornado-3.0.1-py2.7.egg/tornado/iostream.py", line 432, in _read_to_buffer
        raise IOError("Reached maximum read buffer size")
    IOError: Reached maximum read buffer size

    After several big-file upload requests, however, there is a large memory increase in the Tornado server, so I think there may be a memory leak when processing big files. Any suggestions? How could I fix it?
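
    For reference, a hedged sketch of how large uploads are usually handled without buffering the whole body in memory (this assumes the streaming APIs added in Tornado 4.0, so it does not apply to 3.0.1 as-is):

    import tornado.httpserver
    import tornado.ioloop
    import tornado.web

    @tornado.web.stream_request_body
    class UploadHandler(tornado.web.RequestHandler):
        def prepare(self):
            self.received = 0

        def data_received(self, chunk):
            # Chunks arrive as they are read; write them to disk (or count them)
            # instead of accumulating the whole body in RAM.
            self.received += len(chunk)

        def post(self):
            self.write("received %d bytes" % self.received)

    if __name__ == "__main__":
        app = tornado.web.Application([(r"/upload", UploadHandler)])
        server = tornado.httpserver.HTTPServer(app, max_body_size=1024 * 1024 * 1024)
        server.listen(8888)
        tornado.ioloop.IOLoop.current().start()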

    opened by JerryKwan 26
  • Reduce Websocket copies / accept memoryviews

    Since masking of inbound (client->server) messages is mandated by the RFC, a copy in that case is unavoidable. However, outbound masking (server->client) is not mandated, and appears to be turned off by default:

    https://github.com/tornadoweb/tornado/blob/master/tornado/websocket.py#L587 https://github.com/tornadoweb/tornado/blob/master/tornado/websocket.py#L461-L462

    (It is set to True in the WS client connection class, as expected)

    This outbound case is the most relevant and important one for Bokeh, so any improvements to reduce copies on outbound messages would be beneficial for Bokeh users.

    Below are some ideas from tracing through the code; I am sure there are many details I am not familiar with, but perhaps this can start a discussion.


    Allow write_message to accept a memoryview. Then, in _write_frame, instead of doing all these concatenations:

    https://github.com/tornadoweb/tornado/blob/master/tornado/websocket.py#L762-L767

    Place the message chunks on the stream write buffer individually. I am not sure if multiple calls to self.stream.write(chunk) would suffice (I'm guessing not), or if iostream.write would have to be modified to accept multiple ordered chunks. However, it seems that iostream.write is already capable of storing a list of pending writes when the write buffer is "frozen". Currently all of these buffers get concatenated:

    https://github.com/tornadoweb/tornado/blob/master/tornado/iostream.py#L840

    But perhaps, instead of concatenating before clearing pending writes, the list of buffers could be copied, and _handle_write could loop over them instead of expecting one concatenated array.
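
    To make the cost of those concatenations concrete, a small plain-Python illustration (not Tornado code):

    payload = bytes(1024 * 1024)                            # pretend this is a 1 MB outbound message
    header = b"\x82\x7f" + len(payload).to_bytes(8, "big")  # illustrative frame header bytes

    frame = header + payload                      # concatenation copies the whole payload
    chunk = memoryview(payload)[4096:8192]        # a zero-copy slice of the same buffer
    assert chunk.obj is payload                   # the view still references the original bytes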

    websocket 
    opened by bryevdv 25
  • Factor HTTP-specific code into HTTPServer subclass of TCPServer

    Since almost no logic in TCPServer was HTTP-specific, this was a surprisingly small change that provides a convenient server base class for application-layer protocols (I've already subclassed this for an SMTP framework). In particular, this makes SSL automatically available to any subclass. If this is accepted, however, TCPServer may need to be moved to a different module altogether, or the httpserver module renamed. I'm also unsure about the name "TCPServer", since using other transport-layer protocols such as UDP may be possible.
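
    For illustration, a minimal sketch of the kind of subclass this enables (a toy line-echo server using today's coroutine-style handle_stream; this is not the SMTP framework mentioned above):

    import tornado.ioloop
    import tornado.iostream
    import tornado.tcpserver

    class EchoServer(tornado.tcpserver.TCPServer):
        async def handle_stream(self, stream, address):
            # handle_stream is the single hook a TCPServer subclass overrides
            while True:
                try:
                    line = await stream.read_until(b"\n")
                    await stream.write(line)
                except tornado.iostream.StreamClosedError:
                    break

    if __name__ == "__main__":
        EchoServer().listen(9000)
        tornado.ioloop.IOLoop.current().start()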

    opened by alekstorm 23
  • tornado rejects valid SSL certificates

    I'm using version 4.2.1, installed with pip:

    $ python
    Python 2.7.6 (default, Jun 22 2015, 17:58:13)
    [GCC 4.8.2] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>> from tornado.httpclient import HTTPClient
    >>> http_client = HTTPClient()
    >>> response = http_client.fetch('https://dl.bintray.com/mitchellh/consul/0.5.2_linux_amd64.zip')
    WARNING:tornado.general:SSL Error on 7 ('54.192.87.100', 443): [Errno 1] _ssl.c:510: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/usr/local/lib/python2.7/dist-packages/tornado/httpclient.py", line 102, in fetch
        self._async_client.fetch, request, **kwargs))
      File "/usr/local/lib/python2.7/dist-packages/tornado/ioloop.py", line 445, in run_sync
        return future_cell[0].result()
      File "/usr/local/lib/python2.7/dist-packages/tornado/concurrent.py", line 215, in result
        raise_exc_info(self._exc_info)
      File "<string>", line 3, in raise_exc_info
    ssl.SSLError: [Errno 1] _ssl.c:510: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
    

    This error also occurs with the following URL:

    http://github.com/hashicorp/consul-template/releases/download/v0.10.0/consul-template_0.10.0_linux_amd64.tar.gz
    
    opened by centromere 22
  • PEP-0492 async/await with AsyncHTTPClient

    I'm trying to get the example from the docs (http://tornado.readthedocs.org/en/latest/guide/coroutines.html#python-3-5-async-and-await) working with the following code:

    import asyncio
    from tornado.httpclient import AsyncHTTPClient
    
    async def main():
        http_client = AsyncHTTPClient()
        response = await http_client.fetch('http://httpbin.org')
        return response.body
    
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())
    

    but all I get is a RuntimeError: Task got bad yield: <tornado.concurrent.Future …>

    Am I doing something wrong or is the tornado support for PEP-0492 not complete yet?

    Thanks for your help

    (I am on master for tornado and Python 3.5.0rc1)
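
    With Tornado 5.0 and later, the IOLoop runs on the asyncio event loop and Tornado's Futures are asyncio Futures, so a minimal sketch like the following should work as-is:

    import asyncio
    from tornado.httpclient import AsyncHTTPClient

    async def main():
        response = await AsyncHTTPClient().fetch("http://httpbin.org")
        return response.body

    print(asyncio.run(main()))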

    opened by arthurdarcet 22
  • Improve asyncio integration

    These changes make it easier to use asyncio features from Tornado applications. The additions include:

    • The wrap_asyncio_future and wrap_tornado_future functions for wrapping an asyncio.Future in a tornado.concurrent.Future and vice versa.
    • A @task decorator that makes it possible to use asyncio coroutines with, for example, tornado.web.RequestHandler.
    • An __iter__ method in tornado.concurrent.Future that makes it possible to use yield from with Tornado futures from asyncio coroutines. The future will be automatically wrapped in an asyncio.Future.
    • A get_asyncio_loop method in BaseAsyncIOLoop to explicitly expose the asyncio event loop used by the Tornado IOLoop. This is especially useful with AsyncIOLoop since it creates a new event loop object otherwise not accessible outside the IOLoop object.

    The most intrusive change here is probably the addition of the __iter__ method in tornado.concurrent.Future. The reasoning here is that you wouldn't normally try to iterate over futures and that the yield from syntax is sufficiently associated with asyncio that it makes sense to reserve it for use with asyncio.

    Caveat: Since asyncio futures can be cancelled, wrap_tornado_future tries to account for this by running set_exception with a CancelledError when the asyncio.Future that wraps the Tornado Future is cancelled. This will however probably not work as expected seeing as the standard Tornado Future doesn't support being cancelled, so I would recommend avoiding trying to cancel any futures returned by wrap_tornado_future.

    Example:

    import asyncio
    import subprocess
    
    import tornado.httpclient
    import tornado.web
    import tornado.platform.asyncio
    
    class RequestHandler(tornado.web.RequestHandler):
        lock = asyncio.Lock()
    
        @tornado.platform.asyncio.task
        def get(self):
            self.write("Hello")
            yield from self.some_function()
            res = yield from tornado.httpclient.AsyncHTTPClient().fetch("http://www.tornadoweb.org")
            print("Got response:", res)
            self.write(" World!")
    
        @asyncio.coroutine
        def some_function(self):
            yield from RequestHandler.lock # only allow one user every two seconds
            yield from asyncio.sleep(2)
            proc = yield from asyncio.create_subprocess_exec("ls", "/", stdout=subprocess.PIPE)
            self.write((yield from proc.communicate())[0])
            RequestHandler.lock.release()
    
    
    tornado.platform.asyncio.AsyncIOMainLoop().install()
    application = tornado.web.Application([
        (r"/?", RequestHandler)
    ])
    application.listen(8080)
    
    loop = asyncio.get_event_loop()
    loop.run_forever()
    
    asyncio 
    opened by arvidfm 21
  • Tornado 4.5.3 hangs with openssl 1.1.1 and TLS 1.3

    I want to package the Python 3 version of tornado 4.5.3 (the latest 4.x release) for Debian unstable, because salt does not work with tornado 5 yet. See https://github.com/saltstack/salt-jenkins/issues/995 for details.

    I updated the test certificate and tweaked the source code to work with Python 3.7:

    • https://salsa.debian.org/python-team/modules/python-tornado/commit/2c6f99ca6c18d5934a97fb9e6d5d4fa9030efe42
    • https://salsa.debian.org/python-team/modules/python-tornado/commit/19e541067a64b451ee31b7a3400c48ef61ec082d

    Sadly, a few test cases still fail on Debian unstable (they succeed on Ubuntu 18.04):

    ======================================================================
    ERROR: test_inline_read_error (tornado.test.iostream_test.TestIOStreamSSL)
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "/tmp/building/package/tornado/test/iostream_test.py", line 556, in test_inline_read_error
        server.read_bytes(1, lambda data: None)
      File "/usr/lib/python3.7/unittest/case.py", line 203, in __exit__
        self._raiseFailure("{} not raised".format(exc_name))
      File "/usr/lib/python3.7/unittest/case.py", line 135, in _raiseFailure
        raise self.test_case.failureException(msg)
    AssertionError: OSError not raised
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/tmp/building/package/tornado/testing.py", line 136, in __call__
        result = self.orig_method(*args, **kwargs)
      File "/tmp/building/package/tornado/test/iostream_test.py", line 558, in test_inline_read_error
        server.close()
      File "/tmp/building/package/tornado/iostream.py", line 444, in close
        self.close_fd()
      File "/tmp/building/package/tornado/iostream.py", line 1042, in close_fd
        self.socket.close()
      File "/usr/lib/python3.7/socket.py", line 420, in close
        self._real_close()
      File "/usr/lib/python3.7/ssl.py", line 1108, in _real_close
        super()._real_close()
      File "/usr/lib/python3.7/socket.py", line 414, in _real_close
        _ss.close(self)
    OSError: [Errno 9] Bad file descriptor
    
    ======================================================================
    ERROR: test_inline_read_error (tornado.test.iostream_test.TestIOStreamSSLContext)
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "/tmp/building/package/tornado/test/iostream_test.py", line 556, in test_inline_read_error
        server.read_bytes(1, lambda data: None)
      File "/usr/lib/python3.7/unittest/case.py", line 203, in __exit__
        self._raiseFailure("{} not raised".format(exc_name))
      File "/usr/lib/python3.7/unittest/case.py", line 135, in _raiseFailure
        raise self.test_case.failureException(msg)
    AssertionError: OSError not raised
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/tmp/building/package/tornado/testing.py", line 136, in __call__
        result = self.orig_method(*args, **kwargs)
      File "/tmp/building/package/tornado/test/iostream_test.py", line 558, in test_inline_read_error
        server.close()
      File "/tmp/building/package/tornado/iostream.py", line 444, in close
        self.close_fd()
      File "/tmp/building/package/tornado/iostream.py", line 1042, in close_fd
        self.socket.close()
      File "/usr/lib/python3.7/socket.py", line 420, in close
        self._real_close()
      File "/usr/lib/python3.7/ssl.py", line 1108, in _real_close
        super()._real_close()
      File "/usr/lib/python3.7/socket.py", line 414, in _real_close
        _ss.close(self)
    OSError: [Errno 9] Bad file descriptor
    
    ======================================================================
    FAIL: test_read_until_close_after_close (tornado.test.iostream_test.TestIOStreamSSL)
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "/tmp/building/package/tornado/testing.py", line 136, in __call__
        result = self.orig_method(*args, **kwargs)
      File "/tmp/building/package/tornado/test/iostream_test.py", line 451, in test_read_until_close_after_close
        data = self.wait()
      File "/tmp/building/package/tornado/testing.py", line 336, in wait
        self.__rethrow()
      File "/tmp/building/package/tornado/testing.py", line 272, in __rethrow
        raise_exc_info(failure)
      File "<string>", line 4, in raise_exc_info
      File "/tmp/building/package/tornado/testing.py", line 320, in timeout_func
        timeout)
    AssertionError: Async operation timed out after 5 seconds
    
    ======================================================================
    FAIL: test_streaming_read_until_close_after_close (tornado.test.iostream_test.TestIOStreamSSL)
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "/tmp/building/package/tornado/testing.py", line 136, in __call__
        result = self.orig_method(*args, **kwargs)
      File "/tmp/building/package/tornado/test/iostream_test.py", line 481, in test_streaming_read_until_close_after_close
        data = self.wait()
      File "/tmp/building/package/tornado/testing.py", line 336, in wait
        self.__rethrow()
      File "/tmp/building/package/tornado/testing.py", line 272, in __rethrow
        raise_exc_info(failure)
      File "<string>", line 4, in raise_exc_info
      File "/tmp/building/package/tornado/testing.py", line 320, in timeout_func
        timeout)
    AssertionError: Async operation timed out after 5 seconds
    
    ======================================================================
    FAIL: test_write_zero_bytes (tornado.test.iostream_test.TestIOStreamSSL)
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "/tmp/building/package/tornado/testing.py", line 136, in __call__
        result = self.orig_method(*args, **kwargs)
      File "/tmp/building/package/tornado/test/iostream_test.py", line 220, in test_write_zero_bytes
        self.wait()
      File "/tmp/building/package/tornado/testing.py", line 336, in wait
        self.__rethrow()
      File "/tmp/building/package/tornado/testing.py", line 272, in __rethrow
        raise_exc_info(failure)
      File "<string>", line 4, in raise_exc_info
      File "/tmp/building/package/tornado/testing.py", line 320, in timeout_func
        timeout)
    AssertionError: Async operation timed out after 5 seconds
    
    ======================================================================
    FAIL: test_read_until_close_after_close (tornado.test.iostream_test.TestIOStreamSSLContext)
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "/tmp/building/package/tornado/testing.py", line 136, in __call__
        result = self.orig_method(*args, **kwargs)
      File "/tmp/building/package/tornado/test/iostream_test.py", line 451, in test_read_until_close_after_close
        data = self.wait()
      File "/tmp/building/package/tornado/testing.py", line 336, in wait
        self.__rethrow()
      File "/tmp/building/package/tornado/testing.py", line 272, in __rethrow
        raise_exc_info(failure)
      File "<string>", line 4, in raise_exc_info
      File "/tmp/building/package/tornado/testing.py", line 320, in timeout_func
        timeout)
    AssertionError: Async operation timed out after 5 seconds
    
    ======================================================================
    FAIL: test_streaming_read_until_close_after_close (tornado.test.iostream_test.TestIOStreamSSLContext)
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "/tmp/building/package/tornado/testing.py", line 136, in __call__
        result = self.orig_method(*args, **kwargs)
      File "/tmp/building/package/tornado/test/iostream_test.py", line 481, in test_streaming_read_until_close_after_close
        data = self.wait()
      File "/tmp/building/package/tornado/testing.py", line 336, in wait
        self.__rethrow()
      File "/tmp/building/package/tornado/testing.py", line 272, in __rethrow
        raise_exc_info(failure)
      File "<string>", line 4, in raise_exc_info
      File "/tmp/building/package/tornado/testing.py", line 320, in timeout_func
        timeout)
    AssertionError: Async operation timed out after 5 seconds
    
    ======================================================================
    FAIL: test_write_zero_bytes (tornado.test.iostream_test.TestIOStreamSSLContext)
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "/tmp/building/package/tornado/testing.py", line 136, in __call__
        result = self.orig_method(*args, **kwargs)
      File "/tmp/building/package/tornado/test/iostream_test.py", line 220, in test_write_zero_bytes
        self.wait()
      File "/tmp/building/package/tornado/testing.py", line 336, in wait
        self.__rethrow()
      File "/tmp/building/package/tornado/testing.py", line 272, in __rethrow
        raise_exc_info(failure)
      File "<string>", line 4, in raise_exc_info
      File "/tmp/building/package/tornado/testing.py", line 320, in timeout_func
        timeout)
    AssertionError: Async operation timed out after 5 seconds
    

    You can see the full build and test log here: https://salsa.debian.org/python-team/modules/python-tornado/-/jobs/77766 (you can ignore the test cases that fail with "Cannot assign requested address").

    iostream 
    opened by bdrung 20
  • getting many HTTP 599 errors for valid urls

    I'm using Tornado's AsyncHTTPClient with the following code; I call the scrape function with a generator/list of 10K URLs. I expect at most 50 concurrent requests at any time, which doesn't seem to work, as the entire process ends in about 2 minutes.

    I got ~200 valid responses and ~9000 HTTP 599 errors. I checked many URLs that threw this error and they do load in less than 10 seconds; I'm able to reach most URLs using urllib2/requests with a smaller timeout (5 seconds).

    All requests are sent to different servers. I'm running on Ubuntu with Python 2.7.3 and Tornado 4.1.

    I suspect that something is wrong as I can fetch most urls using other (blocking) libraries.

    import tornado.ioloop
    import tornado.httpclient
    
    class Fetcher(object):
        def __init__(self, ioloop):
            self.ioloop = ioloop
            self.client = tornado.httpclient.AsyncHTTPClient(io_loop=ioloop, max_clients=50)
            self.client.configure(None, defaults=dict(
                user_agent="Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.101 Safari/537.36",
                connect_timeout=20, request_timeout=20, validate_cert=False))
    
        def fetch(self, url):
            self.client.fetch(url, self.handle_response)
    
        @property
        def active(self):
            """True if there are active fetching happening"""
            return len(self.client.active) != 0
    
        def handle_response(self, response):
            if response.error:
                print "Error: %s, time: %s, url: %s" % (response.error, response.time_info, response.effective_url)
            else:
               # print "clients %s" % self.client.active
                print "Got %d bytes" % (len(response.body))
    
            if not self.active:
                self.ioloop.stop()
    
    def scrape(urls):
        ioloop = tornado.ioloop.IOLoop.instance()
        ioloop.add_callback(scrapeEverything, *urls)
        ioloop.start()
    
    def scrapeEverything(*urls):
        fetcher = Fetcher(tornado.ioloop.IOLoop.instance())
    
        for url in urls:
            fetcher.fetch(url)
    
    if __name__ == '__main__':
        scrape()
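
    One pattern that can help narrow this down (a sketch assuming a modern Tornado running on asyncio, not the 4.1 setup above): bound the concurrency explicitly instead of relying on max_clients, since, at least in some versions, time spent waiting in the client's internal queue counts against request_timeout and surfaces as 599s.

    import asyncio
    from tornado.httpclient import AsyncHTTPClient

    async def fetch_all(urls, concurrency=50):
        client = AsyncHTTPClient()
        sem = asyncio.Semaphore(concurrency)

        async def fetch_one(url):
            async with sem:                       # only `concurrency` requests in flight at once
                try:
                    resp = await client.fetch(url, request_timeout=20)
                    print("Got %d bytes from %s" % (len(resp.body), url))
                except Exception as exc:
                    print("Error: %s, url: %s" % (exc, url))

        await asyncio.gather(*(fetch_one(u) for u in urls))

    # driven with: asyncio.run(fetch_all(urls))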
    
    opened by YS- 20
  • Keep-alives and connect_timeout bug

    Added support for keep-alive in SimpleAsyncHTTPClient, and fixed a bug where async connections would close due to the connect timeout firing even after the connection had been established.

    Decided to not go with the suggested implementation of keeping a pool of _HTTPConnections as it seemed cumbersome to maintain all the state of an object that would (potentially) be overwritten every time. Decided instead on keeping a queue of streams (essentially sockets) that are reused as soon as they become available.

    Streams are keyed in the stream_map by a (scheme, host, port) tuple (or some permutation, I forget).

    The client defaults to keep-alive; dead sockets are skipped when encountered rather than reused, and when a stream is no longer in use it drops its reference to the current _HTTPConnection and readies itself for the next one.

    Any suggestions, problems, or fixes, let me know.

    httpclient 
    opened by NickNeedsAName 20
  • Add executor support to WSGIContainer

    WSGIContainer executes WSGI application code synchronously; if that code blocks (as in the case of database queries), the event loop is frozen. Fortunately, IOLoop.run_in_executor provides a solution. These changes add a new executor parameter to WSGIContainer (defaulting to the synchronous dummy executor) and execute the WSGI application in that executor using the coroutine pattern.
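
    The underlying mechanism, in a minimal sketch (not the PR's actual code; the handler and executor names are illustrative):

    import time
    from concurrent.futures import ThreadPoolExecutor

    import tornado.ioloop
    import tornado.web

    executor = ThreadPoolExecutor(max_workers=4)

    class BlockingHandler(tornado.web.RequestHandler):
        async def get(self):
            # time.sleep(1) stands in for a blocking WSGI call or database query;
            # it runs on a worker thread while the IOLoop keeps serving other requests.
            await tornado.ioloop.IOLoop.current().run_in_executor(executor, time.sleep, 1)
            self.write("done")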

    All credit for the design goes to the discussion in https://stackoverflow.com/questions/26015116/making-tornado-to-serve-a-request-on-a-separate-thread

    I think this also fixes a couple of related issues:

    1. https://github.com/tornadoweb/tornado/pull/1098/ -- calling WSGIContainer.environ should use the standard Python object model via self.environ to support overriding via inheritance.
    2. https://github.com/tornadoweb/tornado/pull/1075 -- previous ask for executor support. These changes seem more complex based on an older version of the code.

    I wrote a test and verified that it worked and added notes to the docstring. I'm not sure if there's a changelog somewhere that I should edit too? I looked under docs/releases but there's no template for version 6.3 yet.

    Also, I'm not sure if I just missed them but I couldn't find a development guide. I executed the tests using tox -e py311 which I think is good enough for these changes. I'll leave further testing to CI.

    opened by grantjenks 0
  • "Task was destroyed but it is pending!" from _HandlerDelegate.execute

    Our server uses Tornado and gets a "Task was destroyed but it is pending!" warning from time to time. The destroyed task appears to be the one created in _HandlerDelegate.execute in web.py:

            fut = gen.convert_yielded(
                self.handler._execute(transforms, *self.path_args, **self.path_kwargs)
            )
            fut.add_done_callback(lambda f: f.result())
    

    From the caller of _HandlerDelegate.execute, it seems the task is never awaited.

      File "/usr/lib/python3/dist-packages/tornado/platform/asyncio.py", line 199, in start
        self.asyncio_loop.run_forever()
      File "/usr/lib/python3.8/asyncio/base_events.py", line 563, in run_forever
        self._run_once()
      File "/usr/lib/python3.8/asyncio/base_events.py", line 1844, in _run_once
        handle._run()
      File "/usr/lib/python3.8/asyncio/events.py", line 81, in _run
        self._context.run(self._callback, *self._args)
      File "/usr/lib/python3/dist-packages/tornado/http1connection.py", line 823, in _server_request_loop
        ret = await conn.read_response(request_delegate)
      File "/usr/lib/python3/dist-packages/tornado/http1connection.py", line 273, in _read_message
        delegate.finish()
      File "/usr/lib/python3/dist-packages/tornado/httpserver.py", line 387, in finish
        self.delegate.finish()
      File "/usr/lib/python3/dist-packages/tornado/routing.py", line 268, in finish
        self.delegate.finish()
      File "/usr/lib/python3/dist-packages/tornado/web.py", line 2290, in finish
        self.execute()
      File "/usr/lib/python3/dist-packages/tornado/web.py", line 2326, in execute
    

    Could anyone help to take a look and see if this is a bug? Thanks.

    opened by yuyang00 2
  • method authorize_redirect is not defined as async for OAuth2Mixin (unlike in OAuthMixin)

    Was this done for a particular reason?

    OAuth2Mixin

    https://github.com/tornadoweb/tornado/blob/master/tornado/auth.py#L553

    OAuthMixin

    https://github.com/tornadoweb/tornado/blob/master/tornado/auth.py#L290

    opened by tzuryby 1
  • Provide a `host` application setting

    The docs, and almost every Tornado code example on the internet, add handlers to an Application via the handlers argument to the constructor. That means the application accepts requests for any host.

    But accepting requests for wildcard hosts leaves the application open to HTTP Host header attacks.

    Please provide a new app setting called host which, if set, will be used to match the Host header of all incoming requests.
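
    For reference, a sketch of what per-host routing already allows via Application.add_handlers (the proposed host setting would presumably be shorthand for something like this):

    import tornado.web

    class MainHandler(tornado.web.RequestHandler):
        def get(self):
            self.write("Hello")

    app = tornado.web.Application()                # no wildcard handlers registered
    app.add_handlers(r"^www\.example\.com$", [     # only requests with this Host header match
        (r"/", MainHandler),
    ])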

    opened by bhch 5
  • With streaming proxy, handle server early error

    I'm using Tornado to build an HTTP proxy between a web client and a storage server, streaming the data. It works well except when the storage server has an error and terminates early; then the request is stuck from the client's point of view. Do you have an idea how to handle that? Maybe check that the connection is still up in body_producer?

    import tornado.httpclient
    import tornado.queues
    import tornado.web

    @tornado.web.stream_request_body
    class StreamingProxyHandler(CorsHandler):  # CorsHandler is the reporter's own base class
        def prepare(self):
            self.chunks = tornado.queues.Queue(maxsize=1)
            self.request.connection.set_max_body_size(1_000_000_000)
            self.client = tornado.httpclient.AsyncHTTPClient()
            self.fetch = self.client.fetch(
                "http://storageserver",
                method=self.request.method,
                raise_error=False,
                headers={"Content-Length": self.request.headers["Content-Length"]},
                body_producer=self.body_producer if self.request.method == "POST" else None,
                streaming_callback=self.data_fromserver,
            )

        async def body_producer(self, write):
            # feed chunks from the incoming request body to the upstream request
            while True:
                chunk = await self.chunks.get()
                if chunk is None:
                    return
                await write(chunk)

        async def data_received(self, chunk):
            await self.chunks.put(chunk)

        def data_fromserver(self, chunk):
            # relay the upstream response body back to the client
            self.write(chunk)

        async def post(self, *_):
            await self.chunks.put(None)
            resp = await self.fetch
            self.set_status(resp.code)
    
    opened by julienfr112 2
  • Add a decorator that normalizes the request duration to prevent enumeration

    For example, if we have an API /server/login that takes on average ~3 ms for failure scenarios and ~7 ms for success scenarios, an attacker could enumerate different inputs and gauge the validity of fields (maybe usernames) based on the response duration.

    If we had a decorator that ensures an API call takes at least a specified duration, it would render such enumeration techniques ineffective. For example:

    @api_duration_in_ms(10)
    def get(self):
        ...
    

    The above code would ensure that the API takes at least 10 ms before returning a response, irrespective of success or failure. This addition would be useful for administrative operations that aren't frequent and where a small delay will not affect functionality.
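
    A rough sketch of how such a decorator might look (illustrative only; api_duration_in_ms is the name proposed above, not an existing Tornado API):

    import asyncio
    import functools
    import time

    def api_duration_in_ms(min_ms):
        def decorator(method):
            @functools.wraps(method)
            async def wrapper(self, *args, **kwargs):
                start = time.monotonic()
                try:
                    result = method(self, *args, **kwargs)
                    if asyncio.iscoroutine(result):      # support both sync and async handlers
                        result = await result
                    return result
                finally:
                    elapsed_ms = (time.monotonic() - start) * 1000
                    if elapsed_ms < min_ms:
                        # pad the response time so success and failure are indistinguishable
                        await asyncio.sleep((min_ms - elapsed_ms) / 1000)
            return wrapper
        return decorator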

    Please help review the request and share your thoughts. If it sounds good, I can work on it :)

    opened by the-c0d3br34k3r 2
cirrina is an opinionated asynchronous web framework based on aiohttp

cirrina is an opinionated asynchronous web framework based on aiohttp. Features: HTTP server, WebSocket server, JSON-RPC server, shared sessions.

André Roth 32 Mar 5, 2022
An easy-to-use high-performance asynchronous web framework.

An easy-to-use high-performance asynchronous web framework.

Aber 264 Dec 31, 2022
An easy-to-use high-performance asynchronous web framework.

Chinese | English. An easy-to-use, high-performance asynchronous web framework. Index.py documentation. Index.py implements the ASGI3 interface and uses a Radix Tree for route lookup, making it one of the fastest Python web frameworks. Every feature serves the rapid development of high-performance web services: plenty of correct type annotations, flexible and efficient

Index.py 264 Dec 31, 2022
Pretty tornado wrapper for making lightweight REST API services

CleanAPI Pretty tornado wrapper for making lightweight REST API services Installation: pip install cleanapi Example: Project folders structure: . ├──

Vladimir Kirievskiy 26 Sep 11, 2022
Asynchronous HTTP client/server framework for asyncio and Python

Async http client/server framework Key Features Supports both client and server side of HTTP protocol. Supports both client and server Web-Sockets out

aio-libs 13.2k Jan 5, 2023
web.py is a web framework for python that is as simple as it is powerful.

web.py is a web framework for Python that is as simple as it is powerful. Visit http://webpy.org/ for more information. The latest stable release 0.62

null 5.8k Dec 30, 2022
Asita is a web application framework for python based on express-js framework.

Asita is a web application framework for Python. It is designed to be easy to use and to make it easier for JavaScript users to adopt Python frameworks, because it is based on the express-js framework.

Mattéo 4 Nov 16, 2021
A very simple asynchronous wrapper that allows you to get access to the Oracle database in asyncio programs.

cx_Oracle_async A very simple asynchronous wrapper that allows you to get access to the Oracle database in asyncio programs. Easy to use, but may not

null 36 Dec 21, 2022
News search API developed for the purposes of the ColdCase Project.

Saxion - Cold Case - News Search API Setup Local – Linux/MacOS Make sure you have python 3.9 and pip 21 installed. This project uses a MySQL database,

Dimitar Rangelov 3 Jul 1, 2021
The Modern And Developer Centric Python Web Framework. Be sure to read the documentation and join the Slack channel questions: http://slack.masoniteproject.com

NOTE: Masonite 2.3 is no longer compatible with the masonite-cli tool. Please uninstall that by running pip uninstall masonite-cli. If you do not unin

Masonite 1.9k Jan 4, 2023
Free and open source full-stack enterprise framework for agile development of secure database-driven web-based applications, written and programmable in Python.

Readme web2py is a free open source full-stack framework for rapid development of fast, scalable, secure and portable database-driven web-based applic

null 2k Dec 31, 2022
Bionic is Python Framework for crafting beautiful, fast user experiences for web and is free and open source

Bionic is fast. It's powered by core Python without any extra dependencies. Bionic offers stateful hot reload, allowing you to make changes to your code and see the results instantly without restarting your app or losing its state.

 ⚓ 0 Mar 5, 2022
bottle.py is a fast and simple micro-framework for python web-applications.

Bottle: Python Web Framework Bottle is a fast, simple and lightweight WSGI micro web-framework for Python. It is distributed as a single file module a

Bottle Micro Web Framework 7.8k Dec 31, 2022
Sierra is a lightweight Python framework for building and integrating web applications

A lightweight Python framework for building and Integrating Web Applications. Sierra is a Python3 library for building and integrating web applications with HTML and CSS using simple enough syntax. You can develop your web applications with Python, taking advantage of its functionalities and integrating them to the fullest.

null 83 Sep 23, 2022
Flask Sugar is a web framework for building APIs with Flask, Pydantic and Python 3.6+ type hints.

Flask Sugar is a web framework for building APIs with Flask, Pydantic, and Python 3.6+ type hints. It checks parameters and generates API documents automatically.

null 162 Dec 26, 2022
Fast⚡, simple and light💡weight ASGI micro🔬 web🌏-framework for Python🐍.

NanoASGI Asynchronous Python Web Framework. NanoASGI is a fast ⚡, simple and lightweight 💡 ASGI micro 🔬 web 🌏 framework for Python 🐍. It is dis

Kavindu Santhusa 8 Jun 16, 2022
Dazzler is a Python async UI/Web framework built with aiohttp and react.

Dazzler is a Python async UI/Web framework built with aiohttp and react. Create dazzling fast pages with a layout of Python components and bindings to update from the backend.

Philippe Duval 17 Oct 18, 2022
APIFlask is a lightweight Python web API framework based on Flask and marshmallow-code projects

APIFlask APIFlask is a lightweight Python web API framework based on Flask and marshmallow-code projects. It's easy to use, highly customizable, ORM/O

Grey Li 705 Jan 4, 2023
Async Python 3.6+ web server/framework | Build fast. Run fast.

Sanic | Build fast. Run fast. Build Docs Package Support Stats Sanic is a Python 3.6+ web server and web framework that's written to go fast. It allow

Sanic Community Organization 16.7k Jan 8, 2023