Asynchronous HTTP client/server framework for asyncio and Python

Overview

Async HTTP client/server framework


Key Features

  • Supports both the client and server sides of the HTTP protocol.
  • Supports both client and server WebSockets out of the box and avoids callback hell (see the sketch after this list).
  • Provides a web server with middleware and pluggable routing.
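
For example, client-side WebSockets use the same session object as plain HTTP requests; a minimal sketch (the echo URL is hypothetical):

import aiohttp
import asyncio

async def main():
    async with aiohttp.ClientSession() as session:
        # ws_connect() performs the upgrade handshake and yields a
        # ClientWebSocketResponse for sending/receiving messages.
        async with session.ws_connect('http://example.org/ws') as ws:
            await ws.send_str('hello')
            msg = await ws.receive()
            print(msg.type, msg.data)

asyncio.run(main())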

Getting started

Client

To get something from the web:

import aiohttp
import asyncio

async def main():

    async with aiohttp.ClientSession() as session:
        async with session.get('http://python.org') as response:

            print("Status:", response.status)
            print("Content-type:", response.headers['content-type'])

            html = await response.text()
            print("Body:", html[:15], "...")

asyncio.run(main())

This prints:

Status: 200
Content-type: text/html; charset=utf-8
Body: <!doctype html> ...

Coming from requests? Read why we need so many lines.

Server

An example using a simple server:

# examples/server_simple.py
from aiohttp import web

async def handle(request):
    name = request.match_info.get('name', "Anonymous")
    text = "Hello, " + name
    return web.Response(text=text)

async def wshandle(request):
    ws = web.WebSocketResponse()
    await ws.prepare(request)

    async for msg in ws:
        if msg.type == web.WSMsgType.TEXT:
            await ws.send_str("Hello, {}".format(msg.data))
        elif msg.type == web.WSMsgType.BINARY:
            await ws.send_bytes(msg.data)
        elif msg.type == web.WSMsgType.CLOSE:
            break

    return ws


app = web.Application()
app.add_routes([web.get('/', handle),
                web.get('/echo', wshandle),
                web.get('/{name}', handle)])

if __name__ == '__main__':
    web.run_app(app)

Documentation

https://aiohttp.readthedocs.io/

Demos

https://github.com/aio-libs/aiohttp-demos

External links

Feel free to make a Pull Request to add your link to these pages!

Communication channels

aio-libs discourse group: https://aio-libs.discourse.group

gitter chat https://gitter.im/aio-libs/Lobby

We support Stack Overflow. Please add the aiohttp tag to your question there.

Requirements

Optionally you may install the cchardet and aiodns libraries (highly recommended for the sake of speed).
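
Both can be installed together via the speedups extra, assuming your aiohttp version defines it:

pip install aiohttp[speedups]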

License

aiohttp is offered under the Apache 2 license.

Keepsafe

The aiohttp community would like to thank Keepsafe (https://www.getkeepsafe.com) for its support in the early days of the project.

Source code

The latest developer version is available in a GitHub repository: https://github.com/aio-libs/aiohttp

Benchmarks

If you are interested in efficiency, the AsyncIO community maintains a list of benchmarks on the official wiki: https://github.com/python/asyncio/wiki/Benchmarks

Comments
  • aiohttp 3.0 release

    aiohttp 3.0 release

    I'd like to name the next release 3.0; 3.0 means a major version bump.

    Let's drop Python 3.4 and use async/await everywhere. Another important question is which 3.5 release to cut off at.

    Honestly I want to support 3.5.3+ only: that release fixes a well-known ugly bug in asyncio.get_event_loop(). Debian stable has shipped with Python 3.5.3; not sure about RHEL. All other distributions with faster release cycles support at least 3.5.3, or even 3.6.

    The transition should not be done all at once.

    1. First, pin the minimal required Python version in setup.py (see the sketch after this list).
    2. Translate the test suite to use async/await.
    3. Modify aiohttp itself to use the new syntax.

    Every item except the first could be done in a long series of PRs, part by part.
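
    For the first step, pinning the version might look like this (a minimal sketch, assuming a setuptools-based setup.py):

    # setup.py (sketch)
    from setuptools import setup

    setup(
        name='aiohttp',
        # Refuse to install on interpreters older than the chosen minimum.
        python_requires='>=3.5.3',
        # ...
    )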

    BTW, at the end I hope to get a minor performance increase :)

    meta outdated 
    opened by asvetlov 62
  • Fails to build on Python 3.11 - longintrepr.h: No such file or directory

    Fails to build on Python 3.11 - longintrepr.h: No such file or directory

    Describe the bug

    Because of https://github.com/python/cpython/pull/28968, the build breaks on Python 3.11; Cython fixed it in https://github.com/cython/cython/pull/4428, and the fix is released with 0.29.5.

    To Reproduce

    pip install aiohttp

    Expected behavior

    To build the wheel.

    Logs/tracebacks

    Failed to build aiohttp yarl
      error: subprocess-exited-with-error
    
      × Building wheel for aiohttp (pyproject.toml) did not run successfully.
      │ exit code: 1
      ╰─> [31 lines of output]
          *********************
          * Accelerated build *
          *********************
          running bdist_wheel
          running build
          running build_py
          running egg_info
          writing aiohttp.egg-info/PKG-INFO
          writing dependency_links to aiohttp.egg-info/dependency_links.txt
          writing requirements to aiohttp.egg-info/requires.txt
          writing top-level names to aiohttp.egg-info/top_level.txt
          reading manifest file 'aiohttp.egg-info/SOURCES.txt'
          reading manifest template 'MANIFEST.in'
          warning: no files found matching 'aiohttp' anywhere in distribution
          warning: no previously-included files matching '*.pyc' found anywhere in distribution
          warning: no previously-included files matching '*.pyd' found anywhere in distribution
          warning: no previously-included files matching '*.so' found anywhere in distribution
          warning: no previously-included files matching '*.lib' found anywhere in distribution
          warning: no previously-included files matching '*.dll' found anywhere in distribution
          warning: no previously-included files matching '*.a' found anywhere in distribution
          warning: no previously-included files matching '*.obj' found anywhere in distribution
          warning: no previously-included files found matching 'aiohttp/*.html'
          no previously-included directories found matching 'docs/_build'
          adding license file 'LICENSE.txt'
          running build_ext
          building 'aiohttp._websocket' extension
          aiohttp/_websocket.c:198:12: fatal error: longintrepr.h: No such file or directory
            198 |   #include "longintrepr.h"
                |            ^~~~~~~~~~~~~~~
          compilation terminated.
          error: command '/usr/bin/gcc' failed with exit code 1
          [end of output]
    
    

    Python Version

    3.11
    

    aiohttp Version

    latest
    

    multidict Version

    n/a
    

    yarl Version

    latest
    

    OS

    Any

    Related component

    Server, Client

    Additional context

    No response

    Code of Conduct

    • [X] I agree to follow the aio-libs Code of Conduct
    bug 
    opened by gaborbernat 53
  • aiohttp ignoring SSL_CERT_DIR and/or SSL_CERT_FILE environment vars. Results in [SSL: CERTIFICATE_VERIFY_FAILED]

    aiohttp ignoring SSL_CERT_DIR and/or SSL_CERT_FILE environment vars. Results in [SSL: CERTIFICATE_VERIFY_FAILED]

    Long story short

    The CA file works with cURL and Python Requests, but not with aiohttp, when using the SSL_CERT_DIR and/or SSL_CERT_FILE environment variables.

    Our environment uses its own CA root to decode/encode HTTPS API requests/responses, providing a short-lived cache that prevents excessive external requests.

    The environment has the following set:

    $ (set -o posix; set) | egrep 'SSL|_CA'
    CURL_CA_BUNDLE=/home/creslin/poo/freqcache/cert/ca.pem
    REQUESTS_CA_BUNDLE=/home/creslin/poo/freqcache/cert/ca.pem
    SSL_CERT_DIR=/home/creslin/poo/freqcache/cert/
    SSL_CERT_FILE=/home/creslin/poo/freqcache/cert/ca.pem
    

    The ca.pem can be successfully used by cURL, with both a positive and a negative test shown:

    curl --cacert /home/creslin/poo/freqcache/cert/ca.pem https://api.binance.com/api/v1/time
    {"serverTime":1533719563552}
    
    curl --cacert /home/creslin/NODIRHERE/ca.pem https://api.binance.com/api/v1/time
    curl: (77) error setting certificate verify locations:
      CAfile: /home/creslin/NODIRHERE/ca.pem
      CApath: /etc/ssl/certs
    

    A simple Python requests script, req.py, also works as expected for positive and negative tests:

    cat req.py 
    import requests
    req=requests.get('https://api.binance.com/api/v1/time', verify=True)
    print(req.content)
    
    CURL_CA_BUNDLE=/home/creslin/poo/freqcache/cert/ca.pem
    REQUESTS_CA_BUNDLE=/home/creslin/poo/freqcache/cert/ca.pem
    SSL_CERT_DIR=/home/creslin/poo/freqcache/cert/
    SSL_CERT_FILE=/home/creslin/poo/freqcache/cert/ca.pem
    
    python3 req.py 
    b'{"serverTime":1533720141278}'
    
    CURL_CA_BUNDLE=/
    REQUESTS_CA_BUNDLE=/
    SSL_CERT_DIR=/
    SSL_CERT_FILE=/
    
    python3 req.py 
    Traceback (most recent call last):
      File "/home/creslin/freqt .......
    
    ..... ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:833)
    

    Using async/aiohttp, [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:833) is always returned. The environment settings pointing to the ca.pem, shown to work for both cURL and requests, are seemingly ignored:

    CURL_CA_BUNDLE=/home/creslin/poo/freqcache/cert/ca.pem
    REQUESTS_CA_BUNDLE=/home/creslin/poo/freqcache/cert/ca.pem
    SSL_CERT_DIR=/home/creslin/poo/freqcache/cert/
    SSL_CERT_FILE=/home/creslin/poo/freqcache/cert/ca.pem
    

    I have the test script a.py:

    cat a.py 
    import aiohttp
    import ssl
    import asyncio
    import requests
    
    print("\n requests.certs.where", requests.certs.where())
    print("\n ssl version", ssl.OPENSSL_VERSION)
    print("\n ssl Paths", ssl.get_default_verify_paths() ,"\n")
    f = open('/home/creslin/poo/freqcache/cert/ca.crt', 'r') # check perms are ok
    f.close()
    
    async def main():
        session = aiohttp.ClientSession()
        async with session.get('https://api.binance.com/api/v1/time') as response:
            print(await response.text())
        await session.close()
    
    if __name__ == "__main__":
        loop = asyncio.get_event_loop()
        loop.run_until_complete(main())
    

    This always produces the failure; output in full:

     requests.certs.where /home/creslin/freqtrade/freqtrade_mp/freqtrade_technical_jul29/freqtrade/.env/lib/python3.6/site-packages/certifi/cacert.pem
    
     ssl version OpenSSL 1.1.0g  2 Nov 2017
    
     ssl Paths DefaultVerifyPaths(cafile=None, capath='/home/creslin/poo/freqcache/cert/', openssl_cafile_env='SSL_CERT_FILE', openssl_cafile='/usr/lib/ssl/cert.pem', openssl_capath_env='SSL_CERT_DIR', openssl_capath='/usr/lib/ssl/certs') 
    
    Traceback (most recent call last):
      File "/home/creslin/freqtrade/freqtrade_mp/freqtrade_technical_jul29/freqtrade/.env/lib/python3.6/site-packages/aiohttp/connector.py", line 822, in _wrap_create_connection
        return await self._loop.create_connection(*args, **kwargs)
      File "/usr/lib/python3.6/asyncio/base_events.py", line 804, in create_connection
        sock, protocol_factory, ssl, server_hostname)
      File "/usr/lib/python3.6/asyncio/base_events.py", line 830, in _create_connection_transport
        yield from waiter
      File "/usr/lib/python3.6/asyncio/sslproto.py", line 505, in data_received
        ssldata, appdata = self._sslpipe.feed_ssldata(data)
      File "/usr/lib/python3.6/asyncio/sslproto.py", line 201, in feed_ssldata
        self._sslobj.do_handshake()
      File "/usr/lib/python3.6/ssl.py", line 689, in do_handshake
        self._sslobj.do_handshake()
    ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:833)
    
    The above exception was the direct cause of the following exception:
    
    Traceback (most recent call last):
      File "a.py", line 20, in <module>
        loop.run_until_complete(main())
      File "/usr/lib/python3.6/asyncio/base_events.py", line 468, in run_until_complete
        return future.result()
      File "a.py", line 14, in main
        async with session.get('https://api.binance.com/api/v1/time') as response:
      File "/home/creslin/freqtrade/freqtrade_mp/freqtrade_technical_jul29/freqtrade/.env/lib/python3.6/site-packages/aiohttp/client.py", line 843, in __aenter__
        self._resp = await self._coro
      File "/home/creslin/freqtrade/freqtrade_mp/freqtrade_technical_jul29/freqtrade/.env/lib/python3.6/site-packages/aiohttp/client.py", line 366, in _request
        timeout=timeout
      File "/home/creslin/freqtrade/freqtrade_mp/freqtrade_technical_jul29/freqtrade/.env/lib/python3.6/site-packages/aiohttp/connector.py", line 445, in connect
        proto = await self._create_connection(req, traces, timeout)
      File "/home/creslin/freqtrade/freqtrade_mp/freqtrade_technical_jul29/freqtrade/.env/lib/python3.6/site-packages/aiohttp/connector.py", line 757, in _create_connection
        req, traces, timeout)
      File "/home/creslin/freqtrade/freqtrade_mp/freqtrade_technical_jul29/freqtrade/.env/lib/python3.6/site-packages/aiohttp/connector.py", line 879, in _create_direct_connection
        raise last_exc
      File "/home/creslin/freqtrade/freqtrade_mp/freqtrade_technical_jul29/freqtrade/.env/lib/python3.6/site-packages/aiohttp/connector.py", line 862, in _create_direct_connection
        req=req, client_error=client_error)
      File "/home/creslin/freqtrade/freqtrade_mp/freqtrade_technical_jul29/freqtrade/.env/lib/python3.6/site-packages/aiohttp/connector.py", line 827, in _wrap_create_connection
        raise ClientConnectorSSLError(req.connection_key, exc) from exc
    aiohttp.client_exceptions.ClientConnectorSSLError: Cannot connect to host api.binance.com:443 ssl:None [[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:833)]
    Unclosed client session
    client_session: <aiohttp.client.ClientSession object at 0x7fa6bf9de898>
    
    

    Expected behaviour

    aiohttp does not reject the server certificate.

    Actual behaviour

    SSL verification error

    Steps to reproduce

    Use your own CA root certificate to trust the HTTPS server.

    Your environment

    Ubuntu 18.04; Python 3.6.5; OpenSSL 1.1.0g 2 Nov 2017; aiohttp 3.3.2; requests 2.19.1; certifi 2018.4.16.
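
    A workaround (not part of the original report) is to skip the environment variables entirely and pass an explicit SSLContext per request via the ssl parameter:

    import asyncio
    import ssl
    import aiohttp

    async def main():
        # Load the custom CA bundle explicitly instead of relying on env vars.
        ssl_ctx = ssl.create_default_context(cafile='/home/creslin/poo/freqcache/cert/ca.pem')
        async with aiohttp.ClientSession() as session:
            async with session.get('https://api.binance.com/api/v1/time', ssl=ssl_ctx) as response:
                print(await response.text())

    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())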

    invalid question outdated 
    opened by creslinux 51
  • ASGI support?

    ASGI support?

    Long story short

    Currently, most asyncio-based web frameworks embed an HTTP server into the web framework itself. If HTTP/WS servers used the ASGI standard, it would be possible to run your app on different HTTP/WS servers.

    That is why I think it is important for aiohttp to add ASGI support.

    Expected behaviour

    I would like to write an aiohttp app:

    from aiohttp import web
    
    async def hello(request):
        return web.Response(text="Hello world!")
    
    app = web.Application()
    app.add_routes([web.get('/', hello)])
    

    And run it with any ASGI compatible server:

    > daphne app:app
    
    > http -b get :8080/
    Hello world!
    

    Actual behaviour

    > daphne app:app
    2018-04-02 12:48:51,097 ERROR    Traceback (most recent call last):
      File "daphne/http_protocol.py", line 158, in process
        "server": self.server_addr,
      File "daphne/server.py", line 184, in create_application
        application_instance = self.application(scope=scope)
    TypeError: __call__() got an unexpected keyword argument 'scope'
    
    127.0.0.1:41828 - - [02/Apr/2018:12:48:51] "GET /" 500 452
    
    > http -b get :8080/
    <html>
      <head>
        <title>500 Internal Server Error</title>
      </head>
      <body>
        <h1>500 Internal Server Error</h1>
        <p>Daphne HTTP processing error</p>
        <footer>Daphne</footer>
      </body>
    </html>
    

    ASGI resources

    https://github.com/django/asgiref/blob/master/specs/asgi.rst - ASGI specification.

    https://github.com/django/asgiref/blob/master/specs/www.rst - ASGI-HTTP and ASGI-WebSocket protocol specifications.

    Example ASGI app:

    import json
    
    def app(scope):
        async def channel(receive, send):
            message = await receive()
    
            if scope['method'] == 'POST':
                response = message
            else:
                response = scope
    
            await send({
                'type': 'http.response.start',
                'status': 200,
                'headers': [
                    [b'Content-Type', b'application/json'],
                ],
            })
            await send({
                'type': 'http.response.body',
                'body': json.dumps(response, default=bytes.decode).encode(),
            })
            await send({
                'type': 'http.disconnect',
            })
        return channel
    
    > daphne app:app
    2018-03-31 22:28:10,823 INFO     Starting server at tcp:port=8000:interface=127.0.0.1
    2018-03-31 22:28:10,824 INFO     HTTP/2 support enabled
    2018-03-31 22:28:10,824 INFO     Configuring endpoint tcp:port=8000:interface=127.0.0.1
    2018-03-31 22:28:10,825 INFO     Listening on TCP address 127.0.0.1:8000
    127.0.0.1:43436 - - [31/Mar/2018:22:28:17] "GET /" 200 347
    127.0.0.1:43440 - - [31/Mar/2018:22:28:22] "POST /" 200 43
    127.0.0.1:43446 - - [31/Mar/2018:22:28:42] "POST /" 200 54
    
    > http -b get :8000/
    {
        "type": "http"
        "http_version": "1.1",
        "method": "GET",
        "path": "/",
        "query_string": "",
        "root_path": "",
        "scheme": "http",
        "headers": [
            ["host", "localhost:8000"],
            ["user-agent", "HTTPie/0.9.9"],
            ["accept-encoding", "gzip, deflate"],
            ["accept", "*/*"],
            ["connection", "keep-alive"]
        ],
        "client": ["127.0.0.1", 43360],
        "server": ["127.0.0.1", 8000],
    }
    
    > http -b -f post :8000/ foo=bar
    {
        "body": "foo=bar",
        "type": "http.request"
    }
    
    > http -b -j post :8000/ foo=bar
    {
        "body": "{\"foo\": \"bar\"}",
        "type": "http.request"
    }
    
    outdated 
    opened by sirex 44
  • Degrading performance over time...

    Degrading performance over time...

    I wrote a quick AsyncScraper class below:

    import logging, datetime, time
    import aiohttp
    import asyncio
    import uvloop
    
    # asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())
    
    logger = logging.getLogger(__name__)
    logger.setLevel(logging.DEBUG)
    logging.basicConfig(format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    logger.addHandler(logging.StreamHandler())
    
    class AsyncScraper(object):
    	headers = {"User-Agent" : 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.131 Safari/537.36'}
    	def __init__(self, max_connections=1000, timeout=10):
    		self.max_connections = max_connections
    		self.timeout = timeout
    		
    	async def get_response(self, url, session):
    		with aiohttp.Timeout(timeout=self.timeout):
    			async with session.get(url, allow_redirects=True, headers=AsyncScraper.headers, timeout=self.timeout) as response:
    				try:
    					content = await response.text()
    					return {'error': "", 'status': response.status, 'url':url, 'content': content, 'timestamp': str(datetime.datetime.utcnow())}
    				except Exception as err:
    					return {'error': err, 'status': "", 'url':url, 'content': "", 'timestamp': str(datetime.datetime.utcnow())}
    				finally:
    					response.close()
    
    	def get_all(self, urls):
    		loop = asyncio.get_event_loop()
    		with aiohttp.ClientSession(loop=loop, connector=aiohttp.TCPConnector(keepalive_timeout=10, limit=self.max_connections, verify_ssl=False)) as session:
    			tasks = asyncio.gather(*[self.get_response(url, session) for url in urls], return_exceptions=True)
    			results = loop.run_until_complete(tasks)
    			return results
    
    def chunks(l, n):
    	for i in range(0, len(l), n):
    		yield l[i:i + n]
    
    def process_urls(urls, chunk_size=1000):
    	scraper = AsyncScraper()
    
    	results = []
    	t0 = time.time()
    	for i, urls_chunk in enumerate(chunks(sorted(set(urls)), chunk_size)):
    		t1 = time.time()
    		result = scraper.get_all(urls_chunk)
    		success_size = len( [_ for _ in result if ((isinstance(_, Exception) is False) and (_['status']==200)) ] )
    		results.extend(result)
    		logger.debug("batch {} => success: {} => iteration time: {}s =>  total time: {}s => total processed {}".format(i+1, success_size, time.time()-t1, time.time()-t0, len(results)))
    	return results
    

    and I've run into two main issues:

    1. If I pass in a flat list of URLs, say 100k (via the get_all method), I get flooded with errors:

      2017-04-17 15:50:53,541 - asyncio - ERROR - Fatal error on SSL transport
      protocol: <asyncio.sslproto.SSLProtocol object at 0x10d5439b0>
      transport: <_SelectorSocketTransport closing fd=612 read=idle write=<idle, bufsize=0>>
      Traceback (most recent call last):
        File "/Users/vgoklani/anaconda3/lib/python3.6/asyncio/sslproto.py", line 639, in _process_write_backlog
          ssldata = self._sslpipe.shutdown(self._finalize)
        File "/Users/vgoklani/anaconda3/lib/python3.6/asyncio/sslproto.py", line 151, in shutdown
          raise RuntimeError('shutdown in progress')
      RuntimeError: shutdown in progress

    2. I then batched the URLs in chunks of 1,000, and timed the response between batches. And I was clearly able to measure the performance decay over time (see below). Moreover, the number of errors increased over time... What am I doing wrong?

      iteration 0 done in 16.991s
      iteration 1 done in 39.376s
      iteration 2 done in 35.656s
      iteration 3 done in 19.716s
      iteration 4 done in 29.331s
      iteration 5 done in 19.708s
      iteration 6 done in 19.572s
      iteration 7 done in 29.907s
      iteration 8 done in 23.379s
      iteration 9 done in 21.762s
      iteration 10 done in 22.091s
      iteration 11 done in 22.940s
      iteration 12 done in 31.285s
      iteration 13 done in 24.549s
      iteration 14 done in 26.297s
      iteration 15 done in 23.816s
      iteration 16 done in 29.094s
      iteration 17 done in 24.885s
      iteration 18 done in 26.456s
      iteration 19 done in 27.412s
      iteration 20 done in 29.969s
      iteration 21 done in 28.503s
      iteration 22 done in 28.699s
      iteration 23 done in 31.570s
      iteration 26 done in 31.898s
      iteration 27 done in 33.553s
      iteration 28 done in 34.022s
      iteration 29 done in 33.866s
      iteration 30 done in 36.351s
      iteration 31 done in 40.060s
      iteration 32 done in 35.523s
      iteration 33 done in 36.607s
      iteration 34 done in 36.325s
      iteration 35 done in 38.425s
      iteration 36 done in 39.106s
      iteration 37 done in 38.972s
      iteration 38 done in 39.845s
      iteration 39 done in 40.393s
      iteration 40 done in 40.734s
      iteration 41 done in 47.799s
      iteration 42 done in 43.070s
      iteration 43 done in 43.365s
      iteration 44 done in 42.081s
      iteration 45 done in 44.118s
      iteration 46 done in 44.955s
      iteration 47 done in 45.400s
      iteration 48 done in 45.987s
      iteration 49 done in 46.041s
      iteration 50 done in 45.899s
      iteration 51 done in 49.008s
      iteration 52 done in 49.544s
      iteration 53 done in 55.432s
      iteration 54 done in 52.590s
      iteration 55 done in 50.185s
      iteration 56 done in 52.858s
      iteration 57 done in 52.698s
      iteration 58 done in 53.048s
      iteration 59 done in 54.120s
      iteration 60 done in 54.151s
      iteration 61 done in 55.465s
      iteration 62 done in 56.889s
      iteration 63 done in 56.967s
      iteration 64 done in 57.690s
      iteration 65 done in 57.052s
      iteration 66 done in 67.214s
      iteration 67 done in 58.457s
      iteration 68 done in 60.882s
      iteration 69 done in 58.440s
      iteration 70 done in 60.755s
      iteration 71 done in 58.043s
      iteration 72 done in 65.076s
      iteration 73 done in 63.371s
      iteration 74 done in 62.800s
      iteration 75 done in 62.419s
      iteration 76 done in 61.376s
      iteration 77 done in 63.164s
      iteration 78 done in 65.443s
      iteration 79 done in 64.616s
      iteration 80 done in 69.544s
      iteration 81 done in 68.226s
      iteration 82 done in 78.050s
      iteration 83 done in 67.871s
      iteration 84 done in 69.780s
      iteration 85 done in 67.812s
      iteration 86 done in 68.895s
      iteration 87 done in 71.086s
      iteration 88 done in 68.809s
      iteration 89 done in 70.945s
      iteration 90 done in 72.760s
      iteration 91 done in 71.773s
      iteration 92 done in 72.522s

    The time here corresponds to the iteration time to process 1,000 URLs. Please advise. Thanks
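
    One common mitigation (not from the original thread) is to reuse a single session for the whole run and bound concurrency with an asyncio.Semaphore instead of re-creating sessions per chunk; a minimal sketch, assuming a modern aiohttp 3.x / Python 3.7+ environment:

    import asyncio
    import aiohttp

    async def fetch(url, session, sem):
        # The semaphore caps the number of in-flight requests.
        async with sem:
            async with session.get(url) as response:
                return {'status': response.status, 'url': url}

    async def get_all(urls, max_connections=100):
        sem = asyncio.Semaphore(max_connections)
        timeout = aiohttp.ClientTimeout(total=60)
        async with aiohttp.ClientSession(timeout=timeout) as session:
            return await asyncio.gather(
                *(fetch(url, session, sem) for url in urls),
                return_exceptions=True)

    results = asyncio.run(get_all(['http://python.org'] * 10))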

    outdated 
    opened by vgoklani 44
  • Memory leak in request

    Memory leak in request

    Hi all,

    Since I upgraded to 0.14.4 (from 0.9.0) I am experiencing memory leaks in a Dropbox-API longpoller. It is a single process that spawns a few thousand greenlets. Each greenlet performs a request() that blocks for 30 seconds, then parses the response and dies; then a new greenlet is spawned.

    I am running on Python 3.4.0, Ubuntu 14.04. I use the connection pool feature, passing the same connector singleton to each .request() call.

    I played with tracemalloc, dumping an <N>.dump stat file every minute, and found that the response parser instances keep increasing in number (look at the third line of each stat):

    root@9330490eafc9:/src# python3 -c "import pprint; import tracemalloc; pprint.pprint(tracemalloc.Snapshot.load('5.dump').statistics('lineno')[:3])"
    [<Statistic traceback=<Traceback (<Frame filename='/usr/lib/python3.4/ssl.py' lineno=648>,)> size=6130540 count=82650>,
     <Statistic traceback=<Traceback (<Frame filename='<frozen importlib._bootstrap>' lineno=656>,)> size=3679906 count=31688>,
     <Statistic traceback=<Traceback (<Frame filename='/usr/local/lib/python3.4/dist-packages/aiohttp/parsers.py' lineno=198>,)> size=2176408 count=4437>]
    root@9330490eafc9:/src# 
    root@9330490eafc9:/src# 
    root@9330490eafc9:/src# python3 -c "import pprint; import tracemalloc; pprint.pprint(tracemalloc.Snapshot.load('6.dump').statistics('lineno')[:3])"
    [<Statistic traceback=<Traceback (<Frame filename='/usr/lib/python3.4/ssl.py' lineno=648>,)> size=6130476 count=82649>,
     <Statistic traceback=<Traceback (<Frame filename='<frozen importlib._bootstrap>' lineno=656>,)> size=3679906 count=31688>,
     <Statistic traceback=<Traceback (<Frame filename='/usr/local/lib/python3.4/dist-packages/aiohttp/parsers.py' lineno=198>,)> size=2199704 count=4463>]
    root@9330490eafc9:/src# 
    root@9330490eafc9:/src# 
    root@9330490eafc9:/src# python3 -c "import pprint; import tracemalloc; pprint.pprint(tracemalloc.Snapshot.load('7.dump').statistics('lineno')[:3])"
    [<Statistic traceback=<Traceback (<Frame filename='/usr/lib/python3.4/ssl.py' lineno=648>,)> size=6130476 count=82649>,
     <Statistic traceback=<Traceback (<Frame filename='<frozen importlib._bootstrap>' lineno=656>,)> size=3679906 count=31688>,
     <Statistic traceback=<Traceback (<Frame filename='/usr/local/lib/python3.4/dist-packages/aiohttp/parsers.py' lineno=198>,)> size=2231064 count=4498>]
    

    tracemalloc reports this stack trace:

    python3 -c "import pprint; import tracemalloc; pprint.pprint(tracemalloc.Snapshot.load('3.dump').filter_traces([tracemalloc.Filter(True, '*aiohttp/parsers.py')]).statistics('traceback')[0].traceback.format())"
    ['  File "/usr/local/lib/python3.4/dist-packages/aiohttp/parsers.py", line '
     '198',
     '    p = parser(output, self._buffer)',
     '  File "/usr/local/lib/python3.4/dist-packages/aiohttp/client.py", line 633',
     '    httpstream = self._reader.set_parser(self._response_parser)',
     '  File "/usr/local/lib/python3.4/dist-packages/aiohttp/client.py", line 108',
     '    yield from resp.start(conn, read_until_eof)',
     '  File "/src/xxxxxx/main.py", line 70',
     '    connector=LONGPOLL_CONNECTOR',
    

    Looks like something is keeping those parsers alive...

    Using force_close=True on the connector makes no difference.

    Then I tried calling gc.collect() after every single request, and things went much better: the leak disappeared completely. This means (though maybe it is an unrelated issue) the library creates more reference cycles than the cyclic GC can handle.

    It may well be my own bug, or maybe something to do with Python 3.4.0 itself. I'm still digging into it.
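
    For reference, the periodic-collection workaround described above looks like this (a diagnostic sketch, not a fix):

    import gc

    # Force a cycle collection after each request; gc.collect() returns the
    # number of unreachable objects found, so a steadily positive count
    # suggests reference cycles are being created faster than expected.
    unreachable = gc.collect()
    print('collected', unreachable, 'unreachable objects')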

    outdated 
    opened by mpaolini 44
  • Add json_response function

    Add json_response function

    Should be derived from aiohttp.web.Response.

    Constructor signature is: def __init__(self, data, *, status=200, reason=None, headers=None)

    Should serialize the data argument with json.dumps() and set the content type to application/json.

    People forget to specify the proper content type when sending JSON data.
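
    A minimal sketch of such a helper (written as a plain function just to illustrate the idea; the actual implementation should derive from aiohttp.web.Response as described above):

    import json
    from aiohttp import web

    def json_response(data, *, status=200, reason=None, headers=None):
        # Serialize the payload and advertise the correct content type.
        return web.Response(text=json.dumps(data), status=status,
                            reason=reason, headers=headers,
                            content_type='application/json')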

    good first issue outdated 
    opened by asvetlov 42
  • Added a configuration flag to enable cancelling the request handler task when the client connection closes.

    Added a configuration flag to enable cancelling the request handler task when the client connection closes.

    Related to #6719 and #6727. This adds a configuration flag to enable cancelling the request handler task when the client connection closes.

    After the changes in version 3.8.3, there is no longer any way to enable this behaviour. In our services we want to handle protocol-level errors, for example cancelling the execution of a heavy query in the DBMS if the user's connection is broken.

    For now I created this PR in order to discuss my solution; of course, if everything looks good I will add tests, a changelog entry, etc.

    I guess this breakage can be solved using a configuration flag that is passed to the Server instance.

    Of course AppRunner and SiteRunner can pass this through **kwargs too.
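
    Usage might look like this (a sketch; handler_cancellation is the name this option eventually shipped under in aiohttp 3.9, and the flag in this PR may be spelled differently):

    from aiohttp import web

    async def handle(request):
        # A long-running handler that we want cancelled when the client leaves.
        return web.Response(text='done')

    app = web.Application()
    app.add_routes([web.get('/', handle)])
    # Cancel the handler task as soon as the client drops the connection.
    web.run_app(app, handler_cancellation=True)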

    Related issue number #6719

    Checklist

    • [ ] I think the code is well written
    • [ ] Unit tests for the changes exist
    • [ ] Documentation reflects the changes
    • [ ] If you provide code modification, please add yourself to CONTRIBUTORS.txt
      • The format is <Name> <Surname>.
      • Please keep alphabetical order, the file is sorted by names.
    • [ ] Add a new news fragment into the CHANGES folder
      • name it <issue_id>.<type> for example (588.bugfix)
      • if you don't have an issue_id change it to the pr id after creating the pr
      • ensure type is one of the following:
        • .feature: Signifying a new feature.
        • .bugfix: Signifying a bug fix.
        • .doc: Signifying a documentation improvement.
        • .removal: Signifying a deprecation or removal of public API.
        • .misc: A ticket has been closed, but it is not of interest to users.
      • Make sure to use full sentences with correct case and punctuation, for example: "Fix issue with non-ascii contents in doctest text files."
    bot:chronographer:provided backport-3.9 
    opened by mosquito 41
  • ssl.SSLError: [SSL: KRB5_S_INIT] application data after close notify (_ssl.c:2605)

    ssl.SSLError: [SSL: KRB5_S_INIT] application data after close notify (_ssl.c:2605)

    The following very simple aiohttp client:

    #!/usr/bin/env python3
    
    import aiohttp
    import asyncio
    
    async def fetch(session, url):
        async with session.get(url) as response:
            print("%s launched" % url)
            return response
    
    async def main():
        async with aiohttp.ClientSession() as session:
            python = await fetch(session, 'https://python.org')
            print("Python: %s" % python.status)
            
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())
    

    produces the following exception:

    https://python.org launched
    Python: 200
    SSL error in data received
    protocol: <asyncio.sslproto.SSLProtocol object at 0x7fdec8d42208>
    transport: <_SelectorSocketTransport fd=8 read=polling write=<idle, bufsize=0>>
    Traceback (most recent call last):
      File "/usr/lib/python3.7/asyncio/sslproto.py", line 526, in data_received
        ssldata, appdata = self._sslpipe.feed_ssldata(data)
      File "/usr/lib/python3.7/asyncio/sslproto.py", line 207, in feed_ssldata
        self._sslobj.unwrap()
      File "/usr/lib/python3.7/ssl.py", line 767, in unwrap
        return self._sslobj.shutdown()
    ssl.SSLError: [SSL: KRB5_S_INIT] application data after close notify (_ssl.c:2605)
    

    I noticed bug #3477, but it is closed and the problem is still there (I have the latest version from pip).

    % python --version
    Python 3.7.2
    
    % pip show aiohttp
    Name: aiohttp
    Version: 3.5.4
    Summary: Async http client/server framework (asyncio)
    Home-page: https://github.com/aio-libs/aiohttp
    Author: Nikolay Kim
    Author-email: [email protected]
    License: Apache 2
    Location: /usr/lib/python3.7/site-packages
    Requires: chardet, multidict, attrs, async-timeout, yarl
    Required-by: 
    
    bug 
    opened by bortzmeyer 40
  • replace http parser?

    replace http parser?

    @asvetlov should we replace http parser with https://github.com/MagicStack/httptools? looks good http://magic.io/blog/uvloop-make-python-networking-great-again/

    outdated 
    opened by fafhrd91 38
  • docs syntax highlighting missing

    docs syntax highlighting missing

    Starting with 0.18.0, the syntax highlighting for all but the first code block on any page vanished.

    It seems some change in Read the Docs, docutils, or Sphinx made .. highlight:: python not work properly.

    And index.rst doesn't have that directive, so the autodetection magic (or whatever made it work before) is also gone.

    outdated 
    opened by flying-sheep 38
  • remove import aliases

    remove import aliases

    What do these changes do?

    Clean up imports by removing import aliases

    Are there changes in behavior for the user?

    No

    Related issue number

    Checklist

    • [x] I think the code is well written
    • [x] Unit tests for the changes exist
    • [x] Documentation reflects the changes
    • [x] If you provide code modification, please add yourself to CONTRIBUTORS.txt
      • The format is <Name> <Surname>.
      • Please keep alphabetical order, the file is sorted by names.
    • [x] Add a new news fragment into the CHANGES folder
      • name it <issue_id>.<type> for example (588.bugfix)
      • if you don't have an issue_id change it to the pr id after creating the pr
      • ensure type is one of the following:
        • .feature: Signifying a new feature.
        • .bugfix: Signifying a bug fix.
        • .doc: Signifying a documentation improvement.
        • .removal: Signifying a deprecation or removal of public API.
        • .misc: A ticket has been closed, but it is not of interest to users.
      • Make sure to use full sentences with correct case and punctuation, for example: "Fix issue with non-ascii contents in doctest text files."
    bot:chronographer:provided 
    opened by dtrifiro 2
  • Add response proxy headers to ClientResponse

    Add response proxy headers to ClientResponse

    What do these changes do?

    Add to the ClientResponse the headers and raw headers of the underlying CONNECT call in a proxied HTTPS request.

    The headers and raw headers from the response are added to the ResponseHandler and later when processing the request are added to the response.

    Are there changes in behavior for the user?

    No.

    Related issue number

    Closes #6078

    Checklist

    • [x] I think the code is well written
    • [x] Unit tests for the changes exist
    • [x] Documentation reflects the changes
    • [x] If you provide code modification, please add yourself to CONTRIBUTORS.txt
      • The format is <Name> <Surname>.
      • Please keep alphabetical order, the file is sorted by names.
    • [x] Add a new news fragment into the CHANGES folder
      • name it <issue_id>.<type> for example (588.bugfix)
      • if you don't have an issue_id change it to the pr id after creating the pr
      • ensure type is one of the following:
        • .feature: Signifying a new feature.
        • .bugfix: Signifying a bug fix.
        • .doc: Signifying a documentation improvement.
        • .removal: Signifying a deprecation or removal of public API.
        • .misc: A ticket has been closed, but it is not of interest to users.
      • Make sure to use full sentences with correct case and punctuation, for example: "Fix issue with non-ascii contents in doctest text files."
    bot:chronographer:provided 
    opened by galaxyfeeder 2
  • client: fix chunked upload timeouts with sock_read

    client: fix chunked upload timeouts with sock_read

    What do these changes do?

    Prevent the timeout callback from firing by calling data_received after each chunk has been written: this reschedules the timeout and prevents the protocol from timing out while data is being sent.

    Are there changes in behavior for the user?

    No

    Related issue number

    #7149

    Checklist

    • [x] I think the code is well written
    • [x] Unit tests for the changes exist
    • [x] Documentation reflects the changes
    • [x] If you provide code modification, please add yourself to CONTRIBUTORS.txt
      • The format is <Name> <Surname>.
      • Please keep alphabetical order, the file is sorted by names.
    • [x] Add a new news fragment into the CHANGES folder
      • name it <issue_id>.<type> for example (588.bugfix)
      • if you don't have an issue_id change it to the pr id after creating the pr
      • ensure type is one of the following:
        • .feature: Signifying a new feature.
        • .bugfix: Signifying a bug fix.
        • .doc: Signifying a documentation improvement.
        • .removal: Signifying a deprecation or removal of public API.
        • .misc: A ticket has been closed, but it is not of interest to users.
      • Make sure to use full sentences with correct case and punctuation, for example: "Fix issue with non-ascii contents in doctest text files."
    bot:chronographer:provided 
    opened by dtrifiro 0
  • client: unexpected timeouts with chunked uploads when setting sock_read timeout

    client: unexpected timeouts with chunked uploads when setting sock_read timeout

    Describe the bug

    Uploading large files using put/post with chunked encoding AND setting sock_read timeout results in an unexpected timeout error.

    Traceback is provided below.

    To Reproduce

    Server code (aiohttp):

    import asyncio
    import aiohttp
    import aiohttp.web  # needed: `import aiohttp` alone does not expose aiohttp.web
    
    PORT = 8472
    HOST = "localhost"
    
    
    async def handler(request: aiohttp.web.Request) -> aiohttp.web.StreamResponse:
        loop = asyncio.get_running_loop()
        print(f"[{loop.time()}] got request")
    
        response = aiohttp.web.StreamResponse()
        response.enable_chunked_encoding()
        writer = await response.prepare(request)
    
        read_data = 0
        print(f"[{loop.time()}] started streaming body")
    
        async for data, _ in request.content.iter_chunks():
            read_data += len(data)
            print(f"[{loop.time()}] {read_data=}", end="\r")
            await asyncio.sleep(1e-4)    # NOTE: tweaking this and/or the above print, along with `sock_read` in the client might yield different results (successful upload instead of timeout)
    
        await writer.write_eof()
        print(f"\n[{loop.time()}] finished")
    
        print(f"[{loop.time()}] finished reading body {read_data=}")
        return response
    
    
    async def main():
        """ listens and streams the body of a chunked transfer upload """
        loop = asyncio.get_event_loop()
    
        server = aiohttp.web.Server(handler)
        await loop.create_server(server, HOST, PORT)
        print(f"Listening on http://{HOST}:{PORT}")
    
        try:
            while True:
                await asyncio.sleep(1)
        except KeyboardInterrupt:
            print("Shutting down")
            await server.shutdown(10)
    
    
    if __name__ == "__main__":
        asyncio.run(main())
    

    Client code:

    import asyncio
    import aiohttp
    
    PORT = 8472
    HOST = "localhost"
    
    
    SOCK_READ_TIMEOUT = 5 # tweaking this and/or the sleep in the server implementation might yield different results
    
    
    async def gen_chunks(
        payload_size: int,
        chunk_size: int = 0x1000,  # 4k
    ):
        processed = 0
        while processed < payload_size:
            if (processed + chunk_size) > payload_size:
                # final chunk
                chunk_size = payload_size - processed
            yield b"Z" * chunk_size
            processed += chunk_size
    
    
    async def main(payload_size: int, method: str = "put", **kwargs):
        url = f"http://{HOST}:{PORT}"
        remote_path = "file"
    
        timeout = aiohttp.ClientTimeout(
            total=None,
            connect=0,
            sock_connect=0,
            sock_read=SOCK_READ_TIMEOUT,
        )
        async with aiohttp.ClientSession(timeout=timeout) as session:
            meth = getattr(session, method)
            async with meth(
                f"{url}/{remote_path}",
                data=gen_chunks(payload_size),
                **kwargs,
            ) as resp:
                resp.raise_for_status()
                print(f"{resp.status=}, {await resp.read()}")
    
    
    if __name__ == "__main__":
        payload_1gb = 1024 * 0x100000
    
        asyncio.run(main(payload_1gb))
    

    Expected behavior

    Upload should succeed with no timeout.

    In particular, the ServerTimeoutError exception is raised because we're waiting to read from the protocol while StreamWriter is still sending data.

    When calling _request, we call conn.protocol.set_response_params(..., read_timeout=real_timeout.sock_read, ...), which sets up a timeout handler callback that fires in real_timeout.sock_read seconds, although this is rescheduled whenever data is received (see ResponseHandler.data_received and ResponseHandler._reschedule_timeout).

    Since nothing will be read from the ResponseHandler until all of the chunks are sent to the server, any upload that takes longer than sock_read seconds will fail with a timeout.

    A simple workaround for this is to reschedule the read timer after each chunk is sent through StreamWriter, something like this:

    diff --git a/aiohttp/http_writer.py b/aiohttp/http_writer.py
    index db3d6a04..a5b241d8 100644
    --- a/aiohttp/http_writer.py
    +++ b/aiohttp/http_writer.py
    @@ -114,6 +114,7 @@ class StreamWriter(AbstractStreamWriter):
                     chunk = chunk_len_pre + chunk + b"\r\n"
     
                 self._write(chunk)
    +            self.protocol._reschedule_timeout()
     
                 if self.buffer_size > LIMIT and drain:
                     self.buffer_size = 0
    

    Or, if we wish to avoid calling _reschedule_timeout directly:

    diff --git a/aiohttp/http_writer.py b/aiohttp/http_writer.py
    index db3d6a04..7a274c56 100644
    --- a/aiohttp/http_writer.py
    +++ b/aiohttp/http_writer.py
    @@ -114,6 +114,7 @@ class StreamWriter(AbstractStreamWriter):
                     chunk = chunk_len_pre + chunk + b"\r\n"
     
                 self._write(chunk)
    +            self.protocol.data_received(b"")
     
                 if self.buffer_size > LIMIT and drain:
                     self.buffer_size = 0
    

    Logs/tracebacks

    Traceback (most recent call last):
      File "aiohttp/timeouts_investigation.py", line 291, in <module>
        asyncio.run(main_client(payload_size))
      File "/usr/lib/python3.10/asyncio/runners.py", line 44, in run
        return loop.run_until_complete(main)
      File "/usr/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
        return future.result()
      File "aiohttp/timeouts_investigation.py", line 140, in main_client
        async with meth(
      File "aiohttp/aiohttp/client.py", line 1141, in __aenter__
        self._resp = await self._coro
      File "aiohttp/aiohttp/client.py", line 560, in _request
        await resp.start(conn)
      File "aiohttp/aiohttp/client_reqrep.py", line 899, in start
        message, payload = await protocol.read()  # type: ignore[union-attr]
      File "aiohttp/aiohttp/streams.py", line 616, in read
        await self._waiter
    aiohttp.client_exceptions.ServerTimeoutError: Timeout on reading data from socket
    

    Python Version

    3.10.8
    

    aiohttp Version

    3.8.3
    

    multidict Version

    5.2.0
    

    yarl Version

    1.8.1
    

    OS

    Linux

    Related component

    Client

    Additional context

    No response

    Code of Conduct

    • [X] I agree to follow the aio-libs Code of Conduct
    bug 
    opened by dtrifiro 3
  • Bump setuptools from 58.4.0 to 65.5.1 in /requirements

    Bump setuptools from 58.4.0 to 65.5.1 in /requirements

    Bumps setuptools from 58.4.0 to 65.5.1.

    Release notes

    Sourced from setuptools's releases. No release notes were provided for any of the listed versions (v65.5.1 down through v63.4.2; list truncated).

    Changelog

    Sourced from setuptools's changelog.

    v65.5.1

    Misc

    • #3638: Drop a test dependency on the mock package; always use unittest.mock (by hroncok)
    • #3659: Fixed REDoS vector in package_index.

    v65.5.0

    Changes

    • #3624: Fixed editable install for multi-module/no-package src-layout projects.
    • #3626: Minor refactorings to support distutils using stdlib logging module.

    Documentation changes

    • #3419: Updated the example version numbers to be compliant with PEP-440 on the "Specifying Your Project’s Version" page of the user guide.

    Misc

    • #3569: Improved information about conflicting entries in the current working directory and editable install (in documentation and as an informational warning).
    • #3576: Updated version of validate_pyproject.

    v65.4.1

    Misc

    • #3613: Fixed encoding errors in expand.StaticModule when system default encoding doesn't match expectations for source files.
    • #3617: Merge with pypa/distutils@6852b20 including fix for pypa/distutils#181.

    v65.4.0

    Changes

    v65.3.0

    ... (truncated)

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) You can disable automated security fix PRs for this repo from the Security Alerts page.
    dependencies python 
    opened by dependabot[bot] 0
  • Bump wheel from 0.37.0 to 0.38.1 in /requirements

    Bump wheel from 0.37.0 to 0.38.1 in /requirements

    Bumps wheel from 0.37.0 to 0.38.1.

    Changelog

    Sourced from wheel's changelog.

    Release Notes

    UNRELEASED

    • Updated vendored packaging to 22.0

    0.38.4 (2022-11-09)

    • Fixed PKG-INFO conversion in bdist_wheel mangling UTF-8 header values in METADATA (PR by Anderson Bravalheri)

    0.38.3 (2022-11-08)

    • Fixed install failure when used with --no-binary, reported on Ubuntu 20.04, by removing setup_requires from setup.cfg

    0.38.2 (2022-11-05)

    • Fixed regression introduced in v0.38.1 which broke parsing of wheel file names with multiple platform tags

    0.38.1 (2022-11-04)

    • Removed install dependency on setuptools
    • The future-proof fix in 0.36.0 for converting PyPy's SOABI into an abi tag was faulty. Fixed so that future changes in the SOABI will not change the tag.

    0.38.0 (2022-10-21)

    • Dropped support for Python < 3.7
    • Updated vendored packaging to 21.3
    • Replaced all uses of distutils with setuptools
    • The handling of license_files (including glob patterns and default values) is now delegated to setuptools>=57.0.0 (#466). The package dependencies were updated to reflect this change.
    • Fixed potential DoS attack via the WHEEL_INFO_RE regular expression
    • Fixed ValueError: ZIP does not support timestamps before 1980 when using SOURCE_DATE_EPOCH=0 or when on-disk timestamps are earlier than 1980-01-01. Such timestamps are now changed to the minimum value before packaging.

    0.37.1 (2021-12-22)

    • Fixed wheel pack duplicating the WHEEL contents when the build number has changed (#415)
    • Fixed parsing of file names containing commas in RECORD (PR by Hood Chatham)

    0.37.0 (2021-08-09)

    • Added official Python 3.10 support
    • Updated vendored packaging library to v20.9

    ... (truncated)

    Commits
    • 6f1608d Created a new release
    • cf8f5ef Moved news item from PR #484 to its proper place
    • 9ec2016 Removed install dependency on setuptools (#483)
    • 747e1f6 Fixed PyPy SOABI parsing (#484)
    • 7627548 [pre-commit.ci] pre-commit autoupdate (#480)
    • 7b9e8e1 Test on Python 3.11 final
    • a04dfef Updated the pypi-publish action
    • 94bb62c Fixed docs not building due to code style changes
    • d635664 Updated the codecov action to the latest version
    • fcb94cd Updated version to match the release
    • Additional commits viewable in compare view

    dependencies python 
    opened by dependabot[bot] 0