Asynchronous HTTP client/server framework for asyncio and Python

Overview

Async HTTP client/server framework


Key Features

  • Supports both client and server side of the HTTP protocol.
  • Supports both client and server WebSockets out of the box and avoids Callback Hell.
  • Provides a web server with middleware and pluggable routing.

Getting started

Client

To get something from the web:

import aiohttp
import asyncio

async def main():

    async with aiohttp.ClientSession() as session:
        async with session.get('http://python.org') as response:

            print("Status:", response.status)
            print("Content-type:", response.headers['content-type'])

            html = await response.text()
            print("Body:", html[:15], "...")

asyncio.run(main())

This prints:

Status: 200
Content-type: text/html; charset=utf-8
Body: <!doctype html> ...

Coming from requests? Read why we need so many lines.
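Posting data is similar. A minimal sketch of a JSON POST (the httpbin.org URL and the payload are purely illustrative):

import aiohttp
import asyncio

async def main():

    async with aiohttp.ClientSession() as session:
        # aiohttp serializes the dict and sets the JSON content type for us.
        async with session.post('https://httpbin.org/post', json={'key': 'value'}) as response:

            print("Status:", response.status)
            data = await response.json()
            print("Echoed JSON:", data.get("json"))

asyncio.run(main())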

Server

An example using a simple server:

# examples/server_simple.py
from aiohttp import web

async def handle(request):
    name = request.match_info.get('name', "Anonymous")
    text = "Hello, " + name
    return web.Response(text=text)

async def wshandle(request):
    ws = web.WebSocketResponse()
    await ws.prepare(request)

    async for msg in ws:
        if msg.type == web.WSMsgType.text:
            await ws.send_str("Hello, {}".format(msg.data))
        elif msg.type == web.WSMsgType.binary:
            await ws.send_bytes(msg.data)
        elif msg.type == web.WSMsgType.close:
            break

    return ws


app = web.Application()
app.add_routes([web.get('/', handle),
                web.get('/echo', wshandle),
                web.get('/{name}', handle)])

if __name__ == '__main__':
    web.run_app(app)
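
A matching WebSocket client for the /echo route above (a sketch; the localhost URL assumes the default run_app port 8080):

import aiohttp
import asyncio

async def main():
    async with aiohttp.ClientSession() as session:
        # Connect to the echo handler served by the example server.
        async with session.ws_connect('http://localhost:8080/echo') as ws:
            await ws.send_str('world')
            msg = await ws.receive()
            print(msg.data)  # "Hello, world"

asyncio.run(main())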

Documentation

https://aiohttp.readthedocs.io/

Demos

https://github.com/aio-libs/aiohttp-demos

External links

Feel free to make a Pull Request for adding your link to these pages!

Communication channels

aio-libs Discourse group: https://aio-libs.discourse.group

Gitter chat: https://gitter.im/aio-libs/Lobby

We support Stack Overflow. Please add the aiohttp tag to your question there.

Requirements

Optionally you may install the cChardet and aiodns libraries (highly recommended for the sake of speed).

License

aiohttp is offered under the Apache 2 license.

Keepsafe

The aiohttp community would like to thank Keepsafe (https://www.getkeepsafe.com) for its support in the early days of the project.

Source code

The latest developer version is available in a GitHub repository: https://github.com/aio-libs/aiohttp

Benchmarks

If you are interested in efficiency, the AsyncIO community maintains a list of benchmarks on the official wiki: https://github.com/python/asyncio/wiki/Benchmarks

Comments
  • aiohttp 3.0 release

    I'd like to name the next release 3.0; 3.0 means a major version bump.

    Let's drop Python 3.4 and use async/await everywhere. Another important question is which 3.5 patch release to cut off at.

    Honestly I want to support 3.5.3+ only: that release fixes a well-known ugly bug in asyncio.get_event_loop(). Debian stable has shipped with Python 3.5.3; I'm not sure about RHEL. All other distributions with faster release cycles support at least 3.5.3, or even 3.6.

    The transition should not be done all at once.

    1. First, pin the minimal required Python version in setup.py (see the sketch below).
    2. Translate the test suite to use async/await.
    3. Modify aiohttp itself to use the new syntax.

    Every bullet except number one could be done in a long series of PRs, part by part.

    BTW at the end I hope to get a minor performance increase :)
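
    A minimal sketch of what bullet 1 could look like (illustrative only, not the actual setup.py):

    # setup.py (sketch)
    from setuptools import setup

    setup(
        name='aiohttp',
        python_requires='>=3.5.3',  # refuse to install on unsupported Pythons
        # ... the existing arguments stay as they are ...
    )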

    meta outdated 
    opened by asvetlov 62
  • Fails to build on Python 3.11 - longintrepr.h: No such file or directory

    Describe the bug

    Because of https://github.com/python/cpython/pull/28968, Cython fixed this in https://github.com/cython/cython/pull/4428, and the fix is released with 0.29.5.

    To Reproduce

    pip install aiohttp

    Expected behavior

    To build the wheel.

    Logs/tracebacks

    Failed to build aiohttp yarl
      error: subprocess-exited-with-error
    
      × Building wheel for aiohttp (pyproject.toml) did not run successfully.
      │ exit code: 1
      ╰─> [31 lines of output]
          *********************
          * Accelerated build *
          *********************
          running bdist_wheel
          running build
          running build_py
          running egg_info
          writing aiohttp.egg-info/PKG-INFO
          writing dependency_links to aiohttp.egg-info/dependency_links.txt
          writing requirements to aiohttp.egg-info/requires.txt
          writing top-level names to aiohttp.egg-info/top_level.txt
          reading manifest file 'aiohttp.egg-info/SOURCES.txt'
          reading manifest template 'MANIFEST.in'
          warning: no files found matching 'aiohttp' anywhere in distribution
          warning: no previously-included files matching '*.pyc' found anywhere in distribution
          warning: no previously-included files matching '*.pyd' found anywhere in distribution
          warning: no previously-included files matching '*.so' found anywhere in distribution
          warning: no previously-included files matching '*.lib' found anywhere in distribution
          warning: no previously-included files matching '*.dll' found anywhere in distribution
          warning: no previously-included files matching '*.a' found anywhere in distribution
          warning: no previously-included files matching '*.obj' found anywhere in distribution
          warning: no previously-included files found matching 'aiohttp/*.html'
          no previously-included directories found matching 'docs/_build'
          adding license file 'LICENSE.txt'
          running build_ext
          building 'aiohttp._websocket' extension
          aiohttp/_websocket.c:198:12: fatal error: longintrepr.h: No such file or directory
            198 |   #include "longintrepr.h"
                |            ^~~~~~~~~~~~~~~
          compilation terminated.
          error: command '/usr/bin/gcc' failed with exit code 1
          [end of output]
    
    

    Python Version

    3.11
    

    aiohttp Version

    latest
    

    multidict Version

    n/a
    

    yarl Version

    latest
    

    OS

    Any

    Related component

    Server, Client

    Additional context

    No response

    Code of Conduct

    • [X] I agree to follow the aio-libs Code of Conduct
    bug 
    opened by gaborbernat 53
  • aiohttp ignoring SSL_CERT_DIR and or SSL_CERT_FILE environment vars. Results in [SSL: CERTIFICATE_VERIFY_FAILED]

    Long story short

    The CA file works with cURL and Python Requests, but not with aiohttp, when using the SSL_CERT_DIR and/or SSL_CERT_FILE environment variables.

    Our environment uses its own root CA to decode/encode HTTPS API requests/responses, providing a short-lived cache that prevents excessive external requests.

    The environment has the following set:

    $ (set -o posix; set) | egrep 'SSL|_CA'
    CURL_CA_BUNDLE=/home/creslin/poo/freqcache/cert/ca.pem
    REQUESTS_CA_BUNDLE=/home/creslin/poo/freqcache/cert/ca.pem
    SSL_CERT_DIR=/home/creslin/poo/freqcache/cert/
    SSL_CERT_FILE=/home/creslin/poo/freqcache/cert/ca.pem
    

    The ca.pem can be successfully used by cURL - with both a positive and negative test shown:

    curl --cacert /home/creslin/poo/freqcache/cert/ca.pem https://api.binance.com/api/v1/time
    {"serverTime":1533719563552}
    
    curl --cacert /home/creslin/NODIRHERE/ca.pem https://api.binance.com/api/v1/time
    curl: (77) error setting certificate verify locations:
      CAfile: /home/creslin/NODIRHERE/ca.pem
      CApath: /etc/ssl/certs
    

    A simple Python requests script, req.py, also works as expected, with positive and negative tests:

    cat req.py 
    import requests
    req=requests.get('https://api.binance.com/api/v1/time', verify=True)
    print(req.content)
    
    CURL_CA_BUNDLE=/home/creslin/poo/freqcache/cert/ca.pem
    REQUESTS_CA_BUNDLE=/home/creslin/poo/freqcache/cert/ca.pem
    SSL_CERT_DIR=/home/creslin/poo/freqcache/cert/
    SSL_CERT_FILE=/home/creslin/poo/freqcache/cert/ca.pem
    
    python3 req.py 
    b'{"serverTime":1533720141278}'
    
    CURL_CA_BUNDLE=/
    REQUESTS_CA_BUNDLE=/
    SSL_CERT_DIR=/
    SSL_CERT_FILE=/
    
    python3 req.py 
    Traceback (most recent call last):
      File "/home/creslin/freqt .......
    
    ..... ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:833)
    

    Using async/aiohttp, [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:833) is always returned. The environment settings pointing to the ca.pem, shown above to work for both cURL and requests, are seemingly ignored:

    CURL_CA_BUNDLE=/home/creslin/poo/freqcache/cert/ca.pem
    REQUESTS_CA_BUNDLE=/home/creslin/poo/freqcache/cert/ca.pem
    SSL_CERT_DIR=/home/creslin/poo/freqcache/cert/
    SSL_CERT_FILE=/home/creslin/poo/freqcache/cert/ca.pem
    

    I have the test script a.py as follows:

    cat a.py 
    import aiohttp
    import ssl
    import asyncio
    import requests
    
    print("\n requests.certs.where", requests.certs.where())
    print("\n ssl version", ssl.OPENSSL_VERSION)
    print("\n ssl Paths", ssl.get_default_verify_paths() ,"\n")
    f = open('/home/creslin/poo/freqcache/cert/ca.crt', 'r') # check perms are ok
    f.close()
    
    async def main():
        session = aiohttp.ClientSession()
        async with session.get('https://api.binance.com/api/v1/time') as response:
            print(await response.text())
        await session.close()
    
    if __name__ == "__main__":
        loop = asyncio.get_event_loop()
        loop.run_until_complete(main())
    

    This always produces the failure; output in full:

     requests.certs.where /home/creslin/freqtrade/freqtrade_mp/freqtrade_technical_jul29/freqtrade/.env/lib/python3.6/site-packages/certifi/cacert.pem
    
     ssl version OpenSSL 1.1.0g  2 Nov 2017
    
     ssl Paths DefaultVerifyPaths(cafile=None, capath='/home/creslin/poo/freqcache/cert/', openssl_cafile_env='SSL_CERT_FILE', openssl_cafile='/usr/lib/ssl/cert.pem', openssl_capath_env='SSL_CERT_DIR', openssl_capath='/usr/lib/ssl/certs') 
    
    Traceback (most recent call last):
      File "/home/creslin/freqtrade/freqtrade_mp/freqtrade_technical_jul29/freqtrade/.env/lib/python3.6/site-packages/aiohttp/connector.py", line 822, in _wrap_create_connection
        return await self._loop.create_connection(*args, **kwargs)
      File "/usr/lib/python3.6/asyncio/base_events.py", line 804, in create_connection
        sock, protocol_factory, ssl, server_hostname)
      File "/usr/lib/python3.6/asyncio/base_events.py", line 830, in _create_connection_transport
        yield from waiter
      File "/usr/lib/python3.6/asyncio/sslproto.py", line 505, in data_received
        ssldata, appdata = self._sslpipe.feed_ssldata(data)
      File "/usr/lib/python3.6/asyncio/sslproto.py", line 201, in feed_ssldata
        self._sslobj.do_handshake()
      File "/usr/lib/python3.6/ssl.py", line 689, in do_handshake
        self._sslobj.do_handshake()
    ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:833)
    
    The above exception was the direct cause of the following exception:
    
    Traceback (most recent call last):
      File "a.py", line 20, in <module>
        loop.run_until_complete(main())
      File "/usr/lib/python3.6/asyncio/base_events.py", line 468, in run_until_complete
        return future.result()
      File "a.py", line 14, in main
        async with session.get('https://api.binance.com/api/v1/time') as response:
      File "/home/creslin/freqtrade/freqtrade_mp/freqtrade_technical_jul29/freqtrade/.env/lib/python3.6/site-packages/aiohttp/client.py", line 843, in __aenter__
        self._resp = await self._coro
      File "/home/creslin/freqtrade/freqtrade_mp/freqtrade_technical_jul29/freqtrade/.env/lib/python3.6/site-packages/aiohttp/client.py", line 366, in _request
        timeout=timeout
      File "/home/creslin/freqtrade/freqtrade_mp/freqtrade_technical_jul29/freqtrade/.env/lib/python3.6/site-packages/aiohttp/connector.py", line 445, in connect
        proto = await self._create_connection(req, traces, timeout)
      File "/home/creslin/freqtrade/freqtrade_mp/freqtrade_technical_jul29/freqtrade/.env/lib/python3.6/site-packages/aiohttp/connector.py", line 757, in _create_connection
        req, traces, timeout)
      File "/home/creslin/freqtrade/freqtrade_mp/freqtrade_technical_jul29/freqtrade/.env/lib/python3.6/site-packages/aiohttp/connector.py", line 879, in _create_direct_connection
        raise last_exc
      File "/home/creslin/freqtrade/freqtrade_mp/freqtrade_technical_jul29/freqtrade/.env/lib/python3.6/site-packages/aiohttp/connector.py", line 862, in _create_direct_connection
        req=req, client_error=client_error)
      File "/home/creslin/freqtrade/freqtrade_mp/freqtrade_technical_jul29/freqtrade/.env/lib/python3.6/site-packages/aiohttp/connector.py", line 827, in _wrap_create_connection
        raise ClientConnectorSSLError(req.connection_key, exc) from exc
    aiohttp.client_exceptions.ClientConnectorSSLError: Cannot connect to host api.binance.com:443 ssl:None [[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:833)]
    Unclosed client session
    client_session: <aiohttp.client.ClientSession object at 0x7fa6bf9de898>
    
    

    Expected behaviour

    aiohttp does not reject server certificate.

    Actual behaviour

    SSL verification error

    Steps to reproduce

    Use your own root CA certificate to trust the HTTPS server.

    Your environment

    Ubuntu 18.04
    Python 3.6.5
    OpenSSL 1.1.0g 2 Nov 2017
    aiohttp 3.3.2
    requests 2.19.1
    certifi 2018.4.16
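
    A common workaround (a sketch, not something from the original report) is to build an ssl.SSLContext that loads the CA bundle explicitly and pass it to the request, instead of relying on SSL_CERT_FILE; asyncio.run assumes Python 3.7+:

    import asyncio
    import ssl
    import aiohttp

    async def main():
        # Load the custom CA bundle explicitly.
        ssl_ctx = ssl.create_default_context(cafile='/home/creslin/poo/freqcache/cert/ca.pem')
        async with aiohttp.ClientSession() as session:
            async with session.get('https://api.binance.com/api/v1/time', ssl=ssl_ctx) as resp:
                print(await resp.text())

    asyncio.run(main())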

    invalid question outdated 
    opened by creslinux 51
  • ASGI support?

    Long story short

    Currently, most asyncio-based web frameworks embed the HTTP server in the web framework itself. If HTTP/WS servers used the ASGI standard, it would be possible to run your app with different HTTP/WS servers.

    That is why I think it is important for aiohttp to add ASGI support.

    Expected behaviour

    I would like to write aiohttp app:

    from aiohttp import web
    
    async def hello(request):
        return web.Response(text="Hello world!")
    
    app = web.Application()
    app.add_routes([web.get('/', hello)])
    

    And run it with any ASGI compatible server:

    > daphne app:app
    
    > http -b get :8080/
    Hello world!
    

    Actual behaviour

    > daphne app:app
    2018-04-02 12:48:51,097 ERROR    Traceback (most recent call last):
      File "daphne/http_protocol.py", line 158, in process
        "server": self.server_addr,
      File "daphne/server.py", line 184, in create_application
        application_instance = self.application(scope=scope)
    TypeError: __call__() got an unexpected keyword argument 'scope'
    
    127.0.0.1:41828 - - [02/Apr/2018:12:48:51] "GET /" 500 452
    
    > http -b get :8080/
    <html>
      <head>
        <title>500 Internal Server Error</title>
      </head>
      <body>
        <h1>500 Internal Server Error</h1>
        <p>Daphne HTTP processing error</p>
        <footer>Daphne</footer>
      </body>
    </html>
    

    ASGI resources

    https://github.com/django/asgiref/blob/master/specs/asgi.rst - ASGI specification.

    https://github.com/django/asgiref/blob/master/specs/www.rst - ASGI-HTTP and ASGI-WebSocket protocol specifications.

    Example ASGI app:

    import json
    
    def app(scope):
        async def channel(receive, send):
            message = await receive()
    
            if scope['method'] == 'POST':
                response = message
            else:
                response = scope
    
            await send({
                'type': 'http.response.start',
                'status': 200,
                'headers': [
                    [b'Content-Type', b'application/json'],
                ],
            })
            await send({
                'type': 'http.response.body',
                'body': json.dumps(response, default=bytes.decode).encode(),
            })
            await send({
                'type': 'http.disconnect',
            })
        return channel
    
    > daphne app:app
    2018-03-31 22:28:10,823 INFO     Starting server at tcp:port=8000:interface=127.0.0.1
    2018-03-31 22:28:10,824 INFO     HTTP/2 support enabled
    2018-03-31 22:28:10,824 INFO     Configuring endpoint tcp:port=8000:interface=127.0.0.1
    2018-03-31 22:28:10,825 INFO     Listening on TCP address 127.0.0.1:8000
    127.0.0.1:43436 - - [31/Mar/2018:22:28:17] "GET /" 200 347
    127.0.0.1:43440 - - [31/Mar/2018:22:28:22] "POST /" 200 43
    127.0.0.1:43446 - - [31/Mar/2018:22:28:42] "POST /" 200 54
    
    > http -b get :8000/
    {
        "type": "http"
        "http_version": "1.1",
        "method": "GET",
        "path": "/",
        "query_string": "",
        "root_path": "",
        "scheme": "http",
        "headers": [
            ["host", "localhost:8000"],
            ["user-agent", "HTTPie/0.9.9"],
            ["accept-encoding", "gzip, deflate"],
            ["accept", "*/*"],
            ["connection", "keep-alive"]
        ],
        "client": ["127.0.0.1", 43360],
        "server": ["127.0.0.1", 8000],
    }
    
    > http -b -f post :8000/ foo=bar
    {
        "body": "foo=bar",
        "type": "http.request"
    }
    
    > http -b -j post :8000/ foo=bar
    {
        "body": "{\"foo\": \"bar\"}",
        "type": "http.request"
    }
    
    outdated 
    opened by sirex 44
  • Degrading performance over time...

    I wrote a quick AsyncScraper class below:

    import logging, datetime, time
    import aiohttp
    import asyncio
    import uvloop
    
    # asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())
    
    logger = logging.getLogger(__name__)
    logger.setLevel(logging.DEBUG)
    logging.basicConfig(format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    logger.addHandler(logging.StreamHandler())
    
    class AsyncScraper(object):
    	headers = {"User-Agent" : 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.131 Safari/537.36'}
    	def __init__(self, max_connections=1000, timeout=10):
    		self.max_connections = max_connections
    		self.timeout = timeout
    		
    	async def get_response(self, url, session):
    		with aiohttp.Timeout(timeout=self.timeout):
    			async with session.get(url, allow_redirects=True, headers=AsyncScraper.headers, timeout=self.timeout) as response:
    				try:
    					content = await response.text()
    					return {'error': "", 'status': response.status, 'url':url, 'content': content, 'timestamp': str(datetime.datetime.utcnow())}
    				except Exception as err:
    					return {'error': err, 'status': "", 'url':url, 'content': "", 'timestamp': str(datetime.datetime.utcnow())}
    				finally:
    					response.close()
    
    	def get_all(self, urls):
    		loop = asyncio.get_event_loop()
    		with aiohttp.ClientSession(loop=loop, connector=aiohttp.TCPConnector(keepalive_timeout=10, limit=self.max_connections, verify_ssl=False)) as session:
    			tasks = asyncio.gather(*[self.get_response(url, session) for url in urls], return_exceptions=True)
    			results = loop.run_until_complete(tasks)
    			return results
    
    def chunks(l, n):
    	for i in range(0, len(l), n):
    		yield l[i:i + n]
    
    def process_urls(urls, chunk_size=1000):
    	scraper = AsyncScraper()
    
    	results = []
    	t0 = time.time()
    	for i, urls_chunk in enumerate(chunks(sorted(set(urls)), chunk_size)):
    		t1 = time.time()
    		result = scraper.get_all(urls_chunk)
    		success_size = len( [_ for _ in result if ((isinstance(_, Exception) is False) and (_['status']==200)) ] )
    		results.extend(result)
    		logger.debug("batch {} => success: {} => iteration time: {}s =>  total time: {}s => total processed {}".format(i+1, success_size, time.time()-t1, time.time()-t0, len(results)))
    	return results
    

    and I've run into two main issues:

    1. If I pass in a flat list of URLs, say 100k (via the get_all method), I get flooded with errors:

      2017-04-17 15:50:53,541 - asyncio - ERROR - Fatal error on SSL transport
      protocol: <asyncio.sslproto.SSLProtocol object at 0x10d5439b0>
      transport: <_SelectorSocketTransport closing fd=612 read=idle write=<idle, bufsize=0>>
      Traceback (most recent call last):
        File "/Users/vgoklani/anaconda3/lib/python3.6/asyncio/sslproto.py", line 639, in _process_write_backlog
          ssldata = self._sslpipe.shutdown(self._finalize)
        File "/Users/vgoklani/anaconda3/lib/python3.6/asyncio/sslproto.py", line 151, in shutdown
          raise RuntimeError('shutdown in progress')
      RuntimeError: shutdown in progress

    2. I then batched the URLs in chunks of 1,000, and timed the response between batches. And I was clearly able to measure the performance decay over time (see below). Moreover, the number of errors increased over time... What am I doing wrong?

      iteration 0 done in 16.991s iteration 1 done in 39.376s iteration 2 done in 35.656s iteration 3 done in 19.716s iteration 4 done in 29.331s iteration 5 done in 19.708s iteration 6 done in 19.572s iteration 7 done in 29.907s iteration 8 done in 23.379s iteration 9 done in 21.762s iteration 10 done in 22.091s iteration 11 done in 22.940s iteration 12 done in 31.285s iteration 13 done in 24.549s iteration 14 done in 26.297s iteration 15 done in 23.816s iteration 16 done in 29.094s iteration 17 done in 24.885s iteration 18 done in 26.456s iteration 19 done in 27.412s iteration 20 done in 29.969s iteration 21 done in 28.503s iteration 22 done in 28.699s iteration 23 done in 31.570s iteration 26 done in 31.898s iteration 27 done in 33.553s iteration 28 done in 34.022s iteration 29 done in 33.866s iteration 30 done in 36.351s iteration 31 done in 40.060s iteration 32 done in 35.523s iteration 33 done in 36.607s iteration 34 done in 36.325s iteration 35 done in 38.425s iteration 36 done in 39.106s iteration 37 done in 38.972s iteration 38 done in 39.845s iteration 39 done in 40.393s iteration 40 done in 40.734s iteration 41 done in 47.799s iteration 42 done in 43.070s iteration 43 done in 43.365s iteration 44 done in 42.081s iteration 45 done in 44.118s iteration 46 done in 44.955s iteration 47 done in 45.400s iteration 48 done in 45.987s iteration 49 done in 46.041s iteration 50 done in 45.899s iteration 51 done in 49.008s iteration 52 done in 49.544s iteration 53 done in 55.432s iteration 54 done in 52.590s iteration 55 done in 50.185s iteration 56 done in 52.858s iteration 57 done in 52.698s iteration 58 done in 53.048s iteration 59 done in 54.120s iteration 60 done in 54.151s iteration 61 done in 55.465s iteration 62 done in 56.889s iteration 63 done in 56.967s iteration 64 done in 57.690s iteration 65 done in 57.052s iteration 66 done in 67.214s iteration 67 done in 58.457s iteration 68 done in 60.882s iteration 69 done in 58.440s iteration 70 done in 60.755s iteration 71 done in 58.043s iteration 72 done in 65.076s iteration 73 done in 63.371s iteration 74 done in 62.800s iteration 75 done in 62.419s iteration 76 done in 61.376s iteration 77 done in 63.164s iteration 78 done in 65.443s iteration 79 done in 64.616s iteration 80 done in 69.544s iteration 81 done in 68.226s iteration 82 done in 78.050s iteration 83 done in 67.871s iteration 84 done in 69.780s iteration 85 done in 67.812s iteration 86 done in 68.895s iteration 87 done in 71.086s iteration 88 done in 68.809s iteration 89 done in 70.945s iteration 90 done in 72.760s iteration 91 done in 71.773s iteration 92 done in 72.522s

    The time here corresponds to the iteration time to process 1,000 URLs. Please advise. Thanks
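
    One pattern that usually avoids this kind of decay (a sketch, not a confirmed fix for this report) is to reuse a single ClientSession for the whole run and bound concurrency with a semaphore rather than gathering an entire chunk at once:

    import asyncio
    import aiohttp

    async def fetch(session, sem, url):
        # The semaphore bounds how many requests are in flight at any moment.
        async with sem:
            try:
                async with session.get(url, timeout=aiohttp.ClientTimeout(total=10)) as resp:
                    return resp.status, await resp.text()
            except Exception as err:
                return None, err

    async def crawl(urls, max_connections=100):
        sem = asyncio.Semaphore(max_connections)
        connector = aiohttp.TCPConnector(limit=max_connections)
        # One session for the entire crawl instead of one per chunk.
        async with aiohttp.ClientSession(connector=connector) as session:
            return await asyncio.gather(*(fetch(session, sem, u) for u in urls))

    # results = asyncio.run(crawl(urls))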

    outdated 
    opened by vgoklani 44
  • Memory leak in request

    Hi all,

    Since I upgraded to 0.14.4 (from 0.9.0) I am experiencing memory leaks in a Dropbox-API longpoller. It is a single process that spawns a few thousand greenlets. Each greenlet performs a request() that blocks for 30 seconds, then parses the response and dies. Then a new greenlet is spawned.

    I am running on python 3.4.0, Ubuntu 14.04. I use the connection pool feature, passing the same connector singleton to each .request() call.

    I played with tracemalloc, dumping an <N>.dump stat file every minute, and found that the response parser instances keep increasing in number (look at the third line of each stat):

    root@9330490eafc9:/src# python3 -c "import pprint; import tracemalloc; pprint.pprint(tracemalloc.Snapshot.load('5.dump').statistics('lineno')[:3])"
    [<Statistic traceback=<Traceback (<Frame filename='/usr/lib/python3.4/ssl.py' lineno=648>,)> size=6130540 count=82650>,
     <Statistic traceback=<Traceback (<Frame filename='<frozen importlib._bootstrap>' lineno=656>,)> size=3679906 count=31688>,
     <Statistic traceback=<Traceback (<Frame filename='/usr/local/lib/python3.4/dist-packages/aiohttp/parsers.py' lineno=198>,)> size=2176408 count=4437>]
    root@9330490eafc9:/src# 
    root@9330490eafc9:/src# 
    root@9330490eafc9:/src# python3 -c "import pprint; import tracemalloc; pprint.pprint(tracemalloc.Snapshot.load('6.dump').statistics('lineno')[:3])"
    [<Statistic traceback=<Traceback (<Frame filename='/usr/lib/python3.4/ssl.py' lineno=648>,)> size=6130476 count=82649>,
     <Statistic traceback=<Traceback (<Frame filename='<frozen importlib._bootstrap>' lineno=656>,)> size=3679906 count=31688>,
     <Statistic traceback=<Traceback (<Frame filename='/usr/local/lib/python3.4/dist-packages/aiohttp/parsers.py' lineno=198>,)> size=2199704 count=4463>]
    root@9330490eafc9:/src# 
    root@9330490eafc9:/src# 
    root@9330490eafc9:/src# python3 -c "import pprint; import tracemalloc; pprint.pprint(tracemalloc.Snapshot.load('7.dump').statistics('lineno')[:3])"
    [<Statistic traceback=<Traceback (<Frame filename='/usr/lib/python3.4/ssl.py' lineno=648>,)> size=6130476 count=82649>,
     <Statistic traceback=<Traceback (<Frame filename='<frozen importlib._bootstrap>' lineno=656>,)> size=3679906 count=31688>,
     <Statistic traceback=<Traceback (<Frame filename='/usr/local/lib/python3.4/dist-packages/aiohttp/parsers.py' lineno=198>,)> size=2231064 count=4498>]
    

    tracemalloc reports this stack trace:

    python3 -c "import pprint; import tracemalloc; pprint.pprint(tracemalloc.Snapshot.load('3.dump').filter_traces([tracemalloc.Filter(True, '*aiohttp/parsers.py')]).statistics('traceback')[0].traceback.format())"
    ['  File "/usr/local/lib/python3.4/dist-packages/aiohttp/parsers.py", line '
     '198',
     '    p = parser(output, self._buffer)',
     '  File "/usr/local/lib/python3.4/dist-packages/aiohttp/client.py", line 633',
     '    httpstream = self._reader.set_parser(self._response_parser)',
     '  File "/usr/local/lib/python3.4/dist-packages/aiohttp/client.py", line 108',
     '    yield from resp.start(conn, read_until_eof)',
     '  File "/src/xxxxxx/main.py", line 70',
     '    connector=LONGPOLL_CONNECTOR',
    

    Looks like there is something keeping alive those parsers....

    Using force_close=True on the connector makes no difference.

    Then I tried calling gc.collect() after every single request, and things are going much better: the leak has disappeared completely. This means (maybe it is an unrelated issue) the library creates more reference cycles than the cyclic GC can handle.

    It may well be my own bug, or maybe something to do with Python 3.4.0 itself. I'm still digging into it.

    outdated 
    opened by mpaolini 44
  • Add json_response function

    Should be derived from aiohttp.web.Response.

    Constructor signature is: def __init__(self, data, *, status=200, reason=None, headers=None)

    Should serialize the data argument with json.dumps() and set the content type to application/json.

    People forget to specify the proper content type when sending JSON data.
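
    A minimal sketch of the requested helper (aiohttp later shipped web.json_response with a similar shape):

    import json
    from aiohttp import web

    def json_response(data, *, status=200, reason=None, headers=None):
        # Serialize the payload and force the JSON content type.
        return web.Response(
            text=json.dumps(data),
            status=status,
            reason=reason,
            headers=headers,
            content_type='application/json',
        )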

    good first issue outdated 
    opened by asvetlov 42
  • Added a configuration flag to enable request handler task cancellation when the client connection is closed.

    Related to #6719 and #6727. Adds a configuration flag to enable cancellation of the request handler task when the client connection is closed.

    After the changes in version 3.8.3, there is no longer any way to enable this behaviour. In our services, we want to handle protocol-level errors, for example cancelling the execution of a heavy DBMS query if the user's connection is broken.

    For now I have created this PR in order to discuss my solution; if the approach is acceptable I will of course add tests, a changelog entry, etc.

    I guess this can be solved using a configuration flag that is passed to the Server instance.

    Of course AppRunner and SiteRunner can pass this through **kwargs too.
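
    For readers on current releases: this behaviour eventually landed as a handler_cancellation flag in aiohttp 3.9 (flag name stated to the best of my knowledge); a sketch:

    from aiohttp import web

    app = web.Application()
    # Re-enable cancelling the handler task when the client drops the connection.
    web.run_app(app, handler_cancellation=True)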

    Related issue number #6719

    Checklist

    • [ ] I think the code is well written
    • [ ] Unit tests for the changes exist
    • [ ] Documentation reflects the changes
    • [ ] If you provide code modification, please add yourself to CONTRIBUTORS.txt
      • The format is <Name> <Surname>.
      • Please keep alphabetical order, the file is sorted by names.
    • [ ] Add a new news fragment into the CHANGES folder
      • name it <issue_id>.<type> for example (588.bugfix)
      • if you don't have an issue_id change it to the pr id after creating the pr
      • ensure type is one of the following:
        • .feature: Signifying a new feature.
        • .bugfix: Signifying a bug fix.
        • .doc: Signifying a documentation improvement.
        • .removal: Signifying a deprecation or removal of public API.
        • .misc: A ticket has been closed, but it is not of interest to users.
      • Make sure to use full sentences with correct case and punctuation, for example: "Fix issue with non-ascii contents in doctest text files."
    bot:chronographer:provided backport-3.9 
    opened by mosquito 41
  • ssl.SSLError: [SSL: KRB5_S_INIT] application data after close notify (_ssl.c:2605)

    The following very simple aiohttp client:

    #!/usr/bin/env python3
    
    import aiohttp
    import asyncio
    
    async def fetch(session, url):
        async with session.get(url) as response:
            print("%s launched" % url)
            return response
    
    async def main():
        async with aiohttp.ClientSession() as session:
            python = await fetch(session, 'https://python.org')
            print("Python: %s" % python.status)
            
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())
    

    produces the following exception:

    https://python.org launched
    Python: 200
    SSL error in data received
    protocol: <asyncio.sslproto.SSLProtocol object at 0x7fdec8d42208>
    transport: <_SelectorSocketTransport fd=8 read=polling write=<idle, bufsize=0>>
    Traceback (most recent call last):
      File "/usr/lib/python3.7/asyncio/sslproto.py", line 526, in data_received
        ssldata, appdata = self._sslpipe.feed_ssldata(data)
      File "/usr/lib/python3.7/asyncio/sslproto.py", line 207, in feed_ssldata
        self._sslobj.unwrap()
      File "/usr/lib/python3.7/ssl.py", line 767, in unwrap
        return self._sslobj.shutdown()
    ssl.SSLError: [SSL: KRB5_S_INIT] application data after close notify (_ssl.c:2605)
    

    I noticed bug #3477 but it is closed and the problem is still there (I have the latest pip version).

    % python --version
    Python 3.7.2
    
    % pip show aiohttp
    Name: aiohttp
    Version: 3.5.4
    Summary: Async http client/server framework (asyncio)
    Home-page: https://github.com/aio-libs/aiohttp
    Author: Nikolay Kim
    Author-email: [email protected]
    License: Apache 2
    Location: /usr/lib/python3.7/site-packages
    Requires: chardet, multidict, attrs, async-timeout, yarl
    Required-by: 
    
    bug 
    opened by bortzmeyer 40
  • replace http parser?

    @asvetlov should we replace the http parser with https://github.com/MagicStack/httptools? It looks good: http://magic.io/blog/uvloop-make-python-networking-great-again/

    outdated 
    opened by fafhrd91 38
  • docs syntax highlighting missing

    Starting with 0.18.0, the syntax highlighting for all but the first code block on any page vanished.

    It seems some change in Read the Docs, docutils or Sphinx made .. highlight:: python stop working properly.

    And index.rst doesn't have that directive, so the magic that auto-detected the language (or whatever made it work before) is also gone.

    outdated 
    opened by flying-sheep 38
  • Client streaming uploads: add a `Content-Length` header note to avoid streaming upload failure

    Describe the bug

    When I follow the doc (https://docs.aiohttp.org/en/stable/client_quickstart.html#streaming-uploads) to stream-upload my data, the upload always fails. When I add an extra Content-Length request header, the failure is gone, so I think the doc should note this problem.

    Additional context will show my code.
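
    For reference, the documented streaming-upload pattern passes an async generator as data; a hedged sketch with an explicit Content-Length (the URL and payload are illustrative):

    import asyncio
    import aiohttp

    BODY = b'hello streaming upload'

    async def chunks():
        # Any async generator can be used as a streaming request body.
        for i in range(0, len(BODY), 8):
            yield BODY[i:i + 8]

    async def main():
        async with aiohttp.ClientSession() as session:
            await session.post(
                'https://example.com/upload',
                data=chunks(),
                # Omitting this header is what triggers the failure described here.
                headers={'Content-Length': str(len(BODY))},
            )

    asyncio.run(main())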

    To Reproduce

    1. Get one response with the client
    2. Add extra data to the response data
    3. Use an async generator to send the request data

    Expected behavior

    When I omit the Content-Length header, the streaming upload fails.

    Logs/tracebacks

    No
    

    Python Version

    $ python --version
    Python 3.10.6
    

    aiohttp Version

    $ python -m pip show aiohttp
    Name: aiohttp
    Version: 3.8.3
    Summary: Async http client/server framework (asyncio)
    Home-page: https://github.com/aio-libs/aiohttp
    Author: 
    Author-email: 
    License: Apache 2
    Location: /home/lzy/Git/Work/cache_node_ngx_waf/.venv/lib/python3.10/site-packages
    Requires: aiosignal, async-timeout, attrs, charset-normalizer, frozenlist, multidict, yarl
    Required-by:
    

    multidict Version

    $ python -m pip show multidict
    Name: multidict
    Version: 6.0.3
    Summary: multidict implementation
    Home-page: https://github.com/aio-libs/multidict
    Author: Andrew Svetlov
    Author-email: [email protected]
    License: Apache 2
    Location: /home/lzy/Git/Work/cache_node_ngx_waf/.venv/lib/python3.10/site-packages
    Requires: 
    Required-by: aiohttp, yarl
    

    yarl Version

    $ python -m pip show yarl
    Name: yarl
    Version: 1.8.2
    Summary: Yet another URL library
    Home-page: https://github.com/aio-libs/yarl/
    Author: Andrew Svetlov
    Author-email: [email protected]
    License: Apache 2
    Location: /home/lzy/Git/Work/cache_node_ngx_waf/.venv/lib/python3.10/site-packages
    Requires: idna, multidict
    Required-by: aiohttp
    

    OS

    No LSB modules are available.
    Distributor ID: Debian
    Description: Debian GNU/Linux 11 (bullseye)
    Release: 11
    Codename: bullseye

    Related component

    Client

    Additional context

    The code below is my case:

    async with self.session.get(url, headers=headers) as resp:
        resp.raise_for_status()
    
        content = io.BytesIO()
        version: HttpVersion = resp.version  # type: ignore
        content.write(
            f"{resp.url.scheme.upper()}/{version.major}.{version.minor} {resp.status} {resp.reason}\r\n".encode(
                "ascii"
            )
        )
        for h in resp.headers:
            content.write(f"{h}: {resp.headers[h]}\r\n".encode("ascii"))
        content.write(b"\r\n")
        logging.debug(
            f"header length: {content.tell()}, header content: {content.getvalue().decode('utf-8')}"
        )
    
        body_length = content.tell() + int(
            resp.headers.get(HTTP_HEADER_NAME_CONTENT_LENGTH, "0")
        )
    
        async def read_response():
            yield content.getvalue()
            content.close()
    
            length = 0
            async for data, eof in resp.content.iter_chunks():
                length += len(data)
                logging.debug(
                    f"trunk data length: {len(data)}, total length: {length}, is eof: {eof}"
                )
                yield data
    
        await self.request(
            method="push",
            normal_status_codes=[200, 201],
            data=read_response(),
    
            # ----------------------------------------------------------------
            # ******* When I miss the header, the request will fail *******
            headers={HTTP_HEADER_NAME_CONTENT_LENGTH: str(body_length)},
        )
    
    

    Code of Conduct

    • [X] I agree to follow the aio-libs Code of Conduct
    bug 
    opened by li1234yun 0
  • Client ignoring cookies when being set with conflicting expiration dates

    Describe the bug

    When receiving a response with multiple Set-Cookie headers, each setting the same cookie with a different expiration date, the client seems to ignore the cookie if one of those expiration dates is in the past.

    To Reproduce

    Having a server that gives a response like:

    HTTP/1.1 200 OK
    Content-Length: 0
    Content-Type: application/octet-stream
    Date: Thu, 05 Jan 2023 21:57:14 GMT
    Server: Python/3.10 aiohttp/3.8.1
    Set-Cookie: Foo=bar; Secure; HttpOnly
    Set-Cookie: Foo=; Max-Age=0; Expires=Thu, 01-Jan-1970 00:00:10 GMT; Secure; HttpOnly
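
    A sketch of such a server written with aiohttp.web itself (handler name and port are illustrative):

    from aiohttp import web

    async def handler(request):
        resp = web.Response()
        # Two Set-Cookie headers for the same cookie name; the second expires it.
        resp.headers.add('Set-Cookie', 'Foo=bar; Secure; HttpOnly')
        resp.headers.add('Set-Cookie', 'Foo=; Max-Age=0; Expires=Thu, 01-Jan-1970 00:00:10 GMT; Secure; HttpOnly')
        return resp

    app = web.Application()
    app.add_routes([web.get('/', handler)])
    web.run_app(app, port=8070)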
    

    And a client like:

        async with aiohttp.ClientSession() as client:
            await client.get('http://localhost:8070')
            print(client.cookie_jar._cookies)
    

    We can see it outputs an empty cookie jar:

    defaultdict(<class 'http.cookies.SimpleCookie'>, {'localhost': <SimpleCookie: >})
    

    Expected behavior

    While I couldn't find what the "correct" behaviour should be in RFC 6265, this issue was detected while scraping a working website that does this, and the browser stores the cookie successfully. I was also able to validate that this is requests' behaviour too.

    Logs/tracebacks

    N/A
    

    Python Version

    $ python --version
    Python 3.10.8
    

    aiohttp Version

    $ python -m pip show aiohttp
    Name: aiohttp
    Version: 3.8.3
    Summary: Async http client/server framework (asyncio)
    Home-page: https://github.com/aio-libs/aiohttp
    Author: 
    Author-email: 
    License: Apache 2
    Location: /home/eduardo/env/py3/lib/python3.10/site-packages
    Requires: frozenlist, attrs, charset-normalizer, aiosignal, multidict, yarl, async-timeout
    Required-by:
    

    multidict Version

    $ python -m pip show multidict
    Name: multidict
    Version: 5.2.0
    Summary: multidict implementation
    Home-page: https://github.com/aio-libs/multidict
    Author: Andrew Svetlov
    Author-email: [email protected]
    License: Apache 2
    Location: /home/eduardo/env/py3/lib/python3.10/site-packages
    Requires: 
    Required-by: yarl, aiohttp
    

    yarl Version

    $ python -m pip show yarl
    Name: yarl
    Version: 1.7.2
    Summary: Yet another URL library
    Home-page: https://github.com/aio-libs/yarl/
    Author: Andrew Svetlov
    Author-email: [email protected]
    License: Apache 2
    Location: /home/eduardo/env/py3/lib/python3.10/site-packages
    Requires: idna, multidict
    Required-by: aiohttp
    

    OS

    Arch Linux

    Related component

    Client

    Additional context

    No response

    Code of Conduct

    • [X] I agree to follow the aio-libs Code of Conduct
    bug 
    opened by cinemascop89 1
  • Decompressing concatenated gzip

    Describe the bug

    Recently, I have been consuming a proprietary REST API that returns gzip-encoded text data.

    I discovered that the response was different from the response of the requests library or Postman. After a lot of research, I found that the REST API is sending concatenated gzip data.

    It seems that decompressing concatenated gzip data requires special treatment of zlib.decompressobj.unused_data (for example, check this answer on StackOverflow).

    I have confirmed that a similar implementation is found in urllib3 (which is used by the library requests), where unused data is checked for further decompression.
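
    A small self-contained illustration of the underlying zlib behaviour (the payload strings are illustrative):

    import gzip
    import zlib

    # Two gzip members concatenated back to back, as the API in question does.
    payload = gzip.compress(b'hello ') + gzip.compress(b'world')

    d = zlib.decompressobj(zlib.MAX_WBITS | 16)
    first = d.decompress(payload)
    print(first)               # b'hello '  -- only the first member is returned
    print(len(d.unused_data))  # > 0        -- the second member is left untouched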

    I think that this is where aiohttp decompresses the gzip data. The unused_data is not handled in any way. I tried changing that line of aiohttp code to something like this:

    
    ret = self.decompressor.decompress(chunk)
    while self.decompressor.unused_data:
        chunk = self.decompressor.unused_data
        self.decompressor = zlib.decompressobj(zlib.MAX_WBITS | 16)
        ret += self.decompressor.decompress(chunk)
    
    chunk = ret
    

    and I was able to reproduce the response of the library requests.

    I have labeled this issue as bug, since I believe that the desired behavior is the same as in the library requests. If this treatment of gzip data was intentional, is there a simple way to process concatenated gzip data? Currently, aiohttp only returns a fragment of the decompressed response with await response.text().

    To Reproduce

    Sorry, I cannot offer a way to reproduce because I am using a proprietary REST API.

    Expected behavior

    All the concatenated gzip data should be decompressed and concatenated as requests does here.

    Logs/tracebacks

    No logs.
    

    Python Version

    Python 3.8.10
    

    aiohttp Version

    aiohttp 3.8.3
    

    multidict Version

    multidict 6.0.4
    

    yarl Version

    yarl 1.8.2
    

    OS

    Windows 10

    Related component

    Client

    Additional context

    No response

    Code of Conduct

    • [X] I agree to follow the aio-libs Code of Conduct
    bug 
    opened by davenza 1
  • remove import aliases

    What do these changes do?

    Clean up imports by removing import aliases

    Are there changes in behavior for the user?

    No

    Related issue number

    Checklist

    • [x] I think the code is well written
    • [x] Unit tests for the changes exist
    • [x] Documentation reflects the changes
    • [x] If you provide code modification, please add yourself to CONTRIBUTORS.txt
      • The format is <Name> <Surname>.
      • Please keep alphabetical order, the file is sorted by names.
    • [x] Add a new news fragment into the CHANGES folder
      • name it <issue_id>.<type> for example (588.bugfix)
      • if you don't have an issue_id change it to the pr id after creating the pr
      • ensure type is one of the following:
        • .feature: Signifying a new feature.
        • .bugfix: Signifying a bug fix.
        • .doc: Signifying a documentation improvement.
        • .removal: Signifying a deprecation or removal of public API.
        • .misc: A ticket has been closed, but it is not of interest to users.
      • Make sure to use full sentences with correct case and punctuation, for example: "Fix issue with non-ascii contents in doctest text files."
    bot:chronographer:provided 
    opened by dtrifiro 2
  • Add response proxy headers to ClientResponse

    What do these changes do?

    Add to the ClientResponse the headers and raw headers of the underlying CONNECT call in a proxied HTTPS request.

    The headers and raw headers from the response are added to the ResponseHandler and later when processing the request are added to the response.

    Are there changes in behavior for the user?

    No.

    Related issue number

    Closes #6078

    Checklist

    • [x] I think the code is well written
    • [x] Unit tests for the changes exist
    • [x] Documentation reflects the changes
    • [x] If you provide code modification, please add yourself to CONTRIBUTORS.txt
      • The format is <Name> <Surname>.
      • Please keep alphabetical order, the file is sorted by names.
    • [x] Add a new news fragment into the CHANGES folder
      • name it <issue_id>.<type> for example (588.bugfix)
      • if you don't have an issue_id change it to the pr id after creating the pr
      • ensure type is one of the following:
        • .feature: Signifying a new feature.
        • .bugfix: Signifying a bug fix.
        • .doc: Signifying a documentation improvement.
        • .removal: Signifying a deprecation or removal of public API.
        • .misc: A ticket has been closed, but it is not of interest to users.
      • Make sure to use full sentences with correct case and punctuation, for example: "Fix issue with non-ascii contents in doctest text files."
    bot:chronographer:provided 
    opened by galaxyfeeder 2
  • client: fix chunked upload timeouts with sock_read

    What do these changes do?

    Prevent the timeout callback from firing by calling data_received after each chunk has been written: this reschedules the timeout and prevents the protocol from timing out while data is being sent.

    Are there changes in behavior for the user?

    No

    Related issue number

    #7149

    Checklist

    • [x] I think the code is well written
    • [x] Unit tests for the changes exist
    • [x] Documentation reflects the changes
    • [x] If you provide code modification, please add yourself to CONTRIBUTORS.txt
      • The format is <Name> <Surname>.
      • Please keep alphabetical order, the file is sorted by names.
    • [x] Add a new news fragment into the CHANGES folder
      • name it <issue_id>.<type> for example (588.bugfix)
      • if you don't have an issue_id change it to the pr id after creating the pr
      • ensure type is one of the following:
        • .feature: Signifying a new feature.
        • .bugfix: Signifying a bug fix.
        • .doc: Signifying a documentation improvement.
        • .removal: Signifying a deprecation or removal of public API.
        • .misc: A ticket has been closed, but it is not of interest to users.
      • Make sure to use full sentences with correct case and punctuation, for example: "Fix issue with non-ascii contents in doctest text files."
    bot:chronographer:provided 
    opened by dtrifiro 0