Asynchronous Python HTTP Requests for Humans using Futures

Overview

Small add-on for the Python requests HTTP library. Makes use of Python 3.2's concurrent.futures, or the backport for prior versions of Python.

The additional API and changes are minimal and strive to avoid surprises.

The following synchronous code:

from requests import Session

session = Session()
# first request starts and blocks until finished
response_one = session.get('http://httpbin.org/get')
# second request starts once first is finished
response_two = session.get('http://httpbin.org/get?foo=bar')
# both requests are complete
print('response one status: {0}'.format(response_one.status_code))
print(response_one.content)
print('response two status: {0}'.format(response_two.status_code))
print(response_two.content)

This can be translated to make use of futures, and thus become asynchronous, by creating a FuturesSession and capturing the returned Future in place of the Response. The Response can then be retrieved by calling the result method on the Future:

from requests_futures.sessions import FuturesSession

session = FuturesSession()
# first request is started in background
future_one = session.get('http://httpbin.org/get')
# second request is started immediately
future_two = session.get('http://httpbin.org/get?foo=bar')
# wait for the first request to complete, if it hasn't already
response_one = future_one.result()
print('response one status: {0}'.format(response_one.status_code))
print(response_one.content)
# wait for the second request to complete, if it hasn't already
response_two = future_two.result()
print('response two status: {0}'.format(response_two.status_code))
print(response_two.content)

By default a ThreadPoolExecutor is created with 8 workers. If you would like to adjust that value or share an executor across multiple sessions, you can provide one to the FuturesSession constructor.

from concurrent.futures import ThreadPoolExecutor
from requests_futures.sessions import FuturesSession

session = FuturesSession(executor=ThreadPoolExecutor(max_workers=10))
# ...
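
For example, a minimal sketch of sharing a single executor between two sessions, so that both draw from the same pool of workers (the session names here are illustrative):

from concurrent.futures import ThreadPoolExecutor
from requests_futures.sessions import FuturesSession

# one pool of 10 workers serves both sessions; requests from either
# session are queued into the same executor
executor = ThreadPoolExecutor(max_workers=10)
session_a = FuturesSession(executor=executor)
session_b = FuturesSession(executor=executor)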

As a shortcut, if you only need to change the number of workers, you can pass max_workers straight to the FuturesSession constructor:

from requests_futures.sessions import FuturesSession
session = FuturesSession(max_workers=10)

FuturesSession will use an existing session object if supplied:

from requests import session
from requests_futures.sessions import FuturesSession
my_session = session()
future_session = FuturesSession(session=my_session)

That's it. The API of requests.Session is preserved without any modifications beyond returning a Future rather than a Response. As with all futures, exceptions are shifted (raised) at the future.result() call, so try/except blocks should be moved there.
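
For example, a minimal sketch of handling a connection failure at the result call (the unreachable address is contrived to force an error):

from requests.exceptions import RequestException
from requests_futures.sessions import FuturesSession

session = FuturesSession()
# an address chosen to be unreachable, so the request fails
future = session.get('http://localhost:1')
try:
    # the exception raised in the worker thread is re-raised here
    response = future.result()
except RequestException as exc:
    print('request failed: {0}'.format(exc))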

Tying extra information to the request/response

The most common piece of information needed is the URL of the request. This can be accessed without any extra steps using the request property of the response object.

from concurrent.futures import as_completed
from pprint import pprint
from requests_futures.sessions import FuturesSession

session = FuturesSession()

futures = [session.get(f'http://httpbin.org/get?{i}') for i in range(3)]

for future in as_completed(futures):
    resp = future.result()
    pprint({
        'url': resp.request.url,
        'content': resp.json(),
    })

There are situations in which you may want to tie additional information to a request/response. There are a number of ways to go about this; the simplest is to attach the information to the future object itself.

from concurrent.futures import as_completed
from pprint import pprint
from requests_futures.sessions import FuturesSession

session = FuturesSession()

futures = []
for i in range(3):
    future = session.get('http://httpbin.org/get')
    future.i = i
    futures.append(future)

for future in as_completed(futures):
    resp = future.result()
    pprint({
        'i': future.i,
        'content': resp.json(),
    })

Canceling queued requests (a.k.a. cleaning up after yourself)

If you know that you won't be needing any additional responses from futures that haven't yet resolved, it's a good idea to cancel those requests. You can do this by using the session as a context manager:

from requests_futures.sessions import FuturesSession
with FuturesSession(max_workers=1) as session:
    future = session.get('https://httpbin.org/get')
    future2 = session.get('https://httpbin.org/delay/10')
    future3 = session.get('https://httpbin.org/delay/10')
    response = future.result()

In this example, whichever of the two delayed requests is still queued when the with block exits will be cancelled, saving time and resources that would otherwise be wasted.
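
If you are not using the session as a context manager, queued futures can be cancelled explicitly with the standard concurrent.futures API; a minimal sketch, reusing the names above:

# cancel() only succeeds for a future still waiting in the queue;
# a request that is already running cannot be cancelled this way
if not future3.done():
    future3.cancel()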

Iterating over responses to a list of requests

Without preserving the request order:

from concurrent.futures import as_completed
from requests_futures.sessions import FuturesSession
with FuturesSession() as session:
    futures = [session.get('https://httpbin.org/delay/{}'.format(i % 3)) for i in range(10)]
    for future in as_completed(futures):
        resp = future.result()
        print(resp.json()['url'])
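
To preserve the request order instead, iterate over the futures list directly; result() simply blocks until each response is ready. A minimal sketch:

from requests_futures.sessions import FuturesSession
with FuturesSession() as session:
    futures = [session.get('https://httpbin.org/delay/{}'.format(i % 3)) for i in range(10)]
    # blocks on each future in submission order
    for future in futures:
        resp = future.result()
        print(resp.json()['url'])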

Working in the Background

Additional processing can be done in the background using requests' hooks functionality. This can be useful for shifting work out of the foreground; JSON parsing is a simple example.

from pprint import pprint
from requests_futures.sessions import FuturesSession

session = FuturesSession()

def response_hook(resp, *args, **kwargs):
    # parse the json storing the result on the response object
    resp.data = resp.json()

future = session.get('http://httpbin.org/get', hooks={
    'response': response_hook,
})
# do some other stuff, send some more requests while this one works
response = future.result()
print('response status {0}'.format(response.status_code))
# data will have been attached to the response object in the background
pprint(response.data)

Hooks can also be applied to the session.

from pprint import pprint
from requests_futures.sessions import FuturesSession

def response_hook(resp, *args, **kwargs):
    # parse the json storing the result on the response object
    resp.data = resp.json()

session = FuturesSession()
session.hooks['response'] = response_hook

future = session.get('http://httpbin.org/get')
# do some other stuff, send some more requests while this one works
response = future.result()
print('response status {0}'.format(response.status_code))
# data will have been attached to the response object in the background
pprint(response.data)

A more advanced example that adds an elapsed property to all requests. (Note that requests already sets Response.elapsed to the time between sending the request and the arrival of the response; this example overwrites it with a float measured from the request() call, which also includes time spent queued in the executor.)

from pprint import pprint
from requests_futures.sessions import FuturesSession
from time import time


class ElapsedFuturesSession(FuturesSession):

    def request(self, method, url, hooks=None, *args, **kwargs):
        start = time()
        if hooks is None:
            hooks = {}

        def timing(r, *args, **kwargs):
            r.elapsed = time() - start

        try:
            if isinstance(hooks['response'], (list, tuple)):
                # needs to be first so we don't time other hooks execution
                hooks['response'].insert(0, timing)
            else:
                hooks['response'] = [timing, hooks['response']]
        except KeyError:
            hooks['response'] = timing

        return super(ElapsedFuturesSession, self) \
            .request(method, url, hooks=hooks, *args, **kwargs)



session = ElapsedFuturesSession()
future = session.get('http://httpbin.org/get')
# do some other stuff, send some more requests while this one works
response = future.result()
print('response status {0}'.format(response.status_code))
print('response elapsed {0}'.format(response.elapsed))

Using ProcessPoolExecutor

Similarly to ThreadPoolExecutor, it is possible to use an instance of ProcessPoolExecutor. As the name suggests, the requests will be executed concurrently in separate processes rather than threads.

from concurrent.futures import ProcessPoolExecutor
from requests_futures.sessions import FuturesSession

session = FuturesSession(executor=ProcessPoolExecutor(max_workers=10))
# ... use as before

Hint

Using the ProcessPoolExecutor is useful in cases where memory usage per request is very high (large responses) and recycling the interpreter is required to release memory back to the OS.

A base requirement of using ProcessPoolExecutor is that Session.request and FuturesSession both be pickle-able.

This means that only Python 3.5+ is fully supported, while Python 3.4 REQUIRES an existing requests.Session instance to be passed when initializing FuturesSession. Python 2.X and versions below 3.4 are currently not supported.

# Using python 3.4
from concurrent.futures import ProcessPoolExecutor
from requests import Session
from requests_futures.sessions import FuturesSession

session = FuturesSession(executor=ProcessPoolExecutor(max_workers=10),
                         session=Session())
# ... use as before

In case pickling fails, an exception is raised pointing to this documentation.

# Using python 2.7
from concurrent.futures import ProcessPoolExecutor
from requests import Session
from requests_futures.sessions import FuturesSession

session = FuturesSession(executor=ProcessPoolExecutor(max_workers=10),
                         session=Session())
Traceback (most recent call last):
...
RuntimeError: Cannot pickle function. Refer to documentation: https://github.com/ross/requests-futures/#using-processpoolexecutor

Important

  • Python >= 3.4 required
  • A session instance is required when using Python < 3.5
  • If sub-classing FuturesSession, the subclass must be importable (module global), as sketched below
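
A minimal sketch of the last point (MyFuturesSession is a hypothetical name):

from requests_futures.sessions import FuturesSession

# defined at module level, so pickle can import it by name
class MyFuturesSession(FuturesSession):
    pass

def make_session():
    # defining the subclass here, inside a function, would make it
    # un-importable and therefore unusable with ProcessPoolExecutor
    return MyFuturesSession()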

Installation

pip install requests-futures
Comments
  • How would a "retrying" decorator work on a FutureSession?!

    scope

    Using FutureSession works pretty well for me. But due to the parallel requests, I run into a typical status code 429 "Too Many Requests".

    I've handled this before with the "retrying" library.

    simplified synchronous code example (w/o FutureSession)

    from functools import partial
    from retrying import retry
    import requests
    
    def retry_if_result(response, retry_status_codes=[]):
        """Return True if we should retry (in this case when the status_code is 429), False otherwise"""
        # DEBUG to see this in action
        if response.status_code in retry_status_codes:
            print('RETRY %d: %s' % (response.status_code, response.url))
            return True
        else:
            return False
    
    
    # https://github.com/rholder/retrying/issues/26
    def never_retry_on_exception(response):
        """Return always False to raise on Exception (will not happen by default!)"""
        return False
    
    # https://github.com/rholder/retrying/issues/25
    def create_retry_decorator(retry_status_codes=[]):
        return retry(
            # create specific "retry_if_result" functions per status codes
            retry_on_result=partial(retry_if_result, retry_status_codes=retry_status_codes), 
            wait_exponential_multiplier=1000, 
            wait_exponential_max=10000,
            retry_on_exception=never_retry_on_exception
            )
    
    # create specific decorators per status codes
    retry429_decorator = create_retry_decorator(retry_status_codes=[429])
    
    s = requests.session()
    
    s.auth = (user, password)
    
    # decorate them with the retry / throttling logic
    s.get    = retry429_decorator(s.get)
    s.put    = retry429_decorator(s.put)
    s.post   = retry429_decorator(s.post)
    s.delete = retry429_decorator(s.delete)
    

    issue

    After switching to

    from requests_futures.sessions import FuturesSession
    
    s = FuturesSession()
    ...
    # decorate them with the retry / throttling logic
    s.get    = retry429_decorator(s.get)
    s.put    = retry429_decorator(s.put)
    s.post   = retry429_decorator(s.post)
    s.delete = retry429_decorator(s.delete)
    
    # non-blocking approach
    future = s.post('https://api.cxense.com/traffic', data=json.dumps(payload))
    

    This descends into obvious chaos, because the retry decorator of post() wants to inspect response.status_code, which is obviously not (yet) available on the deferred Future object:

        231     # DEBUG to see this in action
    --> 232     if response.status_code in retry_status_codes:
        233         print('RETRY %d: %s' % (response.status_code, response.url))
        234         return True
    AttributeError: 'Future' object has no attribute 'status_code'
    

    question

    What is the pattern for applying retry logic based on status_code inspection with this deferred / async approach? Is this something that can be applied with the background callback?
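
    The only workaround I can see so far is a plain blocking wrapper for the retry decorator to inspect (untested sketch; assumes s.get is the undecorated FuturesSession.get and reuses retry429_decorator from above):

    def fetch(url, **kwargs):
        # block on the future so the decorator sees a real Response,
        # not a Future; submit fetch() itself to another executor if
        # concurrency is still needed at a higher level
        return s.get(url, **kwargs).result()

    fetch_with_retry = retry429_decorator(fetch)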

    It would be awesome if someone could at least point me in the right direction. At the moment my brain is in "async flow deadlock" once again...

    (or do you think, that this question is better suited for stackoverflow?)

    opened by spex66 13
  • Set connection pool size equal to max_workers if needed

    When making more than 10 requests (and max_workers set accordingly) at once, I got the following warning:

    HttpConnectionPool is full, discarding connection: <hostname>
    

    This patch adjusts the size of the HTTP connection pool in FuturesSession to max_workers or executor._max_workers if needed.

    opened by mkai 8
  • 'FuturesSession' object has no attribute 'session' after copy/pickle

    Hi. First off: love your library, makes my life so much easier!

    I just ran into this error after passing a FuturesSession through a multiprocessing.Pool. This is effectively just pickling + unpickling = copying the FuturesSession which I can reproduce with these minimal examples:

    >>> from copy import copy
    >>> from requests_futures.sessions import FuturesSession
    >>> s = FuturesSession(max_workers=3)
    >>> _ = s.get('http://httpbin.org/get')  # works 
    >>> s2 = copy(s)
    >>> _ = s2.get('http://httpbin.org/get') 
    ---------------------------------------------------------------------------
    AttributeError                            Traceback (most recent call last)
    <ipython-input-12-2aa259e9dd0a> in <module>
    ----> 1 _ = s2.get('http://httpbin.org/get')
    
    ~/.pyenv/versions/3.7.5/lib/python3.7/site-packages/requests_futures/sessions.py in get(self, url, **kwargs)
        118         :rtype : concurrent.futures.Future
        119         """
    --> 120         return super(FuturesSession, self).get(url, **kwargs)
        121 
        122     def options(self, url, **kwargs):
    
    ~/.pyenv/versions/3.7.5/lib/python3.7/site-packages/requests/sessions.py in get(self, url, **kwargs)
        541 
        542         kwargs.setdefault('allow_redirects', True)
    --> 543         return self.request('GET', url, **kwargs)
        544 
        545     def options(self, url, **kwargs):
    
    ~/.pyenv/versions/3.7.5/lib/python3.7/site-packages/requests_futures/sessions.py in request(self, *args, **kwargs)
         83         :rtype : concurrent.futures.Future
         84         """
    ---> 85         if self.session:
         86             func = self.session.request
         87         else:
    
    AttributeError: 'FuturesSession' object has no attribute 'session'
    

    For completeness, here's an example with pickling

    >>> import pickle
    >>> from requests_futures.sessions import FuturesSession
    >>> s = FuturesSession(max_workers=3)
    >>> s2 = pickle.loads(pickle.dumps(s))
    >>> _ = s2.get('http://httpbin.org/get') 
    

    Basically the .session doesn't make it into the copy. I can't quite tell whether anything changed but I swear this used to work in the past. Any ideas?

    This is with requests=2.24.0 and requests-futures=1.0.0

    opened by schlegelp 7
  • max_workers default value is too low

    The default max_workers value is 2, which seems extremely low to be a default. The concurrent.futures library itself has much higher defaults for ThreadPoolExecutors at 5 * CPU cores (e.g. a 4 core machine would have 20 threads).

    I've seen some libraries that use requests-futures naively using the default. I'd like to suggest one of the following:

    1. Increase the default to something more beneficial, e.g. 5
    2. Use the standard concurrent.futures default value (my preferred solution)

    The only problem with option 2 is that in Python 3.3 and 3.4 max_workers had no default value for ThreadPoolExecutors and had to be specified. This is easy enough to work around by implementing the same method concurrent.futures itself introduced in Python 3.5:

    import os
    from requests import Session
    
    class FuturesSession(Session):
    
        def __init__(self, executor=None, max_workers=None, session=None,
                     adapter_kwargs=None, *args, **kwargs):
            # ...
            if max_workers is None:
                max_workers = (os.cpu_count() or 1) * 5
    

    Would be happy to open a PR for this, but before doing so wanted to open this in case there's a hard requirement for this low default value I'm not aware of.

    opened by dchevell 7
  • Unresolved attribute reference 'result' for class 'Response'

    Just a boring thing. Copy and paste the first example into PyCharm 2018.2.4. It runs, but you get an inspection warning:

    Unresolved attribute reference 'result' for class 'Response' less... (Ctrl+F1) Inspection info: This inspection detects names that should resolve but don't. Due to dynamic dispatch and duck typing, this is possible in a limited but useful number of cases. Top-level and class-level items are supported better than instance items

    How do I fix it? Is it just some import I didn't find? Where does this result function come from?

    opened by rodrigozanatta 7
  • Pickling error when using background_callback with process pool executor

    Background callbacks work fine and process pool executor works fine, but when used together I get the following error. I have now run into this issue both on macOS (python 3.6) and ubuntu (python 3.5).

    Traceback (most recent call last):
      File "/usr/local/Cellar/python/3.6.5_1/Frameworks/Python.framework/Versions/3.6/lib/python3.6/multiprocessing/queues.py", line 234, in _feed
        obj = _ForkingPickler.dumps(obj)
      File "/usr/local/Cellar/python/3.6.5_1/Frameworks/Python.framework/Versions/3.6/lib/python3.6/multiprocessing/reduction.py", line 51, in dumps
        cls(buf, protocol).dump(obj)
    AttributeError: Can't pickle local object 'FuturesSession.request.<locals>.wrap'
    

    Below is a reproducible example:

    from concurrent.futures import ProcessPoolExecutor
    from requests_futures.sessions import FuturesSession
    from requests import Session
    
    def bg_cb(sess, resp):
        print('background callback successful')
    
    session = FuturesSession(executor=ProcessPoolExecutor(max_workers=2),
                             session=Session())
    
    future = session.get('https://www.google.com', background_callback=bg_cb)
    
    print(future.result())
    
    opened by TylerADavis 7
  • ThreadPoolExecutor resource cleanup?

    When using FuturesSession for a long-running web scraper script, I've noticed a memory leak due to the fact that I wasn't cleaning up the ThreadPoolExecutors that were created by the many FuturesSession(max_workers=blah) calls I was making.

    I fixed the issue by writing a context manager that cleans up my executor when exiting:

    @contextmanager
    def clean_futures_session_when_done(session):
        try:
            yield
        finally:
            if session.executor:
                session.executor.shutdown()
    
    with clean_futures_session_when_done(FuturesSession(max_workers=2)):
        do_stuff()
    

    This feels a bit slimy since I'm using the internal(?) self.executor reference. I also realize that the shutdown() will block until all Futures are done, but I feel this is acceptable for many use cases.

    An alternative I've considered is having FuturesSession implement the context manager protocol with __enter__() and __exit__() so we can directly use it in a with statement. This would be similar to how open() works:

    class FuturesSessionWithCleanup(FuturesSession):
        def __enter__(self):
            return self
    
        def __exit__(self, type, value, traceback):
            self.executor.shutdown()
    
    with FuturesSessionWithCleanup(max_workers=2):
        do_stuff()
    # block until all Futures are cleaned
    

    Does this sound reasonable?

    opened by boboli 7
  • KeyboardInterrupt doesn't stop the program, it just sits there half dead

    I have the usual try/except for KeyboardInterrupt around my main, which raises SystemExit. But the program doesn't die. While searching the web I see various solutions for Python threads, but threads are not exposed by this library. What is the best practice for handling signals with requests_futures?

    opened by shaleh 7
  • Add args to background callbacks

    Background callback args are fixed for now, but I need to pass some additional information to the callback in my code.

    I suggest this request function in order to allow callbacks with arguments passed by the caller code.

        def request(self, *args, **kwargs):
            """Maintains the existing api for Session.request.
    
            Used by all of the higher level methods, e.g. Session.get.
    
            The background_callback param allows you to do some processing on the
            response in the background, e.g. call resp.json() so that json parsing
            happens in the background thread.
            """
            func = sup = super(FuturesSession, self).request
    
            background_callback = kwargs.pop('background_callback', None)
            background_callback_args = kwargs.pop('background_callback_args', None)
            if background_callback:
                def wrap(*args_, **kwargs_):
                    resp = sup(*args_, **kwargs_)
                    if background_callback_args:
                        background_callback(self, resp, *background_callback_args)
                    else:
                        background_callback(self, resp)
                    return resp
    
                func = wrap
    
            return self.executor.submit(func, *args, **kwargs)
    

    Example usage:

    id = ...
    future = session.get(url, background_callback=fun_cb, background_callback_args=(id,))
    
    opened by earada 7
  • Response for batched requests

    I am sending n post requests to the server. The server has the capability to batch requests. So, the server batches the n requests, combines them into a single request of payload n, processes it and returns a single response of payload n.

    I am noticing that, on the server side, I see the payload of n being sent, but on the client side, when I print response.result().content, I see only 1 payload.

    Is this a supported scenario? Since we only get 1 response for n requests, how is the response handled?

    opened by agunapal 6
  • AttributeError: ("'FuturesSession' object has no attribute 'session'", 'occurred at index 0')

    I want to use the ProcessPoolExecutor, but got the following error:

    AttributeError: ("'FuturesSession' object has no attribute 'session'", 'occurred at index 0')
    

    my code looks like this:

    from requests_futures.sessions import FuturesSession
    from concurrent.futures import ProcessPoolExecutor
    
    session = FuturesSession(executor=ProcessPoolExecutor(max_workers=5))
    url = "xxxx"
    params = "xxxx"
    f = session.post(url, data=params)
    print(f.result())
    

    my environment is macOS 10.14 and Python 3.7

    opened by luqinghui 6
  • Not showing as updated on PyPI

    Currently, PyPI reports that this project was last updated on Jun 11, 2019 because that's when the source was last updated. https://pypi.org/project/requests-futures/#history

    Can you update the source distribution to match the wheel so that people know this project is actively maintained?

    A new release with the updated requirements.txt would be nice too.

    opened by JonnoFTW 0