A next generation HTTP client for Python. πŸ¦‹

Overview

HTTPX

HTTPX is a fully featured HTTP client for Python 3, which provides sync and async APIs, and support for both HTTP/1.1 and HTTP/2.

Note: HTTPX should be considered in beta. We believe we've got the public API to a stable point now, but would strongly recommend pinning your dependencies to the 0.16.* release, so that you're able to properly review API changes between package updates. A 1.0 release is expected to be issued sometime in 2021.
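For example, a pinned requirement would look like:

```
httpx==0.16.*
```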


Let's get started...

>>> import httpx
>>> r = httpx.get('https://www.example.org/')
>>> r
<Response [200 OK]>
>>> r.status_code
200
>>> r.headers['content-type']
'text/html; charset=UTF-8'
>>> r.text
'<!doctype html>\n<html>\n<head>\n<title>Example Domain</title>...'

Or, using the async API...

Use IPython or Python 3.8+ with python -m asyncio to try this code interactively.

>>> import httpx
>>> async with httpx.AsyncClient() as client:
...     r = await client.get('https://www.example.org/')
...
>>> r
<Response [200 OK]>

Features

HTTPX builds on the well-established usability of requests, and gives you sync and async APIs with support for both HTTP/1.1 and HTTP/2.

Plus all the standard features of requests...

  • International Domains and URLs
  • Keep-Alive & Connection Pooling
  • Sessions with Cookie Persistence
  • Browser-style SSL Verification
  • Basic/Digest Authentication
  • Elegant Key/Value Cookies
  • Automatic Decompression
  • Automatic Content Decoding
  • Unicode Response Bodies
  • Multipart File Uploads
  • HTTP(S) Proxy Support
  • Connection Timeouts
  • Streaming Downloads
  • .netrc Support
  • Chunked Requests

Installation

Install with pip:

$ pip install httpx

Or, to include the optional HTTP/2 support, use:

$ pip install httpx[http2]

HTTPX requires Python 3.6+.

Documentation

Project documentation is available at https://www.python-httpx.org/.

For a run-through of all the basics, head over to the QuickStart.

For more advanced topics, see the Advanced Usage section, the async support section, or the HTTP/2 section.

The Developer Interface provides a comprehensive API reference.

To find out about tools that integrate with HTTPX, see Third Party Packages.

Contribute

If you want to contribute to HTTPX, check out the Contributing Guide to get started.

Dependencies

The HTTPX project relies on these excellent libraries:

  • httpcore - The underlying transport implementation for httpx.
    • h11 - HTTP/1.1 support.
    • h2 - HTTP/2 support. (Optional)
  • certifi - SSL certificates.
  • rfc3986 - URL parsing & normalization.
    • idna - Internationalized domain name support.
  • sniffio - Async library autodetection.
  • brotlipy - Decoding for "brotli" compressed responses. (Optional)

A huge amount of credit is due to requests for the API layout that much of this work follows, as well as to urllib3 for plenty of design inspiration around the lower-level networking details.

β€” ⭐️ β€”

HTTPX is BSD licensed code. Designed & built in Brighton, England.

Comments
  • h11._util.RemoteProtocolError: can't handle event type ConnectionClosed when role=SERVER and state=SEND_RESPONSE

    I intermittently got this error when load testing a uvicorn endpoint.

    This error comes from a proxy endpoint where I am also using encode/http3 to perform HTTP client calls.

      File "/project/venv/lib/python3.7/site-packages/http3/client.py", line 365, in post
        timeout=timeout,
      File "/project/venv/lib/python3.7/site-packages/http3/client.py", line 497, in request
        timeout=timeout,
      File "/project/venv/lib/python3.7/site-packages/http3/client.py", line 112, in send
        allow_redirects=allow_redirects,
      File "/project/venv/lib/python3.7/site-packages/http3/client.py", line 145, in send_handling_redirects
        request, verify=verify, cert=cert, timeout=timeout
      File "/project/venv/lib/python3.7/site-packages/http3/dispatch/connection_pool.py", line 121, in send
        raise exc
      File "/project/venv/lib/python3.7/site-packages/http3/dispatch/connection_pool.py", line 116, in send
        request, verify=verify, cert=cert, timeout=timeout
      File "/project/venv/lib/python3.7/site-packages/http3/dispatch/connection.py", line 59, in send
        response = await self.h11_connection.send(request, timeout=timeout)
      File "/project/venv/lib/python3.7/site-packages/http3/dispatch/http11.py", line 65, in send
        event = await self._receive_event(timeout)
      File "/project/venv/lib/python3.7/site-packages/http3/dispatch/http11.py", line 109, in _receive_event
        event = self.h11_state.next_event()
      File "/project/venv/lib/python3.7/site-packages/h11/_connection.py", line 439, in next_event
        exc._reraise_as_remote_protocol_error()
      File "/project/venv/lib/python3.7/site-packages/h11/_util.py", line 72, in _reraise_as_remote_protocol_error
        raise self
      File "/project/venv/lib/python3.7/site-packages/h11/_connection.py", line 422, in next_event
        self._process_event(self.their_role, event)
      File "/project/venv/lib/python3.7/site-packages/h11/_connection.py", line 238, in _process_event
        self._cstate.process_event(role, type(event), server_switch_event)
      File "/project/venv/lib/python3.7/site-packages/h11/_state.py", line 238, in process_event
        self._fire_event_triggered_transitions(role, event_type)
      File "/project/venv/lib/python3.7/site-packages/h11/_state.py", line 253, in _fire_event_triggered_transitions
        .format(event_type.__name__, role, self.states[role]))
    h11._util.RemoteProtocolError: can't handle event type ConnectionClosed when role=SERVER and state=SEND_RESPONSE
    
    bug discussion external 
    opened by didip 108
  • Discussion about dropping sync support.

    I'm opening this issue so that we can have a discussion about something a bit radical. πŸ˜‡

    Right now httpx supports standard threaded concurrency, plus asyncio and trio.

    I think that it may be in the project's best interests to drop threaded concurrency support completely, and focus exclusively on providing a kick-ass async HTTP client.

    😨 What? Why would we do that?

    The big design goals of HTTPX have been to meet two features lacking in requests...

    • HTTP/2 (and eventually HTTP/3) support.
    • Async support.

    Which is great, but here's the thing... the primary motivation for HTTP/2 over HTTP/1.1 is its ability to handle large numbers of concurrent requests. Which also really means that you should probably only care about HTTP/2 support if you're working in an async context.

    For users working with standard threaded concurrency, HTTP/2 is a shiny new headline feature that plenty of folks will want to jump at, but that isn't actually providing them with a substantial benefit.

    Given that requests already provides a battle-tested HTTP/1.1 client for the threaded concurrency masses, my inclination is that rather than trying to meet all possible use cases, we should focus on httpx being "the right tool for the right job" rather than "one size fits all".

    Here are the benefits we'd gain from dropping sync support:

    • We no longer need awkward subclasses such as BaseRequest, BaseResponse that are cluttering up our API surface area.
    • We no longer need differing sets of type annotations for async vs. sync cases, such as async iterators vs. iterators for streaming content.
    • We no longer need awkward bridging code in our concurrency backends.
    • We can focus exclusively on providing great support for asyncio and trio.
    • Our documentation can focus on the async case, rather than "Here's this API, and by the way there's two different variants".
    • We can explain use-case motivations for choosing HTTPX much more clearly. (API gateways, backing onto HTTP/2 services, with the ability to easily service 1000's of concurrent requests. Web spidering tools with wonderful performance. etc.)

    It's absolutely more of a niche (right now) than just aiming at being a requests alternative, but it's one that I'm personally far more invested in. It seems to me that we may as well embrace the split between the sync and async concurrency models, and build something that excels in one particular case, rather than trying to plaster over the differences.

    So, tentatively (hopefully)... what do folks think?

    concurrency 
    opened by tomchristie 43
  • Supporting Sync. Done right.

    So, with the 0.8 release we dropped our sync interface completely.

    The approach of bridging a sync interface onto an underlying async one, had some issues, in particular:

    • Having Sync and Async variant classes both subclass Base classes was leading to poorer, less comprehensible code.
    • There were niggly issues, such as being unable to run in some environments where top-level async support was provided. (E.g. failing under Jupyter.)

    The good news is that I think the unasync approach that the hip team are working with is actually a much better tack on this, and is something that I think httpx can adopt.

    We'll need to decide:

    • How we name/scope our sync vs. async variants.
    • If we provide the top-level API for sync+async, or just sync.

    (Essentially the same discussion as https://github.com/python-trio/hip/issues/168)

    One option here could be...

    • httpx.get(), httpx.post() and pals. Sync only - they're just a nice convenience. Good for the repl and casual scripting, but no good reason why async code shouldn't use a proper client.
    • httpx.sync.Client, httpx.async.Client and pals. Keep the names the same in each sync vs async case, but just scope them into two different submodules. Our __repr__'s for these classes could make sure to be distinct, eg. <async.Response [200 OK]>

    So eg...

    Sync:

    client = httpx.sync.Client()
    client.get(...)
    

    Async:

    client = httpx.async.Client()
    await client.get(...)
    

    Update

    Here's how the naming/scoping could look...

    Top level:

    response = httpx.get(...)
    response = httpx.post(...)
    response = httpx.request(...)
    with httpx.stream(...) as response:
        ...
    
    response = await httpx.aget(...)
    response = await httpx.apost(...)
    response = await httpx.arequest(...)
    async with httpx.astream(...) as response:
        ...
    

    Client usage:

    client = httpx.SyncClient()
    
    client = httpx.AsyncClient()
    

    (Unlike our previous approach there's no Client case there, and no subclassing. Just a pure async implementation, and a pure sync implementation.)

    We can just have a single Request class that'd accept whatever types on init, and expose both streaming and async streaming interfaces to be used by the dispatcher. It would raise errors in invalid cases, e.g. when the sync streaming case is called but it was passed an async iterable for body data.

    Most operations would return a plain ol' Response class, with no I/O available on it. (See https://github.com/encode/httpx/issues/588#issuecomment-562057229.) When using .stream, either an AsyncResponse or a SyncResponse would be returned.

    There'd be sync and async variants of the dispatcher interface and implementations, but that wouldn't be part of the 1.0 API spec. Neither would the backend API.

    enhancement 
    opened by tomchristie 36
  • Retry requests

    urllib3 has a very convenient Retry utility (docs) that I have found to be quite useful when dealing with flaky APIs. http3's Clients don't support this sort of thing yet, but I would love it if they did!

    In the meantime, I can probably work out my own with a while loop checking the response code.
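    Pending built-in support, that while-loop idea can be sketched generically. Here `with_retries` is a hypothetical helper (not part of httpx/http3); `request_fn` stands in for something like `lambda: httpx.get(url)`, and responses are assumed to expose a `status_code` attribute as httpx responses do:

```python
import time

def with_retries(request_fn, retries=3, backoff=0.5):
    """Call request_fn(), retrying on exceptions and 5xx responses.

    Hypothetical helper: request_fn stands in for e.g. `lambda: httpx.get(url)`.
    On the final attempt, exceptions propagate and any response is returned.
    """
    for attempt in range(retries):
        last = attempt == retries - 1
        try:
            response = request_fn()
        except Exception:
            if last:
                raise
        else:
            if response.status_code < 500 or last:
                return response
        # simple exponential backoff between attempts
        time.sleep(backoff * 2 ** attempt)
```

    A production version would want to restrict which exceptions and status codes are retried, much as urllib3's Retry does.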

    opened by StephenBrown2 34
  • Add trio concurrency backend

    Fixes #120

    Current status: we're blocked by the connection pool implicitly relying on asyncio polling sockets in the background to tell if they're still readable. See discussion around https://github.com/encode/httpx/pull/276#discussion_r322002604.

    Most of the notes below are outdated.


    Getting the tests to pass on this one ended up having to tackle more tricky issues than I initially thought. (There's still start_tls() to figure out, but that will be for a future PR.)

    So, a few notes, from the most straight-forward to the most obscure:

    • I took the liberty to refactor HTTP2Dispatcher.initiate_connection() into an async function; it allowed removing the need for write_no_block() on the concurrency backend (which trio does not have an obvious equivalent for). Things seem to be okay, but if I missed any reasons why it was not async in the first place, let me know! cc @tomchristie
    • I had to make connections context-managed in HTTPS tests in test_connections.py β€” otherwise they'd stall when using Trio, probably because the underlying TCP stream would be waiting to be closed. I'm not 100% sure I understand the origin of this, though. Anyway, I ended up converting all those tests to use async with for consistency.
    • For async tests, I had to figure out a way to allow an alternative I/O library (trio) to coexist with asyncio. The solution I went for is spawning backend.run() in the asyncio threadpool. If the code needs better/more comments there, let me know.
    • For the connection pool tests, I had to figure out how to restart the uvicorn server from an async test that could be running on something else than asyncio. I added a detailed comment in the code about the solution I've come up with β€” let me know if it's clear enough.

    If it can help with reviewing, I'd be happy to pytest.skip() some tests in this PR and then submit solutions for the tricky parts in separate PRs that resolve those skips. Let me know how you'd like to handle this. :-)

    opened by florimondmanca 32
  • Memory leak when creating lots of AsyncClient contexts

    Checklist

    • [x] The bug is reproducible against the latest release and/or master.
    • [x] There are no similar issues or pull requests to fix it yet.

    Describe the bug

    After creating an AsyncClient context (with async with), it does not seem to be garbage collected. That can be a problem for very long-running services that might create a bunch of them and eventually run out of memory.

    To reproduce

    import httpx
    import gc
    import asyncio
    
    print(f"httpx version: {httpx.__version__}")
    
    
    async def make_async_client():
        async with httpx.AsyncClient() as client:
            await asyncio.sleep(10)
    
    
    async def main(n):
        tasks = []
        for _ in range(n):
            tasks.append(make_async_client())
        print(f"Creating {n} contexts, sleeping 10 secs")
        await asyncio.wait(tasks)
    
    
    asyncio.run(main(2000))
    print("Finished run, still using lots of memory")
    gc.collect()
    input("gc.collect() does not help :(")
    

    Comparison with aiohttp

    import aiohttp
    import asyncio
    
    print(f"aiohttp version {aiohttp.__version__}")
    
    
    async def make_async_client():
        async with aiohttp.ClientSession() as client:
            await asyncio.sleep(10)
    
    
    async def main(n):
        tasks = []
        for _ in range(n):
            tasks.append(make_async_client())
        print(f"Creating {n} contexts and sleeping")
        await asyncio.wait(tasks)
    
    
    asyncio.run(main(200000))
    input("Finished run, all memory is freed")
    

    Expected behavior

    Memory gets freed, after exiting the async context, like for aiohttp

    Actual behavior

    Memory does not get freed, even after explicitly calling gc.collect()

    Debugging material

    Environment

    • OS: Linux (many versions)
    • Python version: 3.8.3
    • HTTPX version: both 0.12.1 and master
    • Async environment: both asyncio and trio
    • HTTP proxy: no
    • Custom certificates: no

    Additional context

    I understand that typically you need only one async ClientSession, but it shouldn't leak memory anyway; for very long-running processes this can be a problem.

    Thanks for this great library! If you're interested I can try to debug this issue and send a PR

    opened by Recursing 29
  • Connect Timeout confusion

    Hi!

    I'm quite confused about Connect Timeout. I observed that I have some applications that happen to throw a ConnectionTimeout exception in httpx/backends/asyncio.py, in this part (lines 201, 202):

    except asyncio.TimeoutError:
    	raise ConnectTimeout()
    

    What I find strange is that my applications are monitored with New Relic and Microsoft Application Insights, and although they say that a request took something like 5s, I can find the request in the origin service and it took just a few milliseconds.

    So I think my question here is this: is there something in the origin application that can cause this timeout? Maybe some blocking code? Or wrong AsyncClient creation?

    Any help would be appreciated, thanks!

    Obs: my applications are developed using FastAPI
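    One common culprit for this pattern is blocking code running on the same event loop as the client: the connect timer keeps ticking while the loop is starved, even though the origin responds quickly. A minimal stdlib-only sketch of the effect (no httpx involved, assumed illustration only):

```python
import asyncio
import time

async def blocking_task():
    # A synchronous call like this blocks the entire event loop.
    time.sleep(0.2)

async def timed_io():
    # Stands in for awaiting a connection; should take ~0.05s on its own.
    start = time.monotonic()
    await asyncio.sleep(0.05)
    return time.monotonic() - start

async def main():
    elapsed, _ = await asyncio.gather(timed_io(), blocking_task())
    return elapsed

elapsed = asyncio.run(main())
print(f"{elapsed:.2f}s")  # well over 0.05s: the blocking call starved the loop
```

    If monitoring measures wall-clock time in the caller, the "slow request" can be entirely loop starvation rather than network latency.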

    question 
    opened by victoraugustolls 29
  • Implement HTTP proxies and config on Client

    I've been able to successfully use this dispatcher against a public HTTP forwarding proxy and it works properly. :)

    Need to still figure out how I'm going to test this.

    opened by sethmlarson 28
  • HTTP/2 higher number of timeouts compared with 1.1

    I recently observed that the number of timeouts with http2 enabled on Async client are higher than with it disabled. Made a gist with an example:

    https://gist.github.com/victoraugustolls/01002ae218366b453e962446c9c8a274

    bug http/2 
    opened by victoraugustolls 25
  • Client.close() can hang. (asyncio only).

    I've been doing some async requests with HTTPX for a while now without any issue.

    Summary

    However, I stumbled upon a case today that I cannot explain. While making a request to the Microsoft Graph OAuth2 access token endpoint, I correctly receive a response, but the HTTPX client seems to hang and never return. I didn't find any other API with the same behaviour.

    Reproduction

    import asyncio
    
    import httpx
    
    
    async def main():
        async with httpx.Client() as client:
            response = await client.post(
                "https://login.microsoftonline.com/common/oauth2/v2.0/token",
                # You can keep those parameters, the issue happens also for 4XX responses
                data={
                    "grant_type": "authorization_code",
                    "code": "code",
                    "redirect_uri": "http://localhost:8000/redirect",
                    "client_id": "CLIENT_ID",
                    "client_secret": "CLIENT_SECRET",
                },
            )
    
            print(response.http_version)  # HTTP/1.1
            print(len(response.content))  # 485 (same as in Content-Length header)
            print(response.headers)
        # Never happens
        print("Done")
        return response
    
    
    asyncio.run(main())
    
    Response headers
    Headers([('cache-control', 'no-cache, no-store'), ('pragma', 'no-cache'), ('content-type', 'application/json; charset=utf-8'), ('expires', '-1'), ('strict-transport-security', 'max-age=31536000; includeSubDomains'), ('x-content-type-options', 'nosniff'), ('x-ms-request-id', 'dd57f328-14dd-4a99-8ed4-c81566f10800'), ('x-ms-ests-server', '2.1.9707.19 - AMS2 ProdSlices'), ('p3p', 'CP="DSP CUR OTPi IND OTRi ONL FIN"'), ('set-cookie', 'fpc=AhPq7wxRjzlCr42-XJpjplo; expires=Sun, 12-Jan-2020 06:43:04 GMT; path=/; secure; HttpOnly; SameSite=None'), ('set-cookie', 'x-ms-gateway-slice=prod; path=/; SameSite=None; secure; HttpOnly'), ('set-cookie', 'stsservicecookie=ests; path=/; SameSite=None; secure; HttpOnly'), ('date', 'Fri, 13 Dec 2019 06:43:04 GMT'), ('content-length', '485')])
    
    After waiting a while, a `ConnectionResetError` is raised. Here is the stacktrace:
      File "/Users/fvoron/.local/share/virtualenvs/httpx-oauth-5xgbpDkq/src/httpx/httpx/client.py", line 884, in __aexit__
        await self.close()
      File "/Users/fvoron/.local/share/virtualenvs/httpx-oauth-5xgbpDkq/src/httpx/httpx/client.py", line 873, in close
        await self.dispatch.close()
      File "/Users/fvoron/.local/share/virtualenvs/httpx-oauth-5xgbpDkq/src/httpx/httpx/dispatch/connection_pool.py", line 212, in close
        await connection.close()
      File "/Users/fvoron/.local/share/virtualenvs/httpx-oauth-5xgbpDkq/src/httpx/httpx/dispatch/connection.py", line 171, in close
        await self.open_connection.close()
      File "/Users/fvoron/.local/share/virtualenvs/httpx-oauth-5xgbpDkq/src/httpx/httpx/dispatch/http11.py", line 72, in close
        await self.socket.close()
      File "/Users/fvoron/.local/share/virtualenvs/httpx-oauth-5xgbpDkq/src/httpx/httpx/concurrency/asyncio.py", line 167, in close
        await self.stream_writer.wait_closed()
      File "/Users/fvoron/.pyenv/versions/3.7.5/lib/python3.7/asyncio/streams.py", line 323, in wait_closed
        await self._protocol._closed
      File "/Users/fvoron/.pyenv/versions/3.7.5/lib/python3.7/asyncio/selector_events.py", line 804, in _read_ready__data_received
        data = self._sock.recv(self.max_size)
    ConnectionResetError: [Errno 54] Connection reset by peer
    

    Versions

    • Python 3.7
    • HTTPX 0.9.3
    bug interop http/1.1 
    opened by frankie567 25
  • Digest auth middleware

    This PR implements Digest authentication as a middleware.

    Digest is mentioned as pending work in the README.

    Tested against httpbin.org's digest auth endpoint, mimicking requests' implementation.

    Note this PR does not include handling the auth-int Quality of Protection (qop) option in the RFC mainly because requests does not do it either. I'm happy to add it in a separate PR if it is deemed necessary though.

    opened by yeraydiazdiaz 25
  • Multipart Form Data Headers

    After testing #2382 it seems that the request headers are not being updated correctly.

    from pathlib import Path

    import httpx

    filename = 'test.tar'
    data = {"file": (Path(filename).name, open(filename, "rb"), "application/x-tar")}
    resp = httpx.request("POST", 'URL', files=data)
    

    The request has the following header {'Content-Type': 'multipart/form-data; boundary=b194f2c9bc744ebf40c3fa03d6d53987'}

    However if using an initialized client:

    from pathlib import Path

    import httpx

    client = httpx.Client(headers={"Content-Type": "application/json"})
    filename = 'test.tar'
    data = {"file": (Path(filename).name, open(filename, "rb"), "application/x-tar")}
    resp = client.post('URL', files=data)
    

    The request is still using {'Content-Type': 'application/json'}. Error message from our API: '{"code":"API_MALFORMED_BODY","message":"Malformed JSON"}'

    Shouldn't the POST change the headers if using multipart forms?

    Specifying the following also does not work because the boundaries are not included in the headers: resp = client.post('URL', files=data, headers={'Content-Type': 'multipart/form-data'})

    discussion 
    opened by justinjeffery-ipf 1
  • Use percent encoding for spaces in query parameters

    httpx currently encodes spaces in query parameters using the + sign:

    >>> httpx.URL("https://www.example.com", params={"a": "b c"})
    URL('https://www.example.com?a=b+c')
    

    This causes problems for me described in the linked discussion. Tom's comment:

    However using https://www.example.com/?a=b c in the address bar in various browsers...

    Chrome, Safari, and Firefox all switch this out to https://www.example.com/?a=b%20c. (Firefox displays "https://www.example.com/?a=b c" but I can see from the "page info" the actual URL it requested.)

    So... we could probably switch to %20, and it might be a marginally better choice. (?)
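    For reference, the two conventions correspond to quote versus quote_plus in the Python standard library, and urllib.parse.urlencode defaults to the + form:

```python
from urllib.parse import quote, quote_plus, urlencode

# Percent encoding (what browsers display) vs form encoding (the + style)
print(quote("b c"))       # b%20c
print(quote_plus("b c"))  # b+c

# urlencode defaults to quote_plus, but can be switched per call
print(urlencode({"a": "b c"}))                   # a=b+c
print(urlencode({"a": "b c"}, quote_via=quote))  # a=b%20c
```

    Both forms are valid in query strings; the + convention comes from application/x-www-form-urlencoded, while %20 is the generic percent encoding.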


    • [x] Initially raised as discussion #2460
    user-experience 
    opened by chbndrhnns 0
  • Add `NetRCAuth()` class.

    Prompted by https://github.com/encode/httpx/issues/2532#issuecomment-1368907498

    We currently automatically handle netrc authentication, which is baked directly into the Client, and which is enabled unless the developer sets trust_env=False. This differs from, for example:

    • Browsers, which do not use netrc authentication.
    • The curl command line client, which only uses netrc authentication if explicitly enabled.

    The suggestion here is that we should make a behavioural change, and no longer bake netrc authentication directly into the client, in favor of providing an explicit NetRCAuth() class.

    Some usage examples...

    # Use default netrc file
    auth = httpx.NetRCAuth()
    client = httpx.Client(auth=auth)
    
    # Use explicit netrc file
    auth = httpx.NetRCAuth(file="/path/to/netrc")
    client = httpx.Client(auth=auth)
    
    # Optional environ override, fallback to default netrc file otherwise
    auth = httpx.NetRCAuth(file=os.environ.get('NETRC'))
    client = httpx.Client(auth=auth)
    
    # Mandatory environ override
    auth = httpx.NetRCAuth(file=os.environ['NETRC'])
    client = httpx.Client(auth=auth)
    
    # Optional netrc auth, depending on if the "~/.netrc" file exists
    try:
        auth = httpx.NetRCAuth()
    except FileNotFoundError:
        auth = None
    client = httpx.Client(auth=auth)
    

    This would be a behavioral change and would need to be part of a 0.24.0 release, but I think it's obviously more consistent and neater than our existing "netrc is a special case" behaviour.

    (I had also assumed that this change would resolve #2088, and added a test case for this, which showed me that my assumption there was incorrect. Failing test case for that now dropped in commits https://github.com/encode/httpx/pull/2535/commits/ebcf77ae4959f0e67c7fce070e7b79ab5b5bad2a and https://github.com/encode/httpx/pull/2535/commits/02120cccecf905a0f142b36a31cebf68974e9528.)

    TODO:

    • [x] Drop baked-in client netrc auth class.
    • [x] Implement NetRCAuth class.
    • [x] Tests.
    • [x] Update documentation.

    Documentation link, for easier review... https://github.com/encode/httpx/blob/02120cccecf905a0f142b36a31cebf68974e9528/docs/advanced.md#netrc-support

    user-experience api change 
    opened by tomchristie 0
  • NetRCInfo.netrc_info() doesn't work when run in a systemd service with DynamicUser=true

    Discussed in https://github.com/encode/httpx/discussions/2526

    Originally posted by djmattyg007, January 1, 2023.

    The function in question, netrc_info(), iterates over three paths. Two of these paths refer to the home directory of the current process' user, by prefixing the paths with ~/. These paths are run through pathlib.Path.expanduser(), which raises an exception if the home directory can't be determined.

    When the code is running in a systemd service with DynamicUser=true, there is no home directory. This means expanduser() will always raise an exception.

    Fortunately, there's a potential override: set the NETRC environment variable to something. As long as it points to a file that exists, and doesn't need to know the current user's home directory, this is an acceptable workaround.

    Unfortunately, there's an is_file() check performed on the Path object before attempting to use it. This means I can't just set NETRC=/dev/null as a simple way of working around this problem, even though netrc.netrc() will happily accept /dev/null.

    Why does any of this matter? Because I'm not using httpx directly. I'm using the library python-telegram-bot, and have no control over how it uses httpx internally.

    To summarise, there are two problems:

    • The call to pathlib.Path.expanduser() can raise an exception that isn't currently being caught
    • Setting NETRC=/dev/null doesn't work as expected, despite it being a perfectly suitable file to use in this context
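    A sketch of the first fix, catching the RuntimeError that Path.expanduser() raises when no home directory can be determined (netrc_candidates is a hypothetical helper for illustration, not the actual NetRCInfo code):

```python
from pathlib import Path

def netrc_candidates():
    """Yield netrc paths, skipping home-relative ones when there is no home.

    Hypothetical helper: Path.expanduser() raises RuntimeError when the
    home directory cannot be resolved (e.g. systemd DynamicUser=true).
    """
    for name in ("~/.netrc", "~/_netrc"):
        try:
            yield Path(name).expanduser()
        except RuntimeError:
            continue
```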

    I'm happy to raise an issue for this, and also to implement a PR to resolve this.

    bug 
    opened by tomchristie 3
  • Drop private import of 'encode_request' in test_multipart

    Refs #2492

    Use httpx.Request(...) to handle the multipart tests, rather than importing the encode_request function, which is a private implementation detail.

    refactor 
    opened by tomchristie 0
  • Expose transport retries as `connect_retries` in client options

    Opening this draft of previously discussed addition of an option to set connect retries on the client.

    Next steps:

    • [ ] Pass type checking in CI
    • [ ] Agree on a testing strategy
    • [ ] Pass tests in CI
    enhancement discussion 
    opened by madkinsz 3
Releases (0.23.3)
  • 0.23.3 (4th Jan, 2023)

    Fixed

    • Version 0.23.2 accidentally included stricter type checking on query parameters. This shouldn't have been included in a minor version bump, and is now reverted. (#2523, #2539)
  • 0.23.2 (2nd Jan, 2023)

    Added

    • Support digest auth nonce counting to avoid multiple auth requests. (#2463)

    Fixed

    • Multipart file uploads where the file length cannot be determined now use chunked transfer encoding, rather than loading the entire file into memory in order to determine the Content-Length. (#2382)
    • Raise TypeError if content is passed a dict-instance. (#2495)
    • Partially revert the API breaking change in 0.23.1, which removed RawURL. We continue to expose a url.raw property which is now a plain named-tuple. This API is still expected to be deprecated, but we will do so with a major version bump. (#2481)
  • 0.23.1 (18th Nov, 2022)

    Added

    • Support for Python 3.11. (#2420)
    • Allow setting an explicit multipart boundary in Content-Type header. (#2278)
    • Allow tuple or list for multipart values, not just list. (#2355)
    • Allow str content for multipart upload files. (#2400)
    • Support connection upgrades. See https://www.encode.io/httpcore/extensions/#upgrade-requests

    Fixed

    • Don't drop empty query parameters. (#2354)

    Removed

    • Drop .read/.aread from SyncByteStream/AsyncByteStream (#2407)
    • Drop RawURL. (#2241)
  • 0.23.0 (23rd May, 2022)

    Changed

    • Drop support for Python 3.6. (#2097)
    • Use utf-8 as the default character set, instead of falling back to charset-normalizer for auto-detection. To enable automatic character set detection, see the documentation. (#2165)

    Fixed

    • Fix URL.copy_with for some oddly formed URL cases. (#2185)
    • Digest authentication should use case-insensitive comparison for determining which algorithm is being used. (#2204)
    • Fix console markup escaping in command line client. (#1866)
    • When files are used in multipart upload, ensure we always seek to the start of the file. (#2065)
    • Ensure that iter_bytes never yields zero-length chunks. (#2068)
    • Preserve Authorization header for redirects that are to the same origin, but are an http-to-https upgrade. (#2074)
    • When responses have binary output, don't print the output to the console in the command line client. Use output like <16086 bytes of binary data> instead. (#2076)
    • Fix display of --proxies argument in the command line client help. (#2125)
    • Close responses when task cancellations occur during stream reading. (#2156)
    • Fix type error on accessing .request on HTTPError exceptions. (#2158)
    Source code(tar.gz)
    Source code(zip)
  • 0.22.0(Jan 26, 2022)

  • 0.21.3(Jan 6, 2022)

  • 0.21.2(Jan 5, 2022)

  • 0.21.1(Nov 16, 2021)

  • 0.21.0(Nov 15, 2021)

    0.21.0 (15th November, 2021)

    The 0.21.0 release integrates against a newly redesigned httpcore backend.

    Both packages ought to automatically update to the required versions, but if you are seeing any issues, you should ensure that you have httpx==0.21.* and httpcore==0.14.* installed.

    Added

    • The command-line client will now display connection information when -v/--verbose is used.
    • The command-line client will now display server certificate information when -v/--verbose is used.
    • The command-line client is now able to properly detect if the outgoing request should be formatted as HTTP/1.1 or HTTP/2, based on the result of the HTTP/2 negotiation.
    Source code(tar.gz)
    Source code(zip)
  • 0.20.0(Oct 13, 2021)

    0.20.0 (13th October, 2021)

    The 0.20.0 release adds an integrated command-line client, and also includes some design changes. The most notable of these is that redirect responses are no longer automatically followed, unless specifically requested.

    This design decision prioritises a more explicit approach to redirects, in order to avoid code that unintentionally issues multiple requests as a result of misconfigured URLs.

    For example, previously a client configured to send requests to http://api.github.com/ would end up sending every API request twice, as each request would be redirected to https://api.github.com/.

    If you do want auto-redirect behaviour, you can enable this either by configuring the client instance with Client(follow_redirects=True), or on a per-request basis, with .get(..., follow_redirects=True).

    This change is a classic trade-off between convenience and precision, with no "right" answer. See discussion #1785 for more context.

    The other major design change is an update to the Transport API, which is the low-level interface against which requests are sent. Previously this interface used only primitive datastructures, like so...

    (status_code, headers, stream, extensions) = transport.handle_request(method, url, headers, stream, extensions)
    try:
        ...
    finally:
        stream.close()
    

    Now the interface is much simpler...

    response = transport.handle_request(request)
    try:
        ...
    finally:
        response.close()
    

    Changed

    • The allow_redirects flag is now follow_redirects and defaults to False.
    • The raise_for_status() method will now raise an exception for any responses except those with 2xx status codes. Previously only 4xx and 5xx status codes would result in an exception.
    • The low-level transport API changes to the much simpler response = transport.handle_request(request).
    • The client.send() method no longer accepts a timeout=... argument, but the client.build_request() does. This is required by the signature change of the Transport API. The request timeout configuration is now stored on the request instance, as request.extensions['timeout'].

    Added

    • Added the httpx command-line client.
    • Response instances now include .is_informational, .is_success, .is_redirect, .is_client_error, and .is_server_error properties for checking 1xx, 2xx, 3xx, 4xx, and 5xx response types. Note that the behaviour of .is_redirect is slightly different, in that it now returns True for all 3xx responses, in order to allow a consistent set of properties across the different HTTP status code types. The response.has_redirect_location property may be used to identify responses with properly formed URL redirects.

    Fixed

    • response.iter_bytes() no longer raises a ValueError when called on a response with no content. (Pull #1827)
    • The 'wsgi.error' configuration now defaults to sys.stderr, and is corrected to be a TextIO interface, not a BytesIO interface. Additionally, the WSGITransport now accepts a wsgi_error configuration. (Pull #1828)
    • Follow the WSGI spec by properly closing the iterable returned by the application. (Pull #1830)
    Source code(tar.gz)
    Source code(zip)
  • 1.0.0.beta0(Sep 14, 2021)

    1.0.0.beta0 (14th September 2021)

    The 1.0 pre-release adds an integrated command-line client, and also includes some design changes. The most notable of these is that redirect responses are no longer automatically followed, unless specifically requested.

    This design decision prioritises a more explicit approach to redirects, in order to avoid code that unintentionally issues multiple requests as a result of misconfigured URLs.

    For example, previously a client configured to send requests to http://api.github.com/ would end up sending every API request twice, as each request would be redirected to https://api.github.com/.

    If you do want auto-redirect behaviour, you can enable this either by configuring the client instance with Client(follow_redirects=True), or on a per-request basis, with .get(..., follow_redirects=True).

    This change is a classic trade-off between convenience and precision, with no "right" answer. See discussion #1785 for more context.

    The other major design change is an update to the Transport API, which is the low-level interface against which requests are sent. Previously this interface used only primitive datastructures, like so...

    (status_code, headers, stream, extensions) = transport.handle_request(method, url, headers, stream, extensions)
    try:
        ...
    finally:
        stream.close()
    

    Now the interface is much simpler...

    response = transport.handle_request(request)
    try:
        ...
    finally:
        response.close()
    

    Changed

    • The allow_redirects flag is now follow_redirects and defaults to False.
    • The raise_for_status() method will now raise an exception for any responses except those with 2xx status codes. Previously only 4xx and 5xx status codes would result in an exception.
    • The low-level transport API changes to the much simpler response = transport.handle_request(request).
    • The client.send() method no longer accepts a timeout=... argument, but the client.build_request() does. This is required by the signature change of the Transport API. The request timeout configuration is now stored on the request instance, as request.extensions['timeout'].

    Added

    • Added the httpx command-line client.
    • Response instances now include .is_informational, .is_success, .is_redirect, .is_client_error, and .is_server_error properties for checking 1xx, 2xx, 3xx, 4xx, and 5xx response types. Note that the behaviour of .is_redirect is slightly different, in that it now returns True for all 3xx responses, in order to allow a consistent set of properties across the different HTTP status code types. The response.has_redirect_location property may be used to identify responses with properly formed URL redirects.

    Fixed

    • response.iter_bytes() no longer raises a ValueError when called on a response with no content. (Pull #1827)
    • The 'wsgi.error' configuration now defaults to sys.stderr, and is corrected to be a TextIO interface, not a BytesIO interface. Additionally, the WSGITransport now accepts a wsgi_error configuration. (Pull #1828)
    • Follow the WSGI spec by properly closing the iterable returned by the application. (Pull #1830)
    Source code(tar.gz)
    Source code(zip)
  • 0.19.0(Aug 19, 2021)

    0.19.0 (19th August, 2021)

    Added

    • Add support for Client(allow_redirects=<bool>). (Pull #1790)
    • Add automatic character set detection, when no charset is included in the response Content-Type header. (Pull #1791)

    Changed

    • Event hooks are now also called for any additional redirect or auth requests/responses. (Pull #1806)
    • Strictly enforce that upload files must be opened in binary mode. (Pull #1736)
    • Strictly enforce that client instances can only be opened and closed once, and cannot be re-opened. (Pull #1800)
    • Drop mode argument from httpx.Proxy(..., mode=...). (Pull #1795)
    Source code(tar.gz)
    Source code(zip)
  • 0.18.2(Jun 17, 2021)

    0.18.2 (17th June, 2021)

    Added

    • Support for Python 3.10. (Pull #1687)
    • Expose httpx.USE_CLIENT_DEFAULT, used as the default to auth and timeout parameters in request methods. (Pull #1634)
    • Support HTTP/2 "prior knowledge", using httpx.Client(http1=False, http2=True). (Pull #1624)

    Fixed

    • Clean up some cases where warnings were being issued. (Pull #1687)
    • Prefer Content-Length over Transfer-Encoding: chunked for content= cases. (Pull #1619)
    Source code(tar.gz)
    Source code(zip)
  • 0.18.1(Apr 29, 2021)

    0.18.1 (29th April, 2021)

    Changed

    • Update brotli support to use the brotlicffi package (Pull #1605)
    • Ensure that Request(..., stream=...) does not auto-generate any headers on the request instance. (Pull #1607)

    Fixed

    • Pass through timeout=... in top-level httpx.stream() function. (Pull #1613)
    • Map httpcore transport close exceptions to httpx exceptions. (Pull #1606)
    Source code(tar.gz)
    Source code(zip)
  • 0.18.0(Apr 27, 2021)

    0.18.0 (27th April, 2021)

    The 0.18.x release series formalises our low-level Transport API, introducing the base classes httpx.BaseTransport and httpx.AsyncBaseTransport.

    See the "Writing custom transports" documentation and the httpx.BaseTransport.handle_request() docstring for more complete details on implementing custom transports.

    Pull request #1522 includes a checklist of differences from the previous httpcore transport API, for developers implementing custom transports.

    The following API changes have been issuing deprecation warnings since 0.17.0 onwards, and are now fully deprecated...

    • You should now use httpx.codes consistently instead of httpx.StatusCodes.
    • Use limits=... instead of pool_limits=....
    • Use proxies={"http://": ...} instead of proxies={"http": ...} for scheme-specific mounting.

    Changed

    • Transport instances now inherit from httpx.BaseTransport or httpx.AsyncBaseTransport, and should implement either the handle_request method or handle_async_request method. (Pull #1522, #1550)
    • The response.ext property and Response(ext=...) argument are now named extensions. (Pull #1522)
    • The recommendation to not use data=<bytes|str|bytes (a)iterator> in favour of content=<bytes|str|bytes (a)iterator> has now been escalated to a deprecation warning. (Pull #1573)
    • Drop Response(on_close=...) from API, since it was a bit of a leaking implementation detail. (Pull #1572)
    • When using a client instance, cookies should always be set on the client, rather than on a per-request basis. We prefer enforcing a stricter API here because it provides clearer expectations around cookie persistence, particularly when redirects occur. (Pull #1574)
    • The runtime exception httpx.ResponseClosed is now named httpx.StreamClosed. (#1584)
    • The httpx.QueryParams model now presents an immutable interface. There is a discussion on the design and motivation here. Use client.params = client.params.merge(...) instead of client.params.update(...). The basic query manipulation methods are query.set(...), query.add(...), and query.remove(). (#1600)

    Added

    • The Request and Response classes can now be serialized using pickle. (#1579)
    • Handle data={"key": [None|int|float|bool]} cases. (Pull #1539)
    • Support httpx.URL(**kwargs), for example httpx.URL(scheme="https", host="www.example.com", path="/"), or httpx.URL("https://www.example.com/", username="[email protected]", password="123 456"). (Pull #1601)
    • Support url.copy_with(params=...). (Pull #1601)
    • Add url.params parameter, returning an immutable QueryParams instance. (Pull #1601)
    • Support query manipulation methods on the URL class. These are url.copy_set_param(), url.copy_add_param(), url.copy_remove_param(), url.copy_merge_params(). (Pull #1601)
    • The httpx.URL class now performs port normalization, so :80 ports are stripped from http URLs and :443 ports are stripped from https URLs. (Pull #1603)
    • The URL.host property returns unicode strings for internationalized domain names. The URL.raw_host property returns byte strings with IDNA escaping applied. (Pull #1590)

    Fixed

    • Fix Content-Length for cases of files=... where unicode string is used as the file content. (Pull #1537)
    • Fix some cases of merging relative URLs against Client(base_url=...). (Pull #1532)
    • The request.content attribute is now always available except for streaming content, which requires an explicit .read(). (Pull #1583)
    Source code(tar.gz)
    Source code(zip)
  • 0.17.1(Mar 15, 2021)

    0.17.1

    Fixed

    • Type annotation on CertTypes allows keyfile and password to be optional. (Pull #1503)
    • Fix httpcore pinned version. (Pull #1495)
    Source code(tar.gz)
    Source code(zip)
  • 0.17.0(Feb 28, 2021)

    0.17.0

    Added

    • Add httpx.MockTransport(), allowing to mock out a transport using pre-determined responses. (Pull #1401, Pull #1449)
    • Add httpx.HTTPTransport() and httpx.AsyncHTTPTransport() default transports. (Pull #1399)
    • Add mount API support, using httpx.Client(mounts=...). (Pull #1362)
    • Add chunk_size parameter to iter_raw(), iter_bytes(), iter_text(). (Pull #1277)
    • Add keepalive_expiry parameter to httpx.Limits() configuration. (Pull #1398)
    • Add repr to httpx.Cookies to display available cookies. (Pull #1411)
    • Add support for params=<tuple> (previously only params=<list> was supported). (Pull #1426)

    Fixed

    • Add missing raw_path to ASGI scope. (Pull #1357)
    • Tweak create_ssl_context defaults to use trust_env=True. (Pull #1447)
    • Properly URL-escape WSGI PATH_INFO. (Pull #1391)
    • Properly set default ports in WSGI transport. (Pull #1469)
    • Properly encode slashes when using base_url. (Pull #1407)
    • Properly map exceptions in request.aclose(). (Pull #1465)
    Source code(tar.gz)
    Source code(zip)
  • 0.16.1(Oct 8, 2020)

    0.16.1 (October 8th, 2020)

    Fixed

    • Support literal IPv6 addresses in URLs. (Pull #1349)
    • Force lowercase headers in ASGI scope dictionaries. (Pull #1351)
    Source code(tar.gz)
    Source code(zip)
  • 0.16.0(Oct 6, 2020)

    0.16.0 (October 6th, 2020)

    Changed

    • Preserve HTTP header casing. (Pull #1338, encode/httpcore#216, python-hyper/h11#104)
    • Drop response.next() and response.anext() methods in favour of response.next_request attribute. (Pull #1339)
    • Closed clients now raise a runtime error if attempting to send a request. (Pull #1346)

    Added

    • Add Python 3.9 to officially supported versions.
    • Type annotate __enter__/__exit__/__aenter__/__aexit__ in a way that supports subclasses of Client and AsyncClient. (Pull #1336)
    Source code(tar.gz)
    Source code(zip)
  • 0.15.5(Oct 1, 2020)

  • 0.15.4(Sep 25, 2020)

    0.15.4 (September 25th, 2020)

    Added

    • Support direct comparisons between Headers and dicts or lists of two-tuples. Eg. assert response.headers == {"Content-Length": "24"} (Pull #1326)

    Fixed

    • Fix automatic .read() when Response instances are created with content=<str> (Pull #1324)
    Source code(tar.gz)
    Source code(zip)
  • 0.15.3(Sep 24, 2020)

  • 0.15.2(Sep 23, 2020)

    0.15.2 (September 23rd, 2020)

    Fixed

    • Fixed response.elapsed property. (Pull #1313)
    • Fixed client authentication interaction with .stream(). (Pull #1312)
    Source code(tar.gz)
    Source code(zip)
  • 0.15.1(Sep 23, 2020)

    0.15.1 (September 23rd, 2020)

    Fixed

    • ASGITransport now properly applies URL decoding to the path component, as-per the ASGI spec. (Pull #1307)
    Source code(tar.gz)
    Source code(zip)
  • 0.15.0(Sep 22, 2020)

    0.15.0 (22nd September 2020)

    Added

    • Added support for curio. (Pull https://github.com/encode/httpcore/pull/168)
    • Added support for event hooks. (Pull #1246)
    • Added support for authentication flows which require either sync or async I/O. (Pull #1217)
    • Added support for monitoring download progress with response.num_bytes_downloaded. (Pull #1268)
    • Added Request(content=...) for byte content, instead of overloading Request(data=...) (Pull #1266)
    • Added support for all URL components as parameter names when using url.copy_with(...). (Pull #1285)
    • Neater split between automatically populated headers on Request instances, vs default client.headers. (Pull #1248)
    • Unclosed AsyncClient instances will now raise warnings if garbage collected. (Pull #1197)
    • Support Response(content=..., text=..., html=..., json=...) for creating usable response instances in code. (Pull #1265, #1297)
    • Support instantiating requests from the low-level transport API. (Pull #1293)
    • Raise errors on invalid URL types. (Pull #1259)

    Changed

    • Cleaned up expected behaviour for URL escaping. url.path is now URL escaped. (Pull #1285)
    • Cleaned up expected behaviour for bytes vs str in URL components. url.userinfo and url.query are not URL escaped, and so return bytes. (Pull #1285)
    • Drop url.authority property in favour of url.netloc, since "authority" was semantically incorrect. (Pull #1285)
    • Drop url.full_path property in favour of url.raw_path, for better consistency with other parts of the API. (Pull #1285)
    • No longer use the chardet library for auto-detecting charsets, instead defaulting to a simpler approach when no charset is specified. (#1269)

    Fixed

    • Swapped ordering of redirects and authentication flow. (Pull #1267)
    • .netrc lookups should use host, not host+port. (Pull #1298)

    Removed

    • The URLLib3Transport class no longer exists. We've published it instead as an example of a custom transport class. (Pull #1182)
    • Drop request.timer attribute, which was being used internally to set response.elapsed. (Pull #1249)
    • Drop response.decoder attribute, which was being used internally. (Pull #1276)
    • Request.prepare() is now a private method. (Pull #1284)
    Source code(tar.gz)
    Source code(zip)
  • 0.14.3(Sep 2, 2020)

    0.14.3 (September 2nd, 2020)

    Added

    • httpx.Response() may now be instantiated without a request=... parameter. Useful for some unit testing cases. (Pull #1238)
    • Add 103 Early Hints and 425 Too Early status codes. (Pull #1244)

    Fixed

    • DigestAuth now handles responses that include multiple 'WWW-Authenticate' headers. (Pull #1240)
    • Call into transport __enter__/__exit__ or __aenter__/__aexit__ when client is used in a context manager style. (Pull #1218)
    Source code(tar.gz)
    Source code(zip)
  • 0.14.2(Aug 24, 2020)

    0.14.2 (August 24th, 2020)

    Added

    • Support client.get(..., auth=None) to bypass the default authentication on a client. (Pull #1115)
    • Support client.auth = ... property setter. (Pull #1185)
    • Support httpx.get(..., proxies=...) on top-level request functions. (Pull #1198)
    • Display instances with nicer import styles. (Eg. <httpx.ReadTimeout ...>) (Pull #1155)
    • Support cookies=[(key, value)] list-of-two-tuples style usage. (Pull #1211)

    Fixed

    • Ensure that automatically included headers on a request may be modified. (Pull #1205)
    • Allow explicit Content-Length header on streaming requests. (Pull #1170)
    • Handle URL quoted usernames and passwords properly. (Pull #1159)
    • Use more consistent default for HEAD requests, setting allow_redirects=True. (Pull #1183)
    • If a transport error occurs while streaming the response, raise an httpx exception, not the underlying httpcore exception. (Pull #1190)
    • Include the underlying httpcore traceback, when transport exceptions occur. (Pull #1199)
    Source code(tar.gz)
    Source code(zip)
  • 0.14.1(Aug 11, 2020)

    0.14.1 (August 11th, 2020)

    Added

    • The httpx.URL(...) class now raises httpx.InvalidURL on invalid URLs, rather than exposing the underlying rfc3986 exception. If a redirect response includes an invalid 'Location' header, then a RemoteProtocolError exception is raised, which will be associated with the request that caused it. (Pull #1163)

    Fixed

    • Handling multiple Set-Cookie headers became broken in the 0.14.0 release, and is now resolved. (Pull #1156)
    Source code(tar.gz)
    Source code(zip)
  • 0.14.0(Aug 7, 2020)

    0.14.0 (August 7th, 2020)

    The 0.14 release includes a range of improvements to the public API, in preparation for our upcoming 1.0 release.

    • Our HTTP/2 support is now fully optional. You now need to use pip install httpx[http2] if you want to include the HTTP/2 dependencies.
    • Our HSTS support has now been removed. Rewriting URLs from http to https if the host is on the HSTS list can be beneficial in avoiding roundtrips to incorrectly formed URLs, but on balance we've decided to remove this feature, on the principle of least surprise. Most programmatic clients do not include HSTS support, and for now we're opting to remove our support for it.
    • Our exception hierarchy has been overhauled. Most users will want to stick with their existing httpx.HTTPError usage, but we've got a clearer overall structure now. See https://www.python-httpx.org/exceptions/ for more details.

    When upgrading you should be aware of the following public API changes. Note that deprecated usages will currently continue to function, but will issue warnings.

    • You should now use httpx.codes consistently in favour of httpx.StatusCodes.
    • Usage of httpx.Timeout() should now always include an explicit default. Eg. httpx.Timeout(None, pool=5.0).
    • When using httpx.Timeout(), we now have more concisely named keyword arguments. Eg. read=5.0, instead of read_timeout=5.0.
    • Use httpx.Limits() instead of httpx.PoolLimits(), and limits=... instead of pool_limits=....
    • The httpx.Limits(max_keepalive=...) argument is now deprecated in favour of a more explicit httpx.Limits(max_keepalive_connections=...)
    • Keys used with Client(proxies={...}) should now be in the style of {"http://": ...}, rather than {"http": ...}.
    • The multidict methods Headers.getlist() and QueryParams.getlist() are deprecated in favour of more consistent .get_list() variants.
    • The URL.is_ssl property is deprecated in favour of URL.scheme == "https".
    • The URL.join(relative_url=...) method is now URL.join(url=...). This change does not support warnings for the deprecated usage style.

    One notable aspect of the 0.14.0 release is that it tightens up the public API for httpx, by ensuring that several internal attributes and methods have now become strictly private.

    The following previously had nominally public names on the client, but were all undocumented and intended solely for internal usage. They are all now replaced with underscored names, and should not be relied on or accessed.

    These changes should not affect users who have been working from the httpx documentation.

    • .merge_url(), .merge_headers(), .merge_cookies(), .merge_queryparams()
    • .build_auth(), .build_redirect_request()
    • .redirect_method(), .redirect_url(), .redirect_headers(), .redirect_stream()
    • .send_handling_redirects(), .send_handling_auth(), .send_single_request()
    • .init_transport(), .init_proxy_transport()
    • .proxies, .transport, .netrc, .get_proxy_map()

    See pull requests #997, #1065, #1071.

    Some areas of API which were already on the deprecation path, and were raising warnings or errors in 0.13.x have now been escalated to being fully removed.

    • Drop ASGIDispatch, WSGIDispatch, which have been replaced by ASGITransport, WSGITransport.
    • Drop dispatch=... on client, which has been replaced by transport=....
    • Drop soft_limit, hard_limit, which have been replaced by max_keepalive and max_connections.
    • Drop Response.stream and Response.raw, which have been replaced by .aiter_bytes and .aiter_raw.
    • Drop proxies=<transport instance> in favor of proxies=httpx.Proxy(...).

    See pull requests #1057, #1058.

    Added

    • Added dedicated exception class httpx.HTTPStatusError for .raise_for_status() exceptions. (Pull #1072)
    • Added httpx.create_ssl_context() helper function. (Pull #996)
    • Support for proxy exclusions like proxies={"https://www.example.com": None}. (Pull #1099)
    • Support QueryParams(None) and client.params = None. (Pull #1060)

    Changed

    • Use httpx.codes consistently in favour of httpx.StatusCodes which is placed into deprecation. (Pull #1088)
    • Usage of httpx.Timeout() should now always include an explicit default. Eg. httpx.Timeout(None, pool=5.0). (Pull #1085)
    • Switch to more concise httpx.Timeout() keyword arguments. Eg. read=5.0, instead of read_timeout=5.0. (Pull #1111)
    • Use httpx.Limits() instead of httpx.PoolLimits(), and limits=... instead of pool_limits=.... (Pull #1113)
    • The httpx.Limits(max_keepalive=...) argument is now deprecated in favour of a more explicit httpx.Limits(max_keepalive_connections=...).
    • Keys used with Client(proxies={...}) should now be in the style of {"http://": ...}, rather than {"http": ...}. (Pull #1127)
    • The multidict methods Headers.getlist and QueryParams.getlist are deprecated in favour of more consistent .get_list() variants. (Pull #1089)
    • URL.port becomes Optional[int]. Now only returns a port if one is explicitly included in the URL string. (Pull #1080)
    • The URL(..., allow_relative=[bool]) parameter no longer exists. All URL instances may be relative. (Pull #1073)
    • Drop unnecessary url.full_path = ... property setter. (Pull #1069)
    • The URL.join(relative_url=...) method is now URL.join(url=...). (Pull #1129)
    • The URL.is_ssl property is deprecated in favour of URL.scheme == "https". (Pull #1128)

    Fixed

    • Add missing Response.next() method. (Pull #1055)
    • Ensure all exception classes are exposed as public API. (Pull #1045)
    • Support multiple items with an identical field name in multipart encodings. (Pull #777)
    • Skip HSTS preloading on single-label domains. (Pull #1074)
    • Fixes for Response.iter_lines(). (Pull #1033, #1075)
    • Ignore permission errors when accessing .netrc files. (Pull #1104)
    • Allow bare hostnames in HTTP_PROXY etc... environment variables. (Pull #1120)
    • Setting app=... or transport=... bypasses any environment based proxy defaults. (Pull #1122)
    • Fix handling of .base_url when a path component is included in the base URL. (Pull #1130)
    Source code(tar.gz)
    Source code(zip)
  • 0.13.3(May 29, 2020)

    0.13.3 (May 29th, 2020)

    Fixed

    • Include missing keepalive expiry configuration. (Pull #1005)
    • Improved error message when URL redirect has a custom scheme. (Pull #1002)
    Source code(tar.gz)
    Source code(zip)