Official mirror of https://gitlab.com/pgjones/hypercorn. Documentation: https://pgjones.gitlab.io/hypercorn/

Overview

Hypercorn

Hypercorn logo


Hypercorn is an ASGI web server based on the sans-io hyper, h11, h2, and wsproto libraries and inspired by Gunicorn. Hypercorn supports HTTP/1, HTTP/2, WebSockets (over HTTP/1 and HTTP/2), ASGI/2, and ASGI/3 specifications. Hypercorn can utilise asyncio, uvloop, or trio worker types.

Hypercorn can optionally serve the current draft of the HTTP/3 specification using the aioquic library. To enable this, install the h3 optional extra, pip install hypercorn[h3], and then choose a QUIC binding, e.g. hypercorn --quic-bind localhost:4433 ....
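
For example, putting the two steps together (module:app, cert.pem, and key.pem are placeholders for your own application and certificate paths; QUIC requires TLS):

$ pip install hypercorn[h3]
$ hypercorn --certfile cert.pem --keyfile key.pem --quic-bind localhost:4433 module:app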

Hypercorn was initially part of Quart before being separated out into a standalone ASGI server. Hypercorn forked from version 0.5.0 of Quart.

Quickstart

Hypercorn can be installed via pip,

$ pip install hypercorn

and requires Python 3.7.0 or higher.

With Hypercorn installed, ASGI frameworks (or apps) can be served via the command line,

$ hypercorn module:app

Alternatively, Hypercorn can be used programmatically,

import asyncio
from hypercorn.config import Config
from hypercorn.asyncio import serve

from module import app

asyncio.run(serve(app, Config()))

Learn more (including a Trio example of the above) in the API usage docs.
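
For reference, the Trio variant follows the same pattern; a minimal sketch (see the API usage docs for the canonical version):

import trio
from hypercorn.config import Config
from hypercorn.trio import serve

from module import app

trio.run(serve, app, Config())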

Contributing

Hypercorn is developed on GitLab. If you come across an issue, or have a feature request, please open an issue. If you want to contribute a fix or a feature implementation, please do (typo fixes welcome) by proposing a merge request.

Testing

The best way to test Hypercorn is with Tox,

$ pipenv install tox
$ tox

this will check the code style and run the tests.

Help

The Hypercorn documentation is the best place to start. After that, try searching Stack Overflow; if you still can't find an answer, please open an issue.

Comments
  • start_next_cycle() results in h11._util.LocalProtocolError: not in a reusable state

    A LocalProtocolError("not in a reusable state") exception is regularly raised from the start_next_cycle() call in hypercorn.asyncio.H11Server.recycle_or_close().

    Could this be a state problem in hypercorn? A similar issue affected some example code of the h11 project (https://github.com/python-hyper/h11/issues/70). I'm seeing a lot of these exceptions (10 in a 22 hour period) but I'm not sure what the impact is on my code as I've not finished investigating the side-effect. At the moment I don't think I've lost any data but what's causing the state exception?

    asyncio ERROR # Exception in callback H11Server.recycle_or_close(<Task finishe...> result=None>)
    handle: <Handle H11Server.recycle_or_close(<Task finishe...> result=None>)>
    Traceback (most recent call last):
      File "/usr/local/lib/python3.7/asyncio/events.py", line 88, in _run
        self._context.run(self._callback, *self._args)
      File "/usr/local/lib/python3.7/site-packages/hypercorn/asyncio/h11.py", line 103, in recycle_or_close
        self.connection.start_next_cycle()
      File "/usr/local/lib/python3.7/site-packages/h11/_connection.py", line 204, in start_next_cycle
        self._cstate.start_next_cycle()
      File "/usr/local/lib/python3.7/site-packages/h11/_state.py", line 298, in start_next_cycle
        raise LocalProtocolError("not in a reusable state")
    h11._util.LocalProtocolError: not in a reusable state
    

    I'm using hypercorn 0.6.0 and it's pulled in h11 0.8.1

    opened by alanbchristie 31
  • A "while_serving" decorator?

    What do you think about having another decorator, like the one below, that would allow you to control when the server is shut down?

    This is useful when you need a server up for X seconds, rather than forever. I think you could spin up a thread with a timer and have that call loop.stop() when done, but that's dirty.

    @app.while_serving
    async def shutdown_when_idle_for_seconds():
        while True:
            if condition:
                raise Shutdown
            await sleep(1)
    

    Great for testing!
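
    A minimal sketch of how a time-boxed run can be approximated today with the programmatic API, assuming the shutdown_trigger keyword accepted by hypercorn.asyncio.serve (module:app stands in for the application):

    import asyncio
    from hypercorn.config import Config
    from hypercorn.asyncio import serve

    from module import app  # placeholder application import

    async def stop_after(seconds: float) -> None:
        # serve() begins its shutdown once the awaitable returned by
        # shutdown_trigger completes
        await asyncio.sleep(seconds)

    asyncio.run(serve(app, Config(), shutdown_trigger=lambda: stop_after(60)))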

    opened by jonozzz 17
  • hypercorn not working on Windows 10 ?

    hi,

    Trying the "hello" example on Python 3.7 and 3.8rc, I hit "add_signal_handler" not implemented in asyncio. Is this normal?

    I had hoped hypercorn was OK under Windows, so I may have done something wrong?

      File "C:\WinP\bd38\bucod\WPy64-3800rc1\python-3.8.0rc1.amd64\lib\site-packages\hypercorn\asyncio\run.py", line 212, in _run
        loop.add_signal_handler(signal.SIGINT, _signal_handler)
      File "C:\WinP\bd38\bucod\WPy64-3800rc1\python-3.8.0rc1.amd64\lib\asyncio\events.py", line 536, in add_signal_handler
        raise NotImplementedError
    NotImplementedError
    

    hello.py

    async def app(scope, receive, send):
        if scope["type"] != "http":
            raise Exception("Only the HTTP protocol is supported")
    
        await send({
            'type': 'http.response.start',
            'status': 200,
            'headers': [
                (b'content-type', b'text/plain'),
                (b'content-length', b'5'),
            ],
        })
        await send({
            'type': 'http.response.body',
            'body': b'hello',
        })
    
    opened by stonebig 13
  • HTTP/2 usage and Windows support?

    Hi

    I just found your excellent tutorial on how to use Quart with gunicorn, which I know isn't supported on Windows 10, so I decided to attempt to use hypercorn instead. Unfortunately it's crashing with this error.


    hypercorn --keyfile key.pem --certfile cert.pem --ciphers ECDHE+AESGCM --bind localhost:5000 http2test:app
    Running on https://localhost:5000 (CTRL + C to quit)
    Traceback (most recent call last):
      File "c:\users\x\appdata\local\programs\python\python37-32\Lib\runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "c:\users\x\appdata\local\programs\python\python37-32\Lib\runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "C:\Users\x\.virtualenvs\QuartTest-s-3dKnLB\Scripts\hypercorn.exe\__main__.py", line 9, in <module>
      File "c:\users\x\.virtualenvs\quarttest-s-3dknlb\lib\site-packages\hypercorn\__main__.py", line 159, in main
        run_multiple(config)
      File "c:\users\x\.virtualenvs\quarttest-s-3dknlb\lib\site-packages\hypercorn\run.py", line 234, in run_multiple
        process.start()
      File "c:\users\x\appdata\local\programs\python\python37-32\Lib\multiprocessing\process.py", line 112, in start
        self._popen = self._Popen(self)
      File "c:\users\x\appdata\local\programs\python\python37-32\Lib\multiprocessing\context.py", line 223, in _Popen
        return _default_context.get_context().Process._Popen(process_obj)
      File "c:\users\x\appdata\local\programs\python\python37-32\Lib\multiprocessing\context.py", line 322, in _Popen
        return Popen(process_obj)
      File "c:\users\x\appdata\local\programs\python\python37-32\Lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
        reduction.dump(process_obj, to_child)
      File "c:\users\x\appdata\local\programs\python\python37-32\Lib\multiprocessing\reduction.py", line 60, in dump
        ForkingPickler(file, protocol).dump(obj)
    TypeError: can't pickle SSLContext objects

    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "c:\users\x\appdata\local\programs\python\python37-32\Lib\multiprocessing\spawn.py", line 99, in spawn_main
        new_handle = reduction.steal_handle(parent_pid, pipe_handle)
      File "c:\users\x\appdata\local\programs\python\python37-32\Lib\multiprocessing\reduction.py", line 87, in steal_handle
        _winapi.DUPLICATE_SAME_ACCESS | _winapi.DUPLICATE_CLOSE_SOURCE)
    PermissionError: [WinError 5] Access is denied

    Is there a working usage example? It's working fine with http1.

    Thanks and keep up the great work!

    opened by McSpidey 13
  • Task was destroyed but it is pending!

    I'm running a pretty simple quart app with hypercorn on a RHEL7 Linux server (quart = 0.10.0, hypercorn = 0.9.0).

    I keep seeing these messages printed to standard out randomly every couple requests.

    Task was destroyed but it is pending!
    task: <Task pending coro=<ProtocolWrapper.send_task() done, defined at /root/.local/share/virtualenvs/atomix-Zh2dG3yU/lib/python3.7/site-packages/hypercorn/protocol/__init__.py:58> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x7f08fc3db1d0>()]>>
    Task was destroyed but it is pending!
    task: <Task pending coro=<ProtocolWrapper.send_task() done, defined at /root/.local/share/virtualenvs/atomix-Zh2dG3yU/lib/python3.7/site-packages/hypercorn/protocol/__init__.py:58> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x7f08f55188d0>()]>>
    

    I'm pretty sure it has to do with hypercorn because I don't see the messages when I run the app using app.run(...). I'm having trouble narrowing down the problem any more than that... Is anyone else seeing these messages?

    Every once in a while I have a JSON request fail. The response body is mysteriously empty. The content-length is set to the correct non-zero value but nothing is in the body. I'm not sure if it's related to this or not.

    opened by micahdlamb 11
  • Hypercorn runs with duplicated process

    Hi all,

    I am not sure whether this is really hypercorn issue, but could not imagine what else can be so please bear with me.

    I am running a server with hypercorn on Ubuntu 20.04.

    The problem is that it runs with duplicated processes in the background.

    root     2278497  0.8  0.1  41872 33568 pts/7    S    10:03   0:00 /usr/bin/python3 /usr/local/bin/hypercorn -c config.toml main:app --reload
    root     2278499  0.0  0.0  17304 11332 pts/7    S    10:03   0:00 /usr/bin/python3 -c from multiprocessing.resource_tracker import main;main(4)
    root     2278500  0.7  0.1  41648 34148 pts/7    S    10:03   0:00 /usr/bin/python3 -c from multiprocessing.spawn import spawn_main; spawn_main(tracker_fd=5, pipe_handle=7) --multiprocessing-fork
    

    The main process is 2278497, but there are duplicated processes 2278499 and 2278500. This causes unwanted effects by executing the same tasks twice.

    How can I avoid that?

    EDIT A minimal example:

    # test_main.py
    from fastapi import FastAPI
    
    app = FastAPI()
    
    @app.get("/")
    async def root():
        return {"message": "Hello World"}
    
    print("main module loaded.")
    

    I then type:

    sudo hypercorn test_main:app

    and the stdout is:

    main module loaded.
    main module loaded.
    [2022-11-02 15:08:45 +0100] [2364437] [INFO] Running on http://127.0.0.1:8000 (CTRL + C to quit)
    

    If I use uvicorn the message is printed only one time, as expected:

    $ sudo uvicorn test_main:app
    
    main module loaded.
    INFO:     Started server process [2364692]
    INFO:     Waiting for application startup.
    INFO:     Application startup complete.
    INFO:     Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
    
    opened by stevstrong 10
  • Request freezes if I get throttled by an extra service

    Hi @pgjones, I am currently using hypercorn with a Discord server that I created with quart. I have a code excerpt that looks like this:

    
    @app.route("/create_invite", methods=['GET'])
    async def create_invite():
        """
        create an invite for the guild.
        """
        # wait_until_ready and check for valid connection is missing here
        guilds = client.guilds
        guild = guilds[0]
        link_str = 'n/a'
        for channel in guild.channels:
            if channel.name == 'general':
                print('creating invite')
                link = await channel.create_invite()
                link_str = str(link)
                break
        return jsonify({'invite_link': link_str}), 200
    

    However, I think after 5 requests (link = await channel.create_invite()), I get throttled. When I run python main.py normally, I have to wait a bit of extra time; but when I run with hypercorn, I have to wait indefinitely. How can I fix this?

    opened by peasant98 8
  • Run hypercorn with multiple workers on Windows - WinError 10022

    Describe the bug: Hypercorn doesn't seem to be able to start with multiple workers on Windows. It seems the socket is not ready to bind, and Windows therefore throws an exception. This may be different on Unix systems.

    Code snippet: Starlette Hello World

    main.py

    from starlette.applications import Starlette
    from starlette.responses import JSONResponse
    
    app = Starlette(debug=True)
    
    
    @app.route('/')
    async def homepage(request):
        return JSONResponse({'hello': 'world'})
    
    

    Expected behavior: Hypercorn starts and is ready to serve with multiple workers.

    Environment

    • Windows 10
    • Hypercorn: 0.5.3
    • Python 3.6.6 (same issue with Python 3.7.2)
    • pipenv
    • Starlette 0.11.4

    Additional context

    Command: pipenv run hypercorn main:app -w 2

    Exception:

    Running on 127.0.0.1:8000 over http (CTRL + C to quit)
    Process Process-2:
    
    Traceback (most recent call last):
      File "Python36_64\Lib\multiprocessing\process.py", line 258, in _bootstrap
        self.run()
      File "Python36_64\Lib\multiprocessing\process.py", line 93, in run
        self._target(*self._args, **self._kwargs)
      File ".virtualenvs\starlettehelloworld-wohnvuzv\lib\site-packages\hypercorn\asyncio\run.py", line 214, in asyncio_worker
        debug=config.debug,
      File ".virtualenvs\starlettehelloworld-wohnvuzv\lib\site-packages\hypercorn\asyncio\run.py", line 244, in _run
        loop.run_until_complete(main)
      File "Python36_64\Lib\asyncio\base_events.py", line 468, in run_until_complete
        return future.result()
      File ".virtualenvs\starlettehelloworld-wohnvuzv\lib\site-packages\hypercorn\asyncio\run.py", line 178, in worker_serve
        for sock in sockets
      File ".virtualenvs\starlettehelloworld-wohnvuzv\lib\site-packages\hypercorn\asyncio\run.py", line 178, in <listcomp>
        for sock in sockets
      File "Python36_64\Lib\asyncio\base_events.py", line 1065, in create_server
        sock.listen(backlog)
    OSError: [WinError 10022] An invalid argument was supplied
    
    
    opened by marodev 8
  • --reload not working with Quart

    I've been trying to get Hypercorn + Quart autoreloading on code change working and think I've found a bug.

    The Hypercorn usage doc suggests the correct flag is "--reload" https://github.com/pgjones/hypercorn/blob/master/docs/usage.rst

    When I use this it starts fine, but the moment I edit the Quart source code file and save it, Hypercorn crashes with this error: "unknown option --reload"

    I've checked the Quart docs (https://pgjones.gitlab.io/quart/source/quart.app.html) and they suggest the internal flag may be "use_reloader"; however, if I try that with Hypercorn it doesn't start, giving this error: "hypercorn: error: unrecognized arguments: --use_reloader"

    In case it was a mismatch between versions I just tried uninstalling and re-installing both Quart and Hypercorn but there was no change. What's the best way to go about this?

    opened by McSpidey 8
  • PicklingError on Ubuntu

    Traceback (most recent call last):
      File "/layers/google.python.pip/pip/bin/hypercorn", line 8, in <module>
        sys.exit(main())
      File "/layers/google.python.pip/pip/lib/python3.9/site-packages/hypercorn/__main__.py", line 287, in main
        run(config)
      File "/layers/google.python.pip/pip/lib/python3.9/site-packages/hypercorn/run.py", line 53, in run
        processes = start_processes(config, worker_func, sockets, shutdown_event, ctx)
      File "/layers/google.python.pip/pip/lib/python3.9/site-packages/hypercorn/run.py", line 95, in start_processes
        process.start()
      File "/opt/python3.9/lib/python3.9/multiprocessing/process.py", line 121, in start
        self._popen = self._Popen(self)
      File "/opt/python3.9/lib/python3.9/multiprocessing/context.py", line 284, in _Popen
        return Popen(process_obj)
      File "/opt/python3.9/lib/python3.9/multiprocessing/popen_spawn_posix.py", line 32, in __init__
        super().__init__(process_obj)
      File "/opt/python3.9/lib/python3.9/multiprocessing/popen_fork.py", line 19, in __init__
        self._launch(process_obj)
      File "/opt/python3.9/lib/python3.9/multiprocessing/popen_spawn_posix.py", line 47, in _launch
        reduction.dump(process_obj, fp)
      File "/opt/python3.9/lib/python3.9/multiprocessing/reduction.py", line 60, in dump
        ForkingPickler(file, protocol).dump(obj)
    _pickle.PicklingError: Can't pickle <function getAvailableMemory at 0x3e3cacbe7550>: import of module 'module.name' failed
    
    opened by pgjones 7
  • Fails to load cert chain when using signed certificates

    Hi, I believe there is a bug at

    https://github.com/pgjones/hypercorn/blob/master/hypercorn/config.py#L158

    when I use quart with a signed certificate as follows

    app.run(ca_certs='ca.crt', certfile='cert.crt', keyfile='key.pem')

    I get the following

    hypercorn/config.py", line 158, in create_ssl_context
        context.load_cert_chain(certfile=self.certfile, keyfile=self.keyfile)
    ssl.SSLError: [SSL] PEM lib (_ssl.c:3824)

    I believe line https://github.com/pgjones/hypercorn/blob/master/hypercorn/config.py#L160 should be called before

    https://github.com/pgjones/hypercorn/blob/master/hypercorn/config.py#L158

    Anyway, I have reverted to unsigned certs for now and will probably just use gunicorn, but I thought I would let you know about this bug. Thank you for your quart project, which I am really loving.

    opened by tjtaill 7
  • Is it possible to customize the error log?

    How can I customize my error logs? My access logs include a header called x-trace-id; when a request results in an error, I want to see that trace id in the error log, but I couldn't make it work.

    accesslog = 'app/storage/logs/access.log'
    errorlog = 'app/storage/logs/error.log'
    access_log_format = '~ %({x-forwarded-for}i)s ~ %({x-request-id}o)s ~ "%(r)s" %(s)s'
    umask = 0o771

    access.log

    [2022-11-25 00:59:03 +0300] [19978] [INFO] ~ 127.0.0.1 ~ 8f5c3abcbcbf47dabb5423ee3fde00e2 ~ "POST /api/v1/login 1.0" 500

    error.log

    [2022-11-25 00:59:03 +0300] [18440] [ERROR] Error in ASGI Framework

    opened by turkic-dev 0
  • Access log prints only part of request path when using FastAPI subapps

    Hello, I'm using FastAPI subapps and the Hypercorn access log shows only part of a request path (relative to a subapp root_path). Here is an example of the request object passed to the AccessLogAtoms constructor; the access log looks like

    127.0.0.1:57296 - "GET /donation/93271720-ed81-4c4a-9b66-82a57088ea1f 1.1" 0.943398 200 207784 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:107.0) Gecko/20100101 Firefox/107.0"
    

    Corresponding request object:

    {'app': <fastapi.applications.FastAPI object at 0x7f163110b8b0>,
     'app_root_path': '',
     'asgi': {'spec_version': '2.1', 'version': '3.0'},
     'client': ('127.0.0.1', 37846),
     'endpoint': <function donation_image at 0x7f16313df400>,
     'extensions': {},
     'fastapi_astack': <contextlib.AsyncExitStack object at 0x7f162f5edd20>,
     'headers': [(b'host', b'localhost:8000'),
                 (b'connection', b'keep-alive'),
                 (b'upgrade-insecure-requests', b'1'),
                 (b'user-agent',
                  b'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, l'
                  b'ike Gecko) HeadlessChrome/108.0.5359.29 Safari/537.36'),
                 (b'accept',
                  b'text/html,application/xhtml+xml,application/xml;q=0.9,image/'
                  b'avif,image/webp,image/apng,*/*;q=0.8,application/signed-exch'
                  b'ange;v=b3;q=0.9'),
                 (b'sec-fetch-site', b'none'),
                 (b'sec-fetch-mode', b'navigate'),
                 (b'sec-fetch-user', b'?1'),
                 (b'sec-fetch-dest', b'document'),
                 (b'accept-encoding', b'gzip, deflate, br')],
     'http_version': '1.1',
     'method': 'GET',
     'path': '/donation/93271720-ed81-4c4a-9b66-82a57088ea1f',
     'path_params': {'donation_id': '93271720-ed81-4c4a-9b66-82a57088ea1f'},
     'query_string': b'',
     'raw_path': b'/preview/donation/93271720-ed81-4c4a-9b66-82a57088ea1f',
     'root_path': '/preview',
     'route': <fastapi.routing.APIRoute object at 0x7f1630fe1f90>,
     'router': <fastapi.routing.APIRouter object at 0x7f163110b910>,
     'scheme': 'http',
     'server': ('127.0.0.1', 8000),
     'type': 'http'}
    

    With such behavior it's difficult to distinguish similar subpaths from different subapps in the access log.

    opened by nikicat 1
  • Error deploying ver 0.14.3 on DigitalOcean Apps Platform

    After upgrading from ver. 0.11.2 to 0.14.3, my application container does not start on DO Apps with the following error:

    Traceback (most recent call last):
      File "/usr/local/bin/hypercorn", line 8, in <module>
        sys.exit(main())
      File "/usr/local/lib/python3.8/site-packages/hypercorn/__main__.py", line 287, in main
        run(config)
      File "/usr/local/lib/python3.8/site-packages/hypercorn/run.py", line 52, in run
        shutdown_event = ctx.Event()
      File "/usr/local/lib/python3.8/multiprocessing/context.py", line 93, in Event
        return Event(ctx=self.get_context())
      File "/usr/local/lib/python3.8/multiprocessing/synchronize.py", line 324, in __init__
        self._cond = ctx.Condition(ctx.Lock())
      File "/usr/local/lib/python3.8/multiprocessing/context.py", line 68, in Lock
        return Lock(ctx=self.get_context())
      File "/usr/local/lib/python3.8/multiprocessing/synchronize.py", line 162, in __init__
        SemLock.__init__(self, SEMAPHORE, 1, 1, ctx=ctx)
      File "/usr/local/lib/python3.8/multiprocessing/synchronize.py", line 57, in __init__
        sl = self._semlock = _multiprocessing.SemLock(
    OSError: [Errno 38] Function not implemented
    

    Maybe we can't spawn in these kinds of environments? I found a similar issue in AWS Lambda: https://stackoverflow.com/questions/34005930/multiprocessing-semlock-is-not-implemented-when-running-on-aws-lambda

    Any information would be welcome.

    opened by igortg 1
  • Issue when sending large payload (HTTP CODE 400 or 104)

    Hi,

    I'm facing weird behaviour when sending a large amount of data to hypercorn and flask. Here are the steps to reproduce:

    Generate the data file

    dd if=/dev/urandom of=test.output bs=50M count=2

    Start hypercorn

    hypercorn app:app

    Query

    With curl:

    curl -i -H 'Content-Length: 104857600' -X POST --data-binary "@test.output" http://localhost:8000/data

    or a python code:

    import requests
    requests.post("http://localhost:8000/data", files={"file": open('test.output', 'rb')})

    With curl the output is :

    HTTP/1.1 100 
    date: Fri, 04 Nov 2022 14:03:58 GMT
    server: hypercorn-h11
    
    HTTP/1.1 400 
    date: Fri, 04 Nov 2022 14:03:58 GMT
    server: hypercorn-h11
    Transfer-Encoding: chunked
    

    With python client:

    ConnectionError: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))

    The flask code is the following:

    import logging
    from time import strftime
    from flask import Flask, request
    
    app = Flask(__name__)
    logger = logging.getLogger('bench')
    logger.setLevel(logging.ERROR)
    
    
    @app.route('/data', methods=["POST"])
    def send_data():
        """Send large data."""
        data = request.get_data()
        return str(len(data)) + "\n"
    
    
    @app.before_request
    def before_request_func():
        timestamp = strftime('[%Y-%m-%d %T %z]')
        logger.error(f"{timestamp} Request received")
    
    
    @app.after_request
    def after_request(response):
        timestamp = strftime('[%Y-%m-%d %T %z]')
        logger.error('%s - - %s "%s %s" %s', request.remote_addr, timestamp,
                     request.method, request.full_path, response.status)
        return response
    

    Note: the code works with hypercorn + flask with a smaller payload but not with 100MB. Note: the output of the flask logger is empty, which means that the request doesn't even reach flask (hypercorn rejects it somehow).

    I have tried several combinations of configuration variables: h11_max_incomplete_size, h2_max_concurrent_streams, h2_max_header_list_size, h2_max_inbound_frame_size, max_app_queue_size, backlog, websocket_max_message_size without any success.

    This code works with flask alone or gunicorn+flask.

    Any thoughts? Thank you for your help :+1:

    opened by heavenboy8 2
  • Fix handling of TCP EOF

    This patch fixes handling of StreamReader.read for the case when the remote client terminates the connection. As documented here: https://docs.python.org/3.4/library/asyncio-stream.html#asyncio.StreamReader.read the read method returns b"" in this case.
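
    For illustration, a minimal sketch of the documented behaviour the patch relies on (read_until_eof and MAX_RECV are illustrative names, not Hypercorn internals):

    import asyncio

    MAX_RECV = 2 ** 16  # placeholder read size

    async def read_until_eof(reader: asyncio.StreamReader) -> None:
        while True:
            data = await reader.read(MAX_RECV)
            if data == b"":
                # an empty result means the remote client closed the connection
                # (TCP EOF), as documented for StreamReader.read
                break
            # ... hand `data` to the protocol here ...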

    opened by tohin 2
  • Specifying both IPv4 and IPv6 binds

        config = hypercorn.config.Config()
        config.bind = ["0.0.0.0:8088"]                 # OK
        config.bind = ["[::]:8088"]                    # OK
        config.bind = ["0.0.0.0:8088", "[::]:8088"]    # OSError: [Errno 98] Address already in use
        asyncio.run(hypercorn.asyncio.serve(endpoints.app, config))
    

    The first bind listens on IPv4 only.

    The variant in the middle actually listens on both IPv4 and IPv6 on my linux machine.

    https://pgjones.gitlab.io/hypercorn/how_to_guides/binds.html seems to suggest the latter, but hypercorn fails with this hairy traceback:

    [2022-10-11 10:03:29 +0900] [39335] [INFO] Running on http://0.0.0.0:8088 (CTRL + C to quit)
    unhandled exception during asyncio.run() shutdown
    task: <Task finished name='Task-2' coro=<Lifespan.handle_lifespan() done, defined at venv-py3.10/lib/python3.10/site-packages/hypercorn/asyncio/lifespan.py:31> exception=LifespanFailureError('Lifespan failure in shutdown. \'Traceback (most recent call last):\n  File "/usr/lib/python3.10/asyncio/runners.py", line 44, in run\n    return loop.run_until_complete(main)\n  File "/usr/lib/python3.10/asyncio/base_events.py", line 646, in run_until_complete\n    return future.result()\n  File "venv-py3.10/lib/python3.10/site-packages/hypercorn/asyncio/__init__.py", line 49, in serve\n    await worker_serve(\n  File "venv-py3.10/lib/python3.10/site-packages/hypercorn/asyncio/run.py", line 120, in worker_serve\n    await asyncio.start_server(_server_callback, backlog=config.backlog, sock=sock)\n  File "/usr/lib/python3.10/asyncio/streams.py", line 84, in start_server\n    return await loop.create_server(factory, host, port, **kwds)\n  File "/usr/lib/python3.10/asyncio/base_events.py", line 1526, in create_server\n    server._start_serving()\n  File "/usr/lib/python3.10/asyncio/base_events.py", line 318, in _start_serving\n    sock.listen(self._backlog)\nOSError: [Errno 98] Address already in use\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File "venv-py3.10/lib/python3.10/site-packages/starlette/routing.py", line 648, in lifespan\n    await receive()\n  File "venv-py3.10/lib/python3.10/site-packages/hypercorn/asyncio/lifespan.py", line 92, in asgi_receive\n    return await self.app_queue.get()\n  File "/usr/lib/python3.10/asyncio/queues.py", line 159, in get\n    await getter\nasyncio.exceptions.CancelledError\n\'')>
    Traceback (most recent call last):
      File "/usr/lib/python3.10/asyncio/runners.py", line 44, in run
        return loop.run_until_complete(main)
      File "/usr/lib/python3.10/asyncio/base_events.py", line 646, in run_until_complete
        return future.result()
      File "venv-py3.10/lib/python3.10/site-packages/hypercorn/asyncio/__init__.py", line 49, in serve
        await worker_serve(
      File "venv-py3.10/lib/python3.10/site-packages/hypercorn/asyncio/run.py", line 120, in worker_serve
        await asyncio.start_server(_server_callback, backlog=config.backlog, sock=sock)
      File "/usr/lib/python3.10/asyncio/streams.py", line 84, in start_server
        return await loop.create_server(factory, host, port, **kwds)
      File "/usr/lib/python3.10/asyncio/base_events.py", line 1526, in create_server
        server._start_serving()
      File "/usr/lib/python3.10/asyncio/base_events.py", line 318, in _start_serving
        sock.listen(self._backlog)
    OSError: [Errno 98] Address already in use
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "venv-py3.10/lib/python3.10/site-packages/starlette/routing.py", line 648, in lifespan
        await receive()
      File "venv-py3.10/lib/python3.10/site-packages/hypercorn/asyncio/lifespan.py", line 92, in asgi_receive
        return await self.app_queue.get()
      File "/usr/lib/python3.10/asyncio/queues.py", line 159, in get
        await getter
    asyncio.exceptions.CancelledError
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "venv-py3.10/lib/python3.10/site-packages/hypercorn/asyncio/lifespan.py", line 43, in handle_lifespan
        await self.app(
      File "venv-py3.10/lib/python3.10/site-packages/hypercorn/app_wrappers.py", line 33, in __call__
        await self.app(scope, receive, send)
      File "venv-py3.10/lib/python3.10/site-packages/fastapi/applications.py", line 270, in __call__
        await super().__call__(scope, receive, send)
      File "venv-py3.10/lib/python3.10/site-packages/starlette/applications.py", line 124, in __call__
        await self.middleware_stack(scope, receive, send)
      File "venv-py3.10/lib/python3.10/site-packages/starlette/middleware/errors.py", line 149, in __call__
        await self.app(scope, receive, send)
      File "venv-py3.10/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 51, in __call__
        await self.app(scope, receive, send)
      File "venv-py3.10/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
        raise e
      File "venv-py3.10/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
        await self.app(scope, receive, send)
      File "venv-py3.10/lib/python3.10/site-packages/starlette/routing.py", line 669, in __call__
        await self.lifespan(scope, receive, send)
      File "venv-py3.10/lib/python3.10/site-packages/starlette/routing.py", line 652, in lifespan
        await send({"type": "lifespan.shutdown.failed", "message": exc_text})
      File "venv-py3.10/lib/python3.10/site-packages/hypercorn/asyncio/lifespan.py", line 104, in asgi_send
        raise LifespanFailureError("shutdown", message["message"])
    hypercorn.utils.LifespanFailureError: Lifespan failure in shutdown. 'Traceback (most recent call last):
      File "/usr/lib/python3.10/asyncio/runners.py", line 44, in run
        return loop.run_until_complete(main)
      File "/usr/lib/python3.10/asyncio/base_events.py", line 646, in run_until_complete
        return future.result()
      File "venv-py3.10/lib/python3.10/site-packages/hypercorn/asyncio/__init__.py", line 49, in serve
        await worker_serve(
      File "venv-py3.10/lib/python3.10/site-packages/hypercorn/asyncio/run.py", line 120, in worker_serve
        await asyncio.start_server(_server_callback, backlog=config.backlog, sock=sock)
      File "/usr/lib/python3.10/asyncio/streams.py", line 84, in start_server
        return await loop.create_server(factory, host, port, **kwds)
      File "/usr/lib/python3.10/asyncio/base_events.py", line 1526, in create_server
        server._start_serving()
      File "/usr/lib/python3.10/asyncio/base_events.py", line 318, in _start_serving
        sock.listen(self._backlog)
    OSError: [Errno 98] Address already in use
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "venv-py3.10/lib/python3.10/site-packages/starlette/routing.py", line 648, in lifespan
        await receive()
      File "venv-py3.10/lib/python3.10/site-packages/hypercorn/asyncio/lifespan.py", line 92, in asgi_receive
        return await self.app_queue.get()
      File "/usr/lib/python3.10/asyncio/queues.py", line 159, in get
        await getter
    asyncio.exceptions.CancelledError
    '
    Traceback (most recent call last):
      File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
        return _run_code(code, main_globals, None,
      File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
        exec(code, run_globals)
      File "venvuth/__main__.py", line 59, in <module>
        asyncio.run(hypercorn.asyncio.serve(endpoints.app, config))
      File "/usr/lib/python3.10/asyncio/runners.py", line 44, in run
        return loop.run_until_complete(main)
      File "/usr/lib/python3.10/asyncio/base_events.py", line 646, in run_until_complete
        return future.result()
      File "venv-py3.10/lib/python3.10/site-packages/hypercorn/asyncio/__init__.py", line 49, in serve
        await worker_serve(
      File "venv-py3.10/lib/python3.10/site-packages/hypercorn/asyncio/run.py", line 120, in worker_serve
        await asyncio.start_server(_server_callback, backlog=config.backlog, sock=sock)
      File "/usr/lib/python3.10/asyncio/streams.py", line 84, in start_server
        return await loop.create_server(factory, host, port, **kwds)
      File "/usr/lib/python3.10/asyncio/base_events.py", line 1526, in create_server
        server._start_serving()
      File "/usr/lib/python3.10/asyncio/base_events.py", line 318, in _start_serving
        sock.listen(self._backlog)
    OSError: [Errno 98] Address already in use
    

    So, well, I understand why this is happening, but the experience is a bit overwhelming... I feel that issues like this are why I keep seeing binds to 0.0.0.0 from junior developers.
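
    For what it's worth, a single IPv6 wildcard bind is often the simplest workaround on a dual-stack Linux host; a sketch based on the snippet above (the net.ipv6.bindv6only note is an assumption about why the middle variant accepts both families):

    import asyncio

    import endpoints  # the same application module as above
    import hypercorn.asyncio
    import hypercorn.config

    config = hypercorn.config.Config()
    # With net.ipv6.bindv6only=0 (the Linux default) this single bind also
    # accepts IPv4-mapped connections, avoiding the double-bind conflict.
    config.bind = ["[::]:8088"]
    asyncio.run(hypercorn.asyncio.serve(endpoints.app, config))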

    opened by dimaqq 0
Owner
Phil Jones