FastWSGI - An ultra fast WSGI server for Python 3

Overview


🚧 FastWSGI is still under development.

FastWSGI is an ultra fast WSGI server for Python 3.

It's written in C and uses libuv and llhttp under the hood for blazing-fast performance.

Supported Platforms

Platform   Linux   macOS   Windows
Support    ✅      ✅      ✅

Performance

FastWSGI is one of the fastest general-purpose WSGI servers available!

For a comparison against other popular WSGI servers, see PERFORMANCE.md

Installation

Install using the pip package manager.

pip install fastwsgi

Quick start

Create a new file example.py with the following:

import fastwsgi

def app(environ, start_response):
    headers = [('Content-Type', 'text/plain')]
    start_response('200 OK', headers)
    return [b'Hello, World!']

if __name__ == '__main__':
    fastwsgi.run(wsgi_app=app, host='0.0.0.0', port=5000)

Run the server using:

python3 example.py

Or, by using the fastwsgi command:

fastwsgi example:app
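
FastWSGI serves any callable that follows the WSGI spec (PEP 3333), so the application can read the request from the environ mapping as usual. A minimal routing sketch; the /ping route and response bodies here are illustrative, not part of FastWSGI:

import fastwsgi

def app(environ, start_response):
    # environ carries the parsed request, per PEP 3333
    if environ['REQUEST_METHOD'] == 'GET' and environ['PATH_INFO'] == '/ping':
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'pong']
    start_response('404 Not Found', [('Content-Type', 'text/plain')])
    return [b'not found']

if __name__ == '__main__':
    fastwsgi.run(wsgi_app=app, host='0.0.0.0', port=5000)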

Example usage with Flask

See example.py for more details.

import fastwsgi
from flask import Flask

app = Flask(__name__)


@app.get('/')
def hello_world():
    return 'Hello, World!', 200


if __name__ == '__main__':
    fastwsgi.run(wsgi_app=app, host='127.0.0.1', port=5000)

Testing

To run the test suite using pytest, run the following command:

python3 -m pytest
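
For a quick end-to-end check of the server itself, one option is to start it on a spare port in a daemon thread and request a known response. A rough sketch, assuming fastwsgi.run() blocks and tolerates being started from a non-main thread (the port number and the fixed sleep are arbitrary choices):

# test_smoke.py - illustrative only, not part of the project's test suite
import threading
import time
import urllib.request

import fastwsgi

def pong_app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'pong']

def test_server_responds():
    # assumption: run() blocks forever, so park it in a daemon thread
    threading.Thread(
        target=fastwsgi.run,
        kwargs=dict(wsgi_app=pong_app, host='127.0.0.1', port=5870),
        daemon=True,
    ).start()
    time.sleep(0.5)  # crude wait for the listener; a retry loop would be sturdier
    with urllib.request.urlopen('http://127.0.0.1:5870/') as resp:
        assert resp.read() == b'pong'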

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

Please make sure to update tests where appropriate.

TODO

  • Comprehensive error handling
  • Complete HTTP/1.1 compliance
  • Unit tests running in CI workflow

Comments
  • In some cases, FastWSGI+Flask is faster than NGINX (latency test)

    Client: i7-980X @ 2.8GHz, Debian 12, Python 3.10, NIC Intel x550-t2 10Gbps
    Server: i7-980X @ 2.8GHz, Windows 7, Python 3.8, NIC Intel x550-t2 10Gbps

    Payload for testing: https://github.com/MiloszKrajewski/SilesiaCorpus/blob/master/xml.zip (651 KiB)

    Server test app: https://gist.github.com/remittor/1f2bc834852009631d437cd96822afa4

    FastWSGI + Flask

    python.exe server.py -h 172.16.220.205 -g fw -f xml.zip -b

    $ wrk -t1 -c1 -d30 http://172.16.220.205:5000/ --latency
    Running 30s test @ http://172.16.220.205:5000/
      1 threads and 1 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency     1.68ms  128.35us   9.26ms   97.39%
        Req/Sec   597.20      7.27   616.00     81.00%
      Latency Distribution
         50%    1.67ms
         75%    1.71ms
         90%    1.75ms
         99%    1.84ms
      17837 requests in 30.01s, 11.08GB read
    Requests/sec:    594.42
    Transfer/sec:    378.23MB 
    

    nginx.exe

    $ wrk -t1 -c1 -d30 http://172.16.220.205:80/xml.zip --latency
    Running 30s test @ http://172.16.220.205:80/xml.zip
      1 threads and 1 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency     2.00ms  176.83us   4.04ms   69.41%
        Req/Sec   500.61     13.14   555.00     71.76%
      Latency Distribution
         50%    2.01ms
         75%    2.12ms
         90%    2.22ms
         99%    2.39ms
      14999 requests in 30.10s, 9.32GB read
    Requests/sec:    498.31
    Transfer/sec:    317.14MB 
    

    Werkzeug + Flask

    python.exe server.py -h 172.16.220.205 -g wz -f xml.zip -b

    $ wrk -t1 -c1 -d30 http://172.16.220.205:5000/ --latency
    Running 30s test @ http://172.16.220.205:5000/
      1 threads and 1 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency     3.46ms  523.74us   9.90ms   62.01%
        Req/Sec   274.58     26.55   343.00     76.00%
      Latency Distribution
         50%    3.62ms
         75%    3.87ms
         90%    4.03ms
         99%    4.30ms
      8204 requests in 30.00s, 5.10GB read
    Requests/sec:    273.46
    Transfer/sec:    174.04MB 
    

    Waitress + Flask

    python.exe server.py -h 172.16.220.205 -g wr -f xml.zip -b

    $ wrk -t1 -c1 -d30 http://172.16.220.205:5000/ --latency
    Running 30s test @ http://172.16.220.205:5000/
      1 threads and 1 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency    10.01ms  605.99us  11.31ms   67.94%
        Req/Sec   100.24      4.73   111.00     78.74%
      Latency Distribution
         50%   10.11ms
         75%   10.48ms
         90%   10.71ms
         99%   11.04ms
      3004 requests in 30.10s, 1.87GB read
    Requests/sec:     99.80
    Transfer/sec:     63.51MB 
    
    opened by remittor
  • Windows exception 0xC0000005

    import fastwsgi
    import app
    import logging
    host, port, debug, ssl_context = app.config_prepare()
    
    if __name__ == '__main__':
        host, port, debug, ssl_context = app.config_prepare()
        fastwsgi.run(wsgi_app=app.application, host=host, port=port)
    

    Error:

    ==== FastWSGI ==== 
    Host: 0.0.0.0
    Port: 5000
    ==================
    
    Server listening at http://0.0.0.0:5000
    
    Process finished with exit code -1073741819 (0xC0000005)
    

    Python version: 3.9.10
    OS version: Windows 10 (build 10.0.19042)

    Installed with:

    pip install fastwsgi==0.0.5
    
    opened by drakylar
  • Gunicorn Worker similar to Meinheld

    Is there a way to create a Gunicorn worker similar to what meinheld has done? https://github.com/mopemope/meinheld/blob/master/meinheld/gmeinheld.py#L11

    It can be used as: gunicorn --workers=2 --worker-class="egg:meinheld#gunicorn_worker" gunicorn_test:app

    • Falcon + Meinheld benchmarks
    Running 1m test @ http://localhost:5000
      8 threads and 100 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency     1.66ms  103.47us   8.59ms   79.39%
        Req/Sec     7.26k   600.67    42.37k    95.17%
      Latency Distribution
         50%    1.68ms
         75%    1.73ms
         90%    1.75ms
         99%    1.84ms
      3468906 requests in 1.00m, 588.86MB read
    Requests/sec:  57719.43
    Transfer/sec:      9.80MB
    
    • Falcon + fastwsgi benchmarks
    Running 1m test @ http://localhost:5000
      8 threads and 100 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency     1.41ms   95.10us   3.54ms   67.27%
        Req/Sec     8.57k   532.54    15.80k    66.10%
      Latency Distribution
         50%    1.46ms
         75%    1.48ms
         90%    1.49ms
         99%    1.58ms
      4093388 requests in 1.00m, 456.74MB read
    Requests/sec:  68187.13
    Transfer/sec:      7.61MB
    

    Having a gunicorn worker for fastwsgi might help people test it out in their own production workload easily.

    Note: seeing an ~18% improvement over Meinheld, albeit on hello-world benchmarks with a cythonized Falcon app.
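
    A worker in the Meinheld style subclasses Gunicorn's base Worker and hands the pre-bound listening socket to the server loop once the worker starts. The sketch below is hypothetical: the fastwsgi API shown in this README exposes only run(wsgi_app, host, port), so run_from_fd() is an assumed extension the project would need to add, not an existing function.

    # gfastwsgi.py - hypothetical worker sketch, modeled on meinheld's gmeinheld.py
    import fastwsgi
    from gunicorn.workers.base import Worker

    class FastWSGIWorker(Worker):
        def run(self):
            # Gunicorn's master binds the listening sockets; workers inherit them.
            fd = self.sockets[0].fileno()
            # ASSUMPTION: fastwsgi has no run_from_fd() today; this is the hook
            # this feature request is effectively asking for.
            fastwsgi.run_from_fd(self.wsgi, fd)

    If that hook existed, the worker could be selected the usual way: gunicorn --workers=2 --worker-class=gfastwsgi.FastWSGIWorker gunicorn_test:app.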

    enhancement
    opened by viig99
  • A bunch of bugs

    Hi @jamesroberts!

    I've run fastwsgi against some of the bjoern test cases that have accumulated over the years. From a very quick check, here are my results:

    • tests/empty.py: segfault
    • tests/env.py: segfault
    • tests/headers.py: memory leak
    • tests/huge.py: hangs forever
    • tests/hello.py: hangs forever
    • tests/keep-alive-behaviour.py: segfault
    • tests/not-callable.py: segfault
    • tests/test_exc_info_reference.py: memory leak

    I've used this file to substitute the bjoern module in the tests:

    # bjoern.py
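    # shim: alias fastwsgi's run() as bjoern.run; bjoern.run(app, host, port)
    # and fastwsgi.run(wsgi_app, host, port) appear argument-compatible, so
    # the bjoern test scripts can drive fastwsgi unchanged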
    from fastwsgi import run
    
    bug
    opened by jonashaag