- Documentation: https://www.starlette.io/
- Community: https://discuss.encode.io/c/starlette
# Starlette
Starlette is a lightweight ASGI framework/toolkit, which is ideal for building high performance asyncio services.
It is production-ready, and gives you the following:
- Seriously impressive performance.
- WebSocket support.
- GraphQL support.
- In-process background tasks.
- Startup and shutdown events.
- Test client built on `requests`.
- CORS, GZip, Static Files, Streaming responses.
- Session and Cookie support.
- 100% test coverage.
- 100% type annotated codebase.
- Zero hard dependencies.
## Requirements
Python 3.6+
## Installation
```shell
$ pip3 install starlette
```
You'll also want to install an ASGI server, such as uvicorn, daphne, or hypercorn.
```shell
$ pip3 install uvicorn
```
## Example
`example.py`:
```python
from starlette.applications import Starlette
from starlette.responses import JSONResponse
from starlette.routing import Route


async def homepage(request):
    return JSONResponse({'hello': 'world'})


routes = [
    Route("/", endpoint=homepage),
]

app = Starlette(debug=True, routes=routes)
```
Then run the application using Uvicorn:
```shell
$ uvicorn example:app
```
For a more complete example, see encode/starlette-example.
## Dependencies
Starlette does not have any hard dependencies, but the following are optional:
- `requests` - Required if you want to use the `TestClient`.
- `aiofiles` - Required if you want to use `FileResponse` or `StaticFiles`.
- `jinja2` - Required if you want to use `Jinja2Templates`.
- `python-multipart` - Required if you want to support form parsing, with `request.form()`.
- `itsdangerous` - Required for `SessionMiddleware` support.
- `pyyaml` - Required for `SchemaGenerator` support.
- `graphene` - Required for `GraphQLApp` support.
You can install all of these with `pip3 install starlette[full]`.
## Framework or Toolkit
Starlette is designed to be used either as a complete framework, or as an ASGI toolkit. You can use any of its components independently.
```python
from starlette.responses import PlainTextResponse


async def app(scope, receive, send):
    assert scope['type'] == 'http'
    response = PlainTextResponse('Hello, world!')
    await response(scope, receive, send)
```

Run the `app` application in `example.py`:
```shell
$ uvicorn example:app
INFO: Started server process [11509]
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
```
Run uvicorn with `--reload` to enable auto-reloading on code changes.
## Modularity
Starlette is designed with modularity in mind, promoting reusable components that can be shared between any ASGI framework. This should enable an ecosystem of shared middleware and mountable applications.
The clean API separation also means it's easier to understand each component in isolation.
## Performance
Independent TechEmpower benchmarks show Starlette applications running under Uvicorn as one of the fastest Python frameworks available. (*)
For high throughput loads you should:
- Run using Gunicorn with the `uvicorn` worker class.
- Use one or two workers per CPU core. (You might need to experiment with this.)
- Disable access logging.
For example:

```shell
gunicorn -w 4 -k uvicorn.workers.UvicornWorker --log-level warning example:app
```
Several of the ASGI servers also have pure Python implementations available, so you can also run under PyPy if your application code has CPU-constrained parts.
Either programmatically:

```python
uvicorn.run(..., http='h11', loop='asyncio')
```
Or using Gunicorn:

```shell
gunicorn -k uvicorn.workers.UvicornH11Worker ...
```
---
Starlette is BSD licensed code. Designed & built in Brighton, England.