A friendly library for parsing HTTP request arguments, with built-in support for popular web frameworks, including Flask, Django, Bottle, Tornado, Pyramid, webapp2, Falcon, and aiohttp.

Overview

webargs

PyPI version Build status Documentation marshmallow 3 compatible code style: black

Homepage: https://webargs.readthedocs.io/

webargs is a Python library for parsing and validating HTTP request objects, with built-in support for popular web frameworks, including Flask, Django, Bottle, Tornado, Pyramid, Falcon, and aiohttp.

from flask import Flask
from webargs import fields
from webargs.flaskparser import use_args

app = Flask(__name__)


@app.route("/")
@use_args({"name": fields.Str(required=True)}, location="query")
def index(args):
    return "Hello " + args["name"]


if __name__ == "__main__":
    app.run()

# curl http://localhost:5000/\?name\='World'
# Hello World

Install

pip install -U webargs

webargs supports Python >= 3.6.

Documentation

Full documentation is available at https://webargs.readthedocs.io/.

Support webargs

webargs is maintained by a group of volunteers. If you'd like to support the future of the project, please consider contributing to our Open Collective:

Donate to our collective

Professional Support

Professionally-supported webargs is available through the Tidelift Subscription.

Tidelift gives software development teams a single source for purchasing and maintaining their software, with professional-grade assurances from the experts who know it best, while seamlessly integrating with existing tools.

Get supported webargs with Tidelift

Security Contact Information

To report a security vulnerability, please use the Tidelift security contact. Tidelift will coordinate the fix and disclosure.

Project Links

License

MIT licensed. See the LICENSE file for more details.

Comments
  • Only allow users to specify a single location per parse call & pass full location data to schema.load

    This is still incomplete, but core and flaskparser tests are now passing. No work is (yet) included on other concrete parsers, but I fixed the base asyncparser to make mypy linting pass.

    Resolves #419, #267, #164, #268. Obsoletes, and therefore closes, #410.

    I've updated the changelog with a basic "statement of intent". As I work on this, I will try to maintain it as an accurate record of how this change should look to the outside world. Obviously, narrative docs also need to be updated with relevant info.

    Highlights / greatest hits:

    • Marks this as an unreleased v6.0.0 in the changelog
    • Replace DEFAULT_LOCATIONS with DEFAULT_LOCATION="json"
    • Replace parse_{location} with location_load_{location}
    • Replace locations=<iterable> with location=<str>
    • Remove locations=... from Fields
    • Remove Parser.parse_arg
    • Remove webargs.core.get_value
    • Add webargs.multidictproxy.MultiDictProxy which has get_value-like behavior and proxies various dict-like objects
    • Reverse the decision made in #297 with a comment in tests explaining this change and a bit of similar explanation in the commit message
    • Numerous fixes to tests to be more explicit about which location is being used (since you can't trivially have a single location /echo which loads data from a bunch of locations)
    • As a side-effect of the above changes, certain inputs which were previously ignored are treated as errors (e.g. 1 for a JSON body will now parse and be passed literally to the schema, which will fail because 1 is not a valid input for schema.load). Changes to the tests serve as a somewhat-readable list of behavioral changes.
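    The MultiDictProxy idea mentioned above can be sketched in plain Python. This is a hypothetical illustration of the get_value-like behavior (normalizing scalars and multi-values), not webargs' actual implementation:

    ```python
    # Hypothetical sketch of a MultiDictProxy-like wrapper; names and
    # behavior are illustrative, not webargs' real MultiDictProxy.
    class MultiDictProxySketch:
        """Proxy a dict whose values may be multi-valued, returning a
        list only when the caller asks for one."""

        def __init__(self, data):
            self._data = data

        def get(self, key, *, multiple=False):
            value = self._data.get(key)
            if value is None:
                return None
            if multiple:
                # Normalize a scalar to a one-element list.
                return value if isinstance(value, list) else [value]
            # Collapse multi-values to the first item for scalar fields.
            return value[0] if isinstance(value, list) else value

    proxy = MultiDictProxySketch({"tag": ["a", "b"], "name": "alice"})
    print(proxy.get("name"))                # alice
    print(proxy.get("tag", multiple=True))  # ['a', 'b']
    ```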

    To try to keep this from sprawling even more, my intent is to address some items mentioned in #419 either in follow-up PRs or as part of this only if we're happy with this direction (and agree on those items). In particular: the json_or_form location I proposed and nested use_args usage and error collection.

    @sloria, @lafrech, my biggest question is: do you like where this is going? Should I just hammer out all of the other parsers?

    opened by sirosen 23
  • Leave logic of deciding on extra args to schema

    Now that marshmallow 3.x is in beta and comes with configurable behavior on extra args, how about just including all fields in Parser._parse_request() and passing them on to Schema.load()? It would be up to the passed Schema to determine what to do with them.

    The effect would be that:

    • With RAISE, extra fields would be denied.
    • With INCLUDE, extra fields would be allowed and passed on.
    • With EXCLUDE, extra fields would be ignored.

    This would allow something like:

    @use_args(HeaderSchema(unknown=EXCLUDE), locations=('headers', ))
    @use_args({'action': fields.Str()}, locations=('json',))  # default is RAISE
    

    Actually, I don't think this even needs marshmallow 3.x. Why not just always pass all fields from _parse_request() to Schema.load()? On 2.x you would need custom @validates_schema implementation in addition, but on 3.x you would not.

    Maybe I'm missing something...
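    The three modes above can be emulated in plain Python. This is a sketch of the semantics only; the names mirror marshmallow 3's `unknown` constants but none of this is marshmallow code:

    ```python
    # Pure-Python emulation of marshmallow 3's `unknown` modes.
    RAISE, INCLUDE, EXCLUDE = "raise", "include", "exclude"

    def load(data, known_fields, unknown=RAISE):
        extra = set(data) - set(known_fields)
        if extra and unknown == RAISE:
            # RAISE: extra fields are denied.
            raise ValueError(f"Unknown fields: {sorted(extra)}")
        if unknown == EXCLUDE:
            # EXCLUDE: extra fields are silently dropped.
            return {k: v for k, v in data.items() if k in known_fields}
        # INCLUDE (or RAISE with no extras): pass everything through.
        return dict(data)

    print(load({"action": "go", "x": 1}, {"action"}, unknown=EXCLUDE))
    # {'action': 'go'}
    ```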

    help wanted backwards incompatible 
    opened by tuukkamustonen 23
  • Logo proposal

    Greetings, @sloria

    My apologies in case this is not the right channel to address this. I'm a designer in development and an open-source enthusiast who, while exploring GitHub, found your project and decided to propose a logo design for it. It's (of course) totally free, and we would work together to create the design that fits best. If you agree, you could share some ideas you may have (colours, shapes, etc.) so I have something to start with.

    Kind regards and keep up the great work!

    opened by michaelizergit 20
  • Webargs 6.0.0 has broken Flaskparser @use_kwargs: Never parses query string or formdata

    Hello,

    Your recent release of 6.0.0 of webargs has unexpectedly broken our Flask apps. It seems now the @use_kwargs decorator from the webargs.flaskparser completely fails to parse GET query string parameters or POST form-data parameters. Only JSON request bodies ever get parsed anymore.

    Here is an example Flask app that shows the problem:

    from flask import Flask, jsonify
    from webargs import fields
    from webargs.flaskparser import use_kwargs
    
    app = Flask(__name__)
    
    @app.route("/test1", methods=["GET", "POST", "PUT"])
    @use_kwargs({
        "page": fields.Int(missing=1),
        "per_page": fields.Int(missing=20),
        "full": fields.Bool(missing=False),
    })
    def test1(**kwargs):
        return jsonify(kwargs)
    
    @app.route("/test2", methods=["GET", "POST", "PUT"])
    @use_kwargs({
        "name": fields.Str(required=True),
    })
    def test2(**kwargs):
        return jsonify(kwargs)
    
    app.run()
    

    The endpoint "/test1" has three optional parameters each with default values when missing. If I make a GET request (with query parameters) or a POST request with application/x-www-form-urlencoded format, webargs completely fails to pick up my parameters and only sees the defaults.

    In the endpoint "/test2" I have a required parameter, which again fails to parse on GET or form-data post but works on JSON post only. Here are some curl examples:

    # GET request doesn't parse any param
    % curl 'http://localhost:5000/test1?page=5&per_page=2&full=1'
    {"full":false,"page":1,"per_page":20}
    
    # Normal POST doesn't parse any param
    % curl -X POST -d page=5 -d per_page=2 -d full=1 'http://localhost:5000/test1'
    {"full":false,"page":1,"per_page":20}
    
    # JSON POST actually does parse params
    % curl -X POST -H 'Content-Type: application/json' -d '{"page": 5, "per_page": 2, "full": true}' 'http://localhost:5000/test1'
    {"full":true,"page":5,"per_page":2}
    
    # GET is "missing required parameter"
    % curl 'http://localhost:5000/test2?name=alice'
    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
    <title>422 Unprocessable Entity</title>
    <h1>Unprocessable Entity</h1>
    <p>The request was well-formed but was unable to be followed due to semantic errors.</p>
    
    # POST is "missing required parameter"
    % curl -X POST -d name=alice 'http://localhost:5000/test2'
    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
    <title>422 Unprocessable Entity</title>
    <h1>Unprocessable Entity</h1>
    <p>The request was well-formed but was unable to be followed due to semantic errors.</p>
    
    # POST JSON actually works
    % curl -X POST -H 'Content-Type: application/json' -d '{"name": "alice"}' 'http://localhost:5000/test2'
    {"name":"alice"}
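    What changed in webargs 6 is that a parse call consults a single location, which now defaults to "json"; query and form data are only read when you opt in, e.g. `@use_kwargs({...}, location="query")`. A minimal pure-Python sketch of that behavior (the `parse` helper here is hypothetical, not webargs code):

    ```python
    # Hypothetical sketch of single-location parsing: only the one named
    # location is consulted, and it defaults to "json".
    def parse(request, fields, location="json"):
        data = request.get(location, {})
        return {name: data.get(name, default) for name, default in fields.items()}

    request = {"query": {"page": "5"}, "json": {}}
    fields = {"page": 1, "per_page": 20}

    print(parse(request, fields))                    # json is empty: only defaults
    print(parse(request, fields, location="query"))  # query is now consulted
    ```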
    
    opened by kirsle 17
  • Race conditions for parallel requests due to cache

    I just noticed that something in webargs or marshmallow isn't thread-safe. Take this minimal example:

    import time
    
    from flask import Flask, jsonify, request
    from marshmallow.fields import Field
    from webargs.flaskparser import use_kwargs
    
    
    app = Flask(__name__)
    
    
    class MyField(Field):
        def _serialize(self, value, attr, obj):
            return value
    
        def _deserialize(self, value, attr, data):
            print('deserialize', request.json, value)
            time.sleep(0.25)
            return value
    
    
    @app.route('/test', methods=('POST',))
    @use_kwargs({
        'value': MyField(),
    })
    def test(value):
        time.sleep(1)
        return jsonify(webargs_result=value, original_data=request.json['value'])
    

    Run it with threading enabled:

    $ FLASK_APP=webargsrace.py flask run -p 8080 --with-threads
    

    Now send two requests in parallel, with different values:

    $ http post http://127.0.0.1:8080/test 'value=foo' & ; http post http://127.0.0.1:8080/test 'value=bar' &
    

    The output from these two requests is:

    {
        "original_data": "bar",
        "webargs_result": "bar"
    }
    
    {
        "original_data": "foo",
        "webargs_result": "bar"
    }
    

    Clearly not what one would have expected! :bomb:

    The output of the print statement showing the request data and what the field receives confirms the issue:

    deserialize {u'value': u'bar'} bar
    deserialize {u'value': u'foo'} bar
    

    Tested with the latest marshmallow/webargs from PyPI, and also the marshmallow3 rc (marshmallow==3.0.0rc4, webargs==5.1.2).
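    The underlying bug class can be shown deterministically, without threads: if parsed data is cached on a shared parser object rather than per request, a later request observes the earlier request's data. This is an illustrative sketch, not webargs' actual caching code:

    ```python
    # Sketch of the bug class: a cache stored on a shared parser object
    # means a second request can be served the first request's data.
    class SharedCacheParser:
        def __init__(self):
            self._cache = {}  # shared across ALL requests: the bug

        def parse(self, request):
            if "json" not in self._cache:
                self._cache["json"] = request["body"]
            return self._cache["json"]

    parser = SharedCacheParser()
    print(parser.parse({"body": {"value": "foo"}}))  # {'value': 'foo'}
    print(parser.parse({"body": {"value": "bar"}}))  # stale: still 'foo'
    ```

    Keying the cache on the request object (or storing it on the request itself) avoids the cross-request leak.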

    bug help wanted 
    opened by ThiefMaster 17
  • Replace Azure Pipelines with GitHub Actions

    • Setup GH Actions build for testing, mypy, and twine check
    • Add check-jsonschema to pre-commit config
    • Remove Azure Pipelines

    I originally tried this with the GitHub reusable workflows feature, but a reusable workflow can't be run under a matrix config.

    Here's a screenshot of the actions pane in a build in my fork.

    The matrix is slightly bigger than what we had in Azure Pipelines, but we can adjust it more going forward.

    image

    Two more things I'd like us to do on the CI checklist:

    • Turn on ReadTheDocs PR builds (Settings > Advanced > PR builds).
    • We need a PyPI token from someone with push access, set as a repo secret, to set up a tag->publish workflow. Without this, we can't create a GH Actions publishing job.

    I hesitated to introduce check-jsonschema, as it is my own work, but I find it useful. It catches a wide class of invalid workflow files when tweaking builds. If there's any concern about pulling it in, I'll drop it from the config, as it's not a big deal.

    opened by sirosen 16
  • [Work in progress] Marshmallow 3 port

    I've initiated a Marshmallow 3 port. Feedback welcome.

    I'm not sure how to let tox/Travis test on both Marshmallow versions.

    Regarding test_error_handler_is_called_regardless_of_schema_strict_setting, see https://github.com/sloria/webargs/issues/187.

    opened by lafrech 14
  • Aiohttp: Parser.parse doesn't work after request body was read.

    When the content of a request is read before parser.parse is called (with locations=('json',)), the parser reports that the JSON fields are missing from the request.

    Example code:

    async def handler(request):
        data = await request.json()
        args = await parser.parse(handler_args, req=request, locations=('json',))
        return web.Response()
    

    The issue is the following line of code https://github.com/sloria/webargs/blob/dev/webargs/aiohttpparser.py#L92, specifically req.has_body. This attribute has an unfortunate name, and it is not constant over the lifetime of a request: it checks whether there are more bytes to read. Once the body has been read by await request.json(), there are no more bytes to read, and req.has_body is False.
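    The read-once behavior, and the obvious fix of caching the first read, can be sketched with a fake request object (all names here are hypothetical, not aiohttp's API):

    ```python
    import asyncio

    # Sketch of the read-once problem: once the body stream is drained,
    # a has_body-style check reports False, but a cached copy survives.
    class FakeRequest:
        def __init__(self, body):
            self._body = body
            self._cached = None

        @property
        def has_body(self):
            return self._body is not None

        async def json(self):
            if self._cached is None:
                self._cached, self._body = self._body, None  # drain stream
            return self._cached

    async def main():
        req = FakeRequest({"name": "alice"})
        first = await req.json()
        assert not req.has_body   # stream is drained...
        second = await req.json() # ...but the cached body is still usable
        return first, second

    print(asyncio.run(main()))
    ```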

    opened by ku3o 14
  • Improve documentation on fields.DelimitedList vs fields.List

    The docs for fields.DelimitedList say it "can load from either a list or a delimited string", but I cannot get it to load from a list. It's possible I'm doing something wrong, but when I swap out fields.DelimitedList for fields.List, everything works as expected. I think either the docs should be changed, or the implementation of DelimitedList should be fixed so that it works as expected.
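    The documented behavior could be sketched as follows; this is a hypothetical helper illustrating what "load from either a list or a delimited string" would mean, not DelimitedList's actual code:

    ```python
    # Hypothetical sketch: accept a delimited string OR an actual list.
    def load_delimited(value, delimiter=","):
        if isinstance(value, str):
            return value.split(delimiter)
        if isinstance(value, (list, tuple)):
            return list(value)
        raise TypeError("expected a string or a list")

    print(load_delimited("a,b,c"))     # ['a', 'b', 'c']
    print(load_delimited(["a", "b"]))  # ['a', 'b']
    ```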

    help wanted docs 
    opened by sm-moore 13
  • Using webargs to parse more complex query arguments (string or comparison operators,...)

    I would like my API to offer basic query features, like not only get item by ID but also using operators on item attributes:

    GET /resource/?name=chuck&surname__contains=orris&age__lt=69
    

    Proof of concept

    Here is what I've done so far. I did not modify webargs code. I'm only subclassing in my own application.

    I've been searching around and didn't find a universally accepted norm specifying such a query language. In my implementation, I'm using the double underscore syntax and a subset of operators from MongoEngine.

    Basically, I have two families of operators, some operating on numbers, others on string.

    NUMBER_OPERATORS = ('ne', 'gt', 'gte', 'lt', 'lte')
    
    STRING_OPERATORS = (
        'contains', 'icontains', 'startswith', 'istartswith',
        'endswith', 'iendswith', 'iexact'
    )
    
    QUERY_OPERATORS = {
        'number': NUMBER_OPERATORS,
        'string': STRING_OPERATORS,
    }
    

    Those lists could probably be extended. The operators' meanings should be easy to grasp. Examples:

    • age__lt=69 means I expect the API to return records with attribute age lower than 69
    • surname__contains=orris means attribute surname contains the string "orris"
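    The double-underscore syntax above boils down to splitting a query key into a field name and an operator. A small sketch (the `split_operator` helper is hypothetical):

    ```python
    # Sketch: split a double-underscore query key into (field, operator).
    NUMBER_OPERATORS = ("ne", "gt", "gte", "lt", "lte")
    STRING_OPERATORS = (
        "contains", "icontains", "startswith", "istartswith",
        "endswith", "iendswith", "iexact",
    )

    def split_operator(key):
        field, sep, op = key.rpartition("__")
        if sep and op in NUMBER_OPERATORS + STRING_OPERATORS:
            return field, op
        return key, None  # plain equality, no operator suffix

    print(split_operator("age__lt"))           # ('age', 'lt')
    print(split_operator("surname__contains")) # ('surname', 'contains')
    print(split_operator("name"))              # ('name', None)
    ```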

    To let webargs parse such parameters, I need to modify the Schema so that for chosen fields, the Marshmallow field is duplicated (deepcopied) into needed variants. For instance, the field age is duplicated into age__lt, age__gt,...

    This is an opt-in feature. For each field I want to expose this way, I need to specify in Meta which operators category it should use (currently only two categories: number and string). Auto-detection is complicated as I don't know how I would handle custom fields, and there may be other categories some day.

    In the Schema, I add:

        class Meta:
            fields_filters = {
                'name': ('string',),
                'surname': ('string',),
                'age': ('number',),
            }
    

    For each field, I'm passing a list of categories as one could imagine several categories applying to a field, but currently, I have no example of field that would use both number and string operators.

    And the "magic" takes place here:

    class SchemaOpts(ma.SchemaOpts):
        def __init__(self, meta):
            super(SchemaOpts, self).__init__(meta)
            # Add a new meta field to pass the list of filters
            self.fields_filters = getattr(meta, 'fields_filters', None)
    
    
    class SchemaMeta(ma.schema.SchemaMeta):
        """Metaclass for `ModelSchema`."""
    
        @classmethod
        def get_declared_fields(mcs, klass, *args, **kwargs):
    
            # Create empty dict using provided dict_class
            declared_fields = kwargs.get('dict_class', dict)()
    
            # Add base fields
            base_fields = super(SchemaMeta, mcs).get_declared_fields(
                klass, *args, **kwargs
            )
            declared_fields.update(base_fields)
    
            # Get allowed filters from Meta and create filters
            opts = klass.opts
            fields_filters = getattr(opts, 'fields_filters', None)
    
            if fields_filters:
                filter_fields = {}
                for field_name, field_filters in fields_filters.items():
                    field = base_fields.get(field_name, None)
                    if field:
                        for filter_category in field_filters:
                            for operator in QUERY_OPERATORS.get(
                                    filter_category, ()):
                                filter_fields[
                                    '{}__{}'.format(field_name, operator)
                                ] = deepcopy(field)
                declared_fields.update(filter_fields)
    
            return declared_fields
    
    
    class QueryArgsSchema(ma.compat.with_metaclass(SchemaMeta, ma.Schema)):
        OPTIONS_CLASS = SchemaOpts
    

    And finally, I use the Schema to parse the query arguments:

        @use_args(ObjectSchema)
        def get(self, args):
            ...
    

    Questions

    This raises a few points.

    • Is there some sort of convention I missed when searching for a query language?
    • Is this out-of-scope for webargs or could it be a useful enhancement?
    • Is there no need for that? I've been investigating both the flask-restful and marshmallow/webargs/... ecosystems, along with @frol's invaluable flask-restplus-server-example, and saw nothing close to this, so I'm thinking maybe people just don't do that. Or maybe they only expose a few filters, and they do it in specific routes.
    • Should this be in @touilleMan's marshmallow-mongoengine? I do use this library, but although the query language is inspired^Wshamelessly copied from MongoEngine (which allows me to pass the query arguments straight into the QuerySet filters...), the whole thing has no dependency on MongoEngine and this should be a generic feature.
    • My QueryArgsSchema also has sort and pagination fields, but I didn't expose them here as this needs nothing fancy on Marshmallow's side. Maybe a real "query parameters" feature would integrate these as well.

    Feedback greatly appreciated. It seems to work right now, but on the long run, I might discover it was poorly designed from the start.

    Thanks.

    opened by lafrech 13
  • Inconsistent behavior on Python 2.7 vs. Python 3.7 on Flask

    Hey guys,

    I am having an issue with webargs==5.x: when I upgrade from 4.x to 5.x with Flask==1.0.2, the same code base works and behaves normally on Python 3.7 but behaves differently on Python 2.7. The issue is on this pull request mostafa/grest#84 and is evident in this build: https://travis-ci.org/mostafa/grest/builds/479199700. I've tried to trace my own code to see if there is a path I am not covering, but that seems not to be the case, because it passes all tests on Python 3.x.

    Thanks, Mostafa.

    help wanted 
    opened by mostafa 12
  • Update minimum supported versions of frameworks in next major release

    In the next major webargs release (9.0), we should make sure to update the minimum versions of the various frameworks which we test against.

    IIRC, the current bounds came from looking at which versions we support in practice. But we don't want to be stuck testing against old flask, django, pyramid, etc versions indefinitely. We could debate exactly what kinds of updates are safe in a minor release -- especially since this is for our testing configuration and not part of our install_requires -- but I think it's simpler to just do this in 9.0, whenever that may be.

    framework-support 
    opened by sirosen 1
  • [RFC] Revisiting our CI setup: GH Actions, RTD

    In #687 , we started to discuss our more general setup in terms of Azure Pipelines vs GitHub Actions (GH Actions). A couple of us support the switch to GH Actions, and there's no strong objection to it. I'd like to try that out.

    The setup I want to try would be something like a reusable workflow to run tox, in the repo, and then a job named build which runs the tox workflow over a matrix of Python version, OS, and tox env names. Now that reusable workflows support using a local file, we can develop something in webargs until we like the result. If we then want to move it to a separate project like marshmallow-code/reusable-github-workflows, we can discuss and/or do that.

    There ~are two notable jobs~ is one job in Azure Pipelines which I don't think belongs in the move to GH Actions:

    • tox -e docs
    • ~pypi publish~ (done in #690)

    For the docs, ReadTheDocs (RTD) offers a relatively new feature to build on PRs; I recall we discussed it when it was still in beta. I like this primarily because it's driven by the .readthedocs.yml in the repo, so we don't run into trouble where our tox -e docs run in CI differs from what happens in RTD. It also means that if we have a PR to update .readthedocs.yml config, it will be tested in the PR build. All we have to do is enable this feature in the readthedocs.org admin pane and make sure that the RTD webhook is set to send Pull Request events. A trial PR could demonstrate that this works -- e.g. by updating the python version used for RTD.

    ~For pypi publishing, I think we can leave this in Azure for the initial move to GitHub Actions. It should be possible to setup a tag->publish reusable workflow, but I want to get more "basic" CI worked out before taking a crack at this.~ (done in #690)


    With #690 now merged, we have a short "remaining" TODO list:

    • [ ] Turn on ReadTheDocs PR builds (Settings > Advanced > PR builds)
    • [x] Remove testpypi releasing once we've tested that it works
    • [ ] evaluate reusable workflows so that we can reduce duplication in config across repos
    opened by sirosen 8
  • Add an 'Examples' page to the docs with fully functioning example applications

    There are a few cases of interesting usages that come up in issues, but which don't seem appropriate to put into our main docs (b/c we don't want to bloat our docs). If we had an Examples page in the docs, we could show off all of our cool ideas for how you can solve problems with webargs, but without disrupting the flow of the main documentation.

    Specific cases which would be good candidates for example apps:

    I did a quick scan of closed issues until I hit the 5.x/6.0 transition point, then called it a day. Maybe there are other interesting examples in older issues? Or maybe we have things in other parts of the docs like the upgrade guide which would be nice to separate out (or even just duplicate).

    docs 
    opened by sirosen 5
  • Are key values collected from headers case insensitive?

    The documentation on passed values (https://flask-smorest.readthedocs.io/en/latest/arguments.html) for form values, JSON, headers, query, etc. does not document case sensitivity. I would expect locations like json to be case-sensitive, but locations like headers to be case-insensitive. Looking at https://github.com/marshmallow-code/flask-smorest/blob/e06325d7a201af6d07caa2e8e103fba15eb4b9b7/flask_smorest/arguments.py#L105 I'd guess everything is case-sensitive.
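    For what it's worth, HTTP header field names are case-insensitive by specification, and the standard library reflects this. A stdlib-only demonstration (this shows the expected header semantics, not flask-smorest's behavior):

    ```python
    from wsgiref.headers import Headers

    # HTTP header names are case-insensitive per the HTTP spec, unlike
    # JSON keys or query-string parameters.
    headers = Headers([("X-Api-Key", "secret")])
    print(headers["x-api-key"])  # secret
    print(headers["X-API-KEY"])  # secret
    ```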

    question 
    opened by Josh-Marshall-FactSet 15
  • Passing array in query via axios

    Passing array in query via axios

    Hi,

    I'm using axios to send a GET request to my flask-smorest endpoint. I know there's no real agreement on this spec-wise, but the request's querystring contains an array, and axios sends it in this format:

    http://127.0.0.1:5000/api/v1/items/?types[]=cd&types[]=dvd
    

    I've defined the schema used for reading the arguments as

    class FilterSchema(ma.Schema):
        ...
        types = ma.fields.List(ma.fields.String(), missing=[])
    

    Yet when I try to read the data received in my endpoint, types is empty:

    @items_blp.route('/')
    class ItemCollection(MethodView):
        @items_blp.arguments(FilterSchema(unknown=ma.EXCLUDE), location="query")
        @listings_blp.response(ItemSchema(many=True))
        def get(self, payload):
            # payload['types'] is empty...
            if payload['types']:
                qs = Item.objects.filter(item_type__in=payload['types'])
            return qs
    

    Is this a missing feature in flask-smorest or should I ask on SO whether I should use another way to pass data?
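    For context, the server sees literal keys named `types[]`, not `types`, which is why the List field stays empty. One stdlib-only workaround sketch is to strip the `[]` suffix before loading (alternatively, axios can be configured via its paramsSerializer option to send repeated `types=` keys instead):

    ```python
    from urllib.parse import parse_qs

    # The raw querystring key is literally 'types[]'; strip the PHP-style
    # '[]' suffix so it matches the schema's 'types' field.
    raw = parse_qs("types[]=cd&types[]=dvd")
    data = {key.rstrip("[]"): values for key, values in raw.items()}
    print(data)  # {'types': ['cd', 'dvd']}
    ```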

    question 
    opened by LaundroMat 6
  • RFC: Use annotations for parsing arguments

    Add a decorator that will parse request arguments from the view function's type annotations.

    I've built a working proof of concept of this idea in webargs-starlette.

    @app.route("/")
    @use_annotations(locations=("query",))
    async def index(request, name: str = "World"):
        return JSONResponse({"Hello": name})
    

    A marshmallow Schema is generated from the annotations using Schema.TYPE_MAPPING to construct fields.

    The code for use_annotations mostly isn't tied to Starlette and could be adapted for AsyncParser (and core.Parser when we drop Python 2 support).
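    The core mechanics can be sketched with the stdlib alone: read names, annotations, and defaults from the signature, then coerce raw values. This `parse_from_annotations` helper is illustrative only, not the webargs-starlette implementation:

    ```python
    import inspect

    # Sketch: derive argument names, types, and defaults from a view
    # function's annotations, then coerce raw request values to match.
    def parse_from_annotations(func, raw):
        sig = inspect.signature(func)
        parsed = {}
        for name, param in sig.parameters.items():
            if param.annotation is inspect.Parameter.empty:
                continue  # unannotated params (e.g. `request`) are skipped
            if name in raw:
                parsed[name] = param.annotation(raw[name])  # e.g. int("3") -> 3
            elif param.default is not inspect.Parameter.empty:
                parsed[name] = param.default
        return parsed

    def index(request, name: str = "World", count: int = 1):
        ...

    print(parse_from_annotations(index, {"count": "3"}))
    # {'name': 'World', 'count': 3}
    ```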

    feedback welcome 
    opened by sloria 4
Owner
marshmallow-code
Python object serialization and deserialization, lightweight and fluffy
A simple URL shortener built with Flask

A simple URL shortener built with Flask and MongoDB.

Mike Lowe 2 Feb 5, 2022
🌐 URL parsing and manipulation made easy.

furl is a small Python library that makes parsing and manipulating URLs easy. Python's standard urllib and urlparse modules provide a number of URL re

Ansgar Grunseid 2.4k Jan 4, 2023
🔗 FusiShort is a URL shortener built with Python, Redis, Docker and Kubernetes

This is a playground application created with goal of applying full cycle software development using popular technologies like Python, Redis, Docker and Kubernetes.

Lucas Fusinato Zanis 7 Nov 10, 2022
python3 flask based python-url-shortener microservice.

python-url-shortener This repository is for managing all public/private entity specific api endpoints for an organisation. In this case we have entity

Asutosh Parida 1 Oct 18, 2021
A url shortner written in Flask.

url-shortener-elitmus This is a simple flask app which takes an URL and shortens it. This shortened verion of the URL redirects to the user to the lon

null 2 Nov 23, 2021
Simple Version of ouo.io. shorten any link on the web easily

OUO.IO LINK SHORTENER This is a simple python script that made to short links. currently ouo.io doesn't have Application Programming Interface so i de

Danushka-Madushan 1 Dec 11, 2021
C++ library for urlencode.

liburlencode C library for urlencode.

Khaidi Chu 6 Oct 31, 2022
A simple, immutable URL class with a clean API for interrogation and manipulation.

purl - A simple Python URL class A simple, immutable URL class with a clean API for interrogation and manipulation. Supports Pythons 2.7, 3.3, 3.4, 3.

David Winterbottom 286 Jan 2, 2023
Astra is a tool to find URLs and secrets.

Astra finds urls, endpoints, aws buckets, api keys, tokens, etc from a given url/s. It combines the paths and endpoints with the given domain and give

Stinger 198 Dec 27, 2022
This is a no-bullshit file hosting and URL shortening service that also runs 0x0.st. Use with uWSGI.

This is a no-bullshit file hosting and URL shortening service that also runs 0x0.st. Use with uWSGI.

mia 1.6k Dec 31, 2022
Fast pattern fetcher, Takes a URLs list and outputs the URLs which contains the parameters according to the specified pattern.

Fast Pattern Fetcher (fpf) Coded with <3 by HS Devansh Raghav Fast Pattern Fetcher, Takes a URLs list and outputs the URLs which contains the paramete

whoami security 5 Feb 20, 2022
Customizable URL shortener written in Python3 for sniffing and spoofing

Customizable URL shortener written in Python3 for sniffing and spoofing

null 3 Nov 22, 2022
A simple URL shortener app using Python AWS Chalice, AWS Lambda and AWS Dynamodb.

url-shortener-chalice A simple URL shortener app using AWS Chalice. Please make sure you configure your AWS credentials using AWS CLI before starting

Ranadeep Ghosh 2 Dec 9, 2022
Ukiyo - A simple, minimalist and efficient discord vanity URL sniper

Ukiyo - a simple, minimalist and efficient discord vanity URL sniper. Ukiyo is easy to use, has a very visually pleasing interface, and has great spee

null 13 Apr 14, 2022
EasyRequests is a minimalistic HTTP-Request Library that wraps aiohttp and asyncio in a small package that allows for sequential, parallel or even single requests

EasyRequests EasyRequests is a minimalistic HTTP-Request Library that wraps aiohttp and asyncio in a small package that allows for sequential, paralle

Avi 1 Jan 27, 2022
aiohttp-ratelimiter is a rate limiter for the aiohttp.web framework.

aiohttp-ratelimiter aiohttp-ratelimiter is a rate limiter for the aiohttp.web fr

JGL Technologies 4 Dec 11, 2022
Middleware for Starlette that allows you to store and access the context data of a request. Can be used with logging so logs automatically use request headers such as x-request-id or x-correlation-id.

starlette context Middleware for Starlette that allows you to store and access the context data of a request. Can be used with logging so logs automat

Tomasz Wójcik 300 Dec 26, 2022
tartiflette-aiohttp is a wrapper of aiohttp which includes the Tartiflette GraphQL Engine, do not hesitate to take a look of the Tartiflette project.

tartiflette-aiohttp is a wrapper of aiohttp which includes the Tartiflette GraphQL Engine. You can take a look at the Tartiflette API documentation. U

tartiflette 60 Nov 8, 2022