Python Serverless Microframework for AWS

Overview

AWS Chalice


Chalice is a framework for writing serverless apps in Python. It allows you to quickly create and deploy applications that use AWS Lambda. It provides:

  • A command line tool for creating, deploying, and managing your app
  • A decorator-based API for integrating with Amazon API Gateway, Amazon S3, Amazon SNS, Amazon SQS, and other AWS services
  • Automatic IAM policy generation

You can create REST APIs:

from chalice import Chalice

app = Chalice(app_name="helloworld")

@app.route("/")
def index():
    return {"hello": "world"}

Tasks that run on a periodic basis:

from chalice import Chalice, Rate

app = Chalice(app_name="helloworld")

# Automatically runs every 5 minutes
@app.schedule(Rate(5, unit=Rate.MINUTES))
def periodic_task(event):
    return {"hello": "world"}

You can connect a lambda function to an S3 event:

from chalice import Chalice

app = Chalice(app_name="helloworld")

# Whenever an object is uploaded to 'mybucket'
# this lambda function will be invoked.

@app.on_s3_event(bucket='mybucket')
def handler(event):
    print("Object uploaded for bucket: %s, key: %s"
          % (event.bucket, event.key))

As well as an SQS queue:

from chalice import Chalice

app = Chalice(app_name="helloworld")

# Invoke this lambda function whenever a message
# is sent to the ``my-queue-name`` SQS queue.

@app.on_sqs_message(queue='my-queue-name')
def handler(event):
    for record in event:
        print("Message body: %s" % record.body)

And several other AWS resources.

Once you've written your code, you just run chalice deploy and Chalice takes care of deploying your app.

$ chalice deploy
...
https://endpoint/api

$ curl https://endpoint/api
{"hello": "world"}

Up and running in less than 30 seconds. Give this project a try and share your feedback with us here on GitHub.

The documentation is available here.

Quickstart

In this tutorial, you'll use the chalice command line utility to create and deploy a basic REST API. This quickstart uses Python 3.7, but AWS Chalice supports all versions of Python supported by AWS Lambda, which includes python2.7, python3.6, python3.7, and python3.8. We recommend you use a version of Python 3. You can find the latest versions of Python on the Python download page.

To install Chalice, we'll first create and activate a virtual environment using Python 3.7:

$ python3 --version
Python 3.7.3
$ python3 -m venv venv37
$ . venv37/bin/activate

Next we'll install Chalice using pip:

$ python3 -m pip install chalice

You can verify you have chalice installed by running:

$ chalice --help
Usage: chalice [OPTIONS] COMMAND [ARGS]...
...

Credentials

Before you can deploy an application, be sure you have credentials configured. If you have previously configured your machine to run boto3 (the AWS SDK for Python) or the AWS CLI then you can skip this section.

If this is your first time configuring credentials for AWS you can follow these steps to quickly get started:

$ mkdir ~/.aws
$ cat >> ~/.aws/config
[default]
aws_access_key_id=YOUR_ACCESS_KEY_HERE
aws_secret_access_key=YOUR_SECRET_ACCESS_KEY
region=YOUR_REGION (such as us-west-2, us-west-1, etc)

If you want more information on all the supported methods for configuring credentials, see the boto3 docs.

Creating Your Project

The next thing we'll do is use the chalice command to create a new project:

$ chalice new-project helloworld

This will create a helloworld directory. Cd into this directory. You'll see several files have been created for you:

$ cd helloworld
$ ls -la
drwxr-xr-x   .chalice
-rw-r--r--   app.py
-rw-r--r--   requirements.txt

You can ignore the .chalice directory for now; the two main files we'll focus on are app.py and requirements.txt.

Let's take a look at the app.py file:

from chalice import Chalice

app = Chalice(app_name='helloworld')


@app.route('/')
def index():
    return {'hello': 'world'}

The new-project command created a sample app that defines a single view, /, that when called will return the JSON body {"hello": "world"}.

Deploying

Let's deploy this app. Make sure you're in the helloworld directory and run chalice deploy:

$ chalice deploy
Creating deployment package.
Creating IAM role: helloworld-dev
Creating lambda function: helloworld-dev
Creating Rest API
Resources deployed:
  - Lambda ARN: arn:aws:lambda:us-west-2:12345:function:helloworld-dev
  - Rest API URL: https://abcd.execute-api.us-west-2.amazonaws.com/api/

You now have an API up and running using API Gateway and Lambda:

$ curl https://qxea58oupc.execute-api.us-west-2.amazonaws.com/api/
{"hello": "world"}

Try making a change to the returned dictionary from the index() function. You can then redeploy your changes by running chalice deploy.

Next Steps

You've now created your first app using chalice. You can make modifications to your app.py file and rerun chalice deploy to redeploy your changes.

At this point, there are several next steps you can take.

  • Tutorials - Choose from among several guided tutorials that will give you step-by-step examples of various features of Chalice.
  • Topics - Deep dive into documentation on specific areas of Chalice. This contains more detailed documentation than the tutorials.
  • API Reference - Low level reference documentation on all the classes and methods that are part of the public API of Chalice.

If you're done experimenting with Chalice and you'd like to clean up, you can use the chalice delete command, and Chalice will delete all the resources it created when running the chalice deploy command.

$ chalice delete
Deleting Rest API: abcd4kwyl4
Deleting function arn:aws:lambda:region:123456789:helloworld-dev
Deleting IAM Role helloworld-dev

Feedback

We'd also love to hear from you. Please create any GitHub issues for additional features you'd like to see over at https://github.com/aws/chalice/issues. You can also chat with us on Gitter: https://gitter.im/awslabs/chalice

Comments
  • Addition of implicit partition support

    Addition of implicit partition support

    Issue #, if available: #792

    Description of changes:

    This PR contains the code changes necessary for implicitly supporting partitions as requested by issue #792 and the details of its comments.

    • Utilities
      • URL suffix lookup from the botocore EndpointResolver based on service and region (also from an existing ARN)
    • Deployment/Planning
      • Modifications to the intrinsic function parse_arn to return the dns_suffix for the ARN.
      • A new intrinsic function interrogate_profile has been added where parse_arn is unable to be utilized in the deployment plan.
    • Packaging
      • CloudFormation
        • AWS::Partition and AWS::URLSuffix have been used to fill in all the necessary characteristics for the resources that would need to be modified.
      • Terraform
        • The partition and dns_suffix attributes from data "aws_partition" "chalice" {} have been used in the templates to populate the necessary policies and resource definitions.

    By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

    opened by andrew-mcgrath 27
  • sample for writing tests for chalice based app

    sample for writing tests for chalice based app

    Hello,

    could you please provide a best-practice sample on how to write tests for a chalice based app? I looked through the tests folder, but I think most of the tests there are not applicable to my own code. I just want to write minimal code to test my own endpoints.

    Ideally including how the tests can be run on AWS Codebuild / AWS Codepipeline before deployment with CloudFormation.

    A short paragraph in the readme file would be great.

    Best regards, Dieter

    feature-request 
    opened by dimoMacke 27
  • terraform support

    terraform support

    Issue: closes #1121

    This adds in terraform support for chalice package. There are a few underlying goals: one was to provide provisioning flexibility and extensibility instead of having to layer in multiple provisioning executions for an app (i.e. reference an S3 bucket, or use a Chalice lambda function for a step function, etc). Another was to enable chalice use more naturally in orgs that mandate/prefer terraform for infrastructure provisioning.

    I've done a few manual e2e tests on a sample app (api gw, 5 lambdas, scheduled func, managed role). The end result is actually a bit faster than chalice's direct api calls for deploy/destroy.

    https://gist.github.com/kapilt/1794c092cd2f17e0faadb3e00a44bc33

    opened by kapilt 24
  • Authorizer won't work after deployment.

    Authorizer won't work after deployment.

    After each successful deployment, I am facing some problems with Authorizer.

    So whenever I run the following command:

    chalice deploy --stage development
    

    My endpoint responds with the following incorrect headers and payload:

    HTTP/1.1 500 Internal Server Error
    Connection: keep-alive
    Content-Length: 16
    Content-Type: application/json
    Date: Tue, 09 Jan 2018 05:27:23 GMT
    x-amzn-ErrorType: AuthorizerConfigurationException
    x-amzn-RequestId: c549c9dc-f4fd-11e7-9066-2d91a1bf8bf4
    
    {
        "message": null
    }
    

    My endpoint definition is as follows:

    @app.route('/dataset', methods=['POST'], authorizer=auth0, cors=True)
      ...
    

    The solution is to log in to the console, go to API Gateway > Authorizers > Edit, and click Save without making any changes. It will ask to grant permission to invoke the ARN; click Grant and re-deploy the API to its stage.


    After re-deploying, the same request started working and I am getting the correct headers as well:

    HTTP/1.1 200 OK
    Access-Control-Allow-Headers: Authorization,Content-Type,X-Amz-Date,X-Amz-Security-Token,X-Api-Key
    Access-Control-Allow-Origin: *
    Connection: keep-alive
    Content-Length: 3278
    Content-Type: application/json
    Date: Tue, 09 Jan 2018 05:37:32 GMT
    X-Amzn-Trace-Id: sampled=0;root=1-5a54551b-f344e6467c4fb4d33d05a081
    x-amzn-RequestId: 2ef0086e-f4ff-11e7-9473-3fe4e73405a2
    
    []
    
    investigating 
    opened by abdullah353 24
  • [proposal] Add support for Kinesis and DynamoDB stream events

    [proposal] Add support for Kinesis and DynamoDB stream events

    Similar to the S3, SNS and SQS event sources implemented earlier, it would be awesome (especially for my current use case) to also support DynamoDB streams - and while we're at it, supporting Kinesis streams should also be easy.

    Fortunately, the implementation in #886 (proposed in #884) should be relatively easy to adapt to support these other sources, since the Lambda event source mapping function is the same.

    Proposal

    Public API

    The DynamoDB streams and Kinesis streams would be implemented nearly identically to the existing event sources - a parameterized decorator.

    @app.on_dynamodb_stream_event(table='mytable', batch_size=30, starting_position='TRIM_HORIZON')
    def handler(event):
        for record in event:
            print(record.body)
    
    @app.on_kinesis_stream_event(stream='mystream', batch_size=100, starting_position='LATEST')
    def handler(event):
        for record in event:
            print(record.body)
    

    Backend

    While we could maintain the existing SQS event source as-is, because a significant amount of functionality would be shared between the three stream sources (SQS, DynamoDB and Kinesis), it may make more sense to rename the backend "sqs_event_source" methods as "stream_event_source" methods, and then map each of the three stream decorators to the same core event source with different configs. Then you could build up the event source arn using the source as another parameter.

    proposals 
    opened by kgutwin 23
  • Multiple Lambdas in 1 Chalice project

    Multiple Lambdas in 1 Chalice project

    Could we choose whether to deploy 1 big function (the current way) or to deploy separate functions for each routing definition? This could save resources and improve execution times for more robust projects.

    feature-request out-of-scope 
    opened by honzous 23
  • Hey, I just met you, and this is crazy, but lets put routes in a module... Blueprints maybe?

    Hey, I just met you, and this is crazy, but lets put routes in a module... Blueprints maybe?

    Use case

    I have been working on a chalice based app lately that is growing a bit beyond having all routes set up in app.py.

    Being heavily influenced by Flask's API, I figured Blueprints might be a welcome addition for Chalice.

    Status of this PR

    This is mostly intended as a provisional or "Request for Comment" PR. If the maintainers of Chalice like this approach, I'd be happy to continue work on adding tests and documentation for this :)

    Currently only very basic functionality exists:

    • Construct a chalice.blueprint.Blueprint object
    • Add routes to your chalice.blueprint.Blueprint instance with the route decorator, which should accept all the same keyword arguments as a vanilla Chalice.route... though I'm not sure if they'll all work :)
    • Support for current_request access in Blueprint routes
    • Support for url_for in Blueprint routes.

    Example!

    /app.py

    from chalice import Chalice
    
    from chalicelib.blueprint import foo
    
    app = Chalice(app_name="helloworld")
    app.debug = True
    
    @app.route("/")
    def index():
        return {"hello": "world"}
    
    app.register_blueprint(foo, url_prefix='/foo/')
    

    /chalicelib/blueprint.py

    from chalice.blueprint import Blueprint
    
    foo = Blueprint('foo')
    
    @foo.route('/bar', methods=["GET", "POST"])
    def bar():
        request = foo.current_request
        return {"result": "bar!", "method": request.method}
    
    @foo.route('/baz')
    def baz():
        return {"result": foo.url_for('bar')}
    

    Demo!

    blueprintsmaybe ewdurbin$ chalice local > /dev/null 2>&1 &
    [1] 10084
    blueprintsmaybe ewdurbin$ curl -w "\n" http://localhost:8000/
    {"hello": "world"}
    blueprintsmaybe ewdurbin$ curl -w "\n" http://localhost:8000/foo/bar
    {"result": "bar!", "method": "GET"}
    blueprintsmaybe ewdurbin$ curl -w "\n" -XPOST http://localhost:8000/foo/bar
    {"result": "bar!", "method": "POST"}
    blueprintsmaybe ewdurbin$ curl -w "\n" http://localhost:8000/foo/baz
    {"result": "/foo/bar"}
    
    opened by ewdurbin 18
  • Set arbitrary headers for APIG (will enable CORS)

    Set arbitrary headers for APIG (will enable CORS)

    Currently I can define a resource like this:

    @app.route('/scalars', methods=['GET', 'OPTIONS'])
    def scalars():
        return {'mau': 27048, 'wau': 7003}

    The OPTIONS will help me with enabling CORS in APIG, but I'm still missing the 'Access-Control-Allow-Origin' header, so I enable it manually in the console after each deploy.

    One approach would be to configure headers in the method. Another would be to call the "Enable CORS" magic button in APIG.

    WDYT?

    feature-request accepted 
    opened by amir-mehler 18
  • Vpc support

    Vpc support

    new PR for https://github.com/aws/chalice/pull/457 since I can't write to that repo anymore. Closes issue https://github.com/aws/chalice/issues/413

    The PR adds two new fields that can be configured so as to allow support for VPCs:

    • subnet_ids
    • security_group_ids
    opened by lime-green 16
  • chalice deployment package missing libraries

    chalice deployment package missing libraries

    When my Bitbucket pipeline runs chalice deploy --stage dev, the deployment works just fine. When I run chalice deploy --stage dev from my command line, the package deployed to AWS is missing most libraries... so I get errors like Unable to import module 'app': No module named 'phpserialize' when the Lambda is executed.

    I've verified that the deployment packages differ by running deployments, downloading the deployment packages from AWS (using the Lambda console), and diff'ing the contents of the .zip files.

    Clearly there's some difference between my personal environment and the Bitbucket pipeline environment that causes a difference in how the deployment .zip is built, but I can't figure it out. Here's what I've tried:

    • Changing the commands to/from chalice deploy && chalice deploy --stage dev. No effect.
    • Rebuilding my vendor dependencies: In the past I've been able to 'fix' deployments missing modules by manually rebuilding the wheels in the vendor directory, but this hasn't worked in the past few days: pip download x && pip wheel x-y.y.y.tar.gz && rm *.gz
    • python/pip versions: Bitbucket was using python 3.6.3 & pip 9.0.1 and my personal machine was python 3.6.5 & pip 9.0.3. I changed my versions to 3.6.3/9.0.1 and my deployment packages were still missing libraries. I changed the Bitbucket pipeline to 3.6.5/10.0.1 and the deployment was still perfect.
    • I've also reviewed issues: #106, #155, #189

    Anyone else seen this problem? It's plaguing my office here and some of us have turned to pushing all changes to Bitbucket because we can't trust a local chalice deploy to work.

    Bitbucket OS: Linux 44faaabc-f42c-4803-993e-f72fa3365e1e 4.14.48-coreos-r2 #1 SMP Thu Jun 14 08:23:03 UTC 2018 x86_64 GNU/Linux Laptop OS: Darwin Macbook.localdomain 17.6.0 Darwin Kernel Version 17.6.0: Tue May 8 15:22:16 PDT 2018; root:xnu-4570.61.1~1/RELEASE_X86_64 x86_64

    Here's my requirements.txt file:

    chalice==1.3.0
    phpserialize==1.3
    yoyo-migrations==5.1.5
    

    Here're the contents of my vendor directory:

    MarkupSafe-1.0-cp36-cp36m-macosx_10_13_x86_64.whl	idna-2.7-py2.py3-none-any.whl
    PyYAML-3.13-cp36-cp36m-macosx_10_13_x86_64.whl		iniherit-0.3.9-py3-none-any.whl
    argh-0.26.2-py2.py3-none-any.whl			jmespath-0.9.3-py2.py3-none-any.whl
    argparse-1.4.0-py2.py3-none-any.whl			pathtools-0.1.2-py3-none-any.whl
    asn1crypto-0.24.0-py2.py3-none-any.whl			phpserialize-1.3-py3-none-any.whl
    bidict-0.17.2-py3-none-any.whl				pycparser-2.18-py2.py3-none-any.whl
    boto3-1.7.42-py2.py3-none-any.whl			python_dateutil-2.7.3-py2.py3-none-any.whl
    botocore-1.10.42-py2.py3-none-any.whl			pytz-2018.4-py2.py3-none-any.whl
    cffi-1.11.5-cp36-cp36m-macosx_10_6_intel.whl		s3transfer-0.1.13-py2.py3-none-any.whl
    ciso8601-2.0.1.tar.gz					schemapi-0.3.0-py3-none-any.whl
    credstash-1.14.0-py3-none-any.whl			simplejson-3.15.0-cp36-cp36m-macosx_10_13_x86_64.whl
    cryptography-2.0.3-cp36-cp36m-macosx_10_6_intel.whl	six-1.11.0-py2.py3-none-any.whl
    dateutils-0.6.6-py3-none-any.whl			typing-3.5.3.0-py3-none-any.whl
    docutils-0.14-py3-none-any.whl				watchdog-0.8.3-cp36-cp36m-macosx_10_13_x86_64.whl
    expiringdict-1.1.4-py3-none-any.whl
    

    Here's a sanitized copy of my Bitbucket pipeline file:

    image: python:3.6.5
    
    pipelines:
      branches:
        develop:
          - step:
              script:
                - export AWS_ACCESS_KEY_ID=$DEV_AWS_ACCESS_KEY_ID
                - export AWS_SECRET_ACCESS_KEY=$DEV_AWS_SECRET_ACCESS_KEY
                - pip install -r requirements.txt
                - pip install pytest
                - python -m pytest tests/
                - chalice deploy --stage dev
              deployment: staging
    

    Here's a diff of the deployment package (Bitbucket deployed vs. manually deployed):

    Only in notifications-poller-dev-Bitbucket: MarkupSafe-1.0.dist-info
    Only in notifications-poller-dev-Bitbucket: PyYAML-3.13.dist-info
    Only in notifications-poller-dev-Bitbucket: _yaml.cpython-36m-x86_64-linux-gnu.so
    Only in notifications-poller-dev-Bitbucket: bidict
    Only in notifications-poller-dev-Bitbucket: bidict-0.17.2.dist-info
    Only in notifications-poller-dev-Bitbucket: ciso8601-2.0.1.dist-info
    Only in notifications-poller-dev-Bitbucket: ciso8601.cpython-36m-x86_64-linux-gnu.so
    Only in notifications-poller-dev-Bitbucket: dateutils
    Only in notifications-poller-dev-Bitbucket: dateutils-0.6.6.dist-info
    Only in notifications-poller-dev-Bitbucket: iniherit
    Only in notifications-poller-dev-Bitbucket: iniherit-0.3.9.dist-info
    Only in notifications-poller-dev-Bitbucket: markupsafe
    Only in notifications-poller-dev-Bitbucket: pathtools
    Only in notifications-poller-dev-Bitbucket: pathtools-0.1.2.dist-info
    Only in notifications-poller-dev-Bitbucket: phpserialize-1.3.dist-info
    Only in notifications-poller-dev-Bitbucket: phpserialize.py
    Only in notifications-poller-dev-Bitbucket: pycparser
    Only in notifications-poller-dev-Bitbucket: pycparser-2.18.dist-info
    Only in notifications-poller-dev-Bitbucket: schemapi
    Only in notifications-poller-dev-Bitbucket: schemapi-0.3.0.dist-info
    Only in notifications-poller-dev-Bitbucket: watchdog
    Only in notifications-poller-dev-Bitbucket: watchdog-0.8.3.dist-info
    Only in notifications-poller-dev-Bitbucket: yaml
    
    investigating 
    opened by ereboschi 15
  • Initial commit of S3 events

    Initial commit of S3 events

    This commit adds support for triggering a lambda function based on an S3 event happening such as an object being created or deleted. This is configured in chalice through a new on_s3_event decorator. Chalice assumes your s3 bucket already exists. However, it will configure bucket configuration and function policies for you. Given this is an existing bucket, we account for existing notification configurations already in place, and intelligently merge in the chalice specific lambda configuration as needed. Same goes for chalice delete, it will only remove the lambda configuration snippet it added to the S3 bucket notification config.

    Before this can be merged, we need a plan for this feature and chalice package.

    Because you provide chalice with an existing bucket, we aren't able to describe this in a CFN template (that is, CFN can't adopt existing resources into a stack). As a result, the chalice package command fails. There's a few options here. We can implement initial support for managed resources so we can create the bucket for you, or we can create a custom cfn resource that can apply this config to an existing bucket. Implementing managed resources doesn't solve the problem though of what to do with existing buckets. I think this is an important enough use case that I don't want to remove support for it just because CFN can't support this, so for me it boils down to trying to make this work with custom resources or just not support S3 events in chalice package until we add support for managed resources. Then at least customers would have a path forward if they wanted to use CFN to deploy their app. They'd just have to accept the tradeoff that a new bucket would be created for them.

    Issue #, if available:

    #855

    opened by jamesls 15
  • Bump attrs from 21.4.0 to 22.2.0

    Bump attrs from 21.4.0 to 22.2.0

    Bumps attrs from 21.4.0 to 22.2.0.

    Release notes

    Sourced from attrs's releases.

    22.2.0

    Highlights

    It's been a lot busier than the changelog indicates, but a lot of the work happened under the hood (like some impressive performance improvements). But we've still got one big new feature that's worthy of the holidays:

    Fields now have an alias argument that allows you to set the field's name in the generated __init__ method. This is especially useful for those who aren't fans of attrs's behavior of stripping underscores from private attribute names.

    Special Thanks

    This release would not be possible without my generous sponsors! Thank you to all of you making sustainable maintenance possible! If you would like to join them, go to https://github.com/sponsors/hynek and check out the sweet perks!

    Above and Beyond

    Variomedia AG (@​variomedia), Tidelift (@​tidelift), Sentry (@​getsentry), HiredScore (@​HiredScore), FilePreviews (@​filepreviews), and Daniel Fortunov (@​asqui).

    Maintenance Sustainers

    @​rzijp, Adam Hill (@​adamghill), Dan Groshev (@​si14), Tamir Bahar (@​tmr232), Adi Roiban (@​adiroiban), Magnus Watn (@​magnuswatn), David Cramer (@​dcramer), Moving Content AG (@​moving-content), Stein Magnus Jodal (@​jodal), Iwan Aucamp (@​aucampia), ProteinQure (@​ProteinQure), Jesse Snyder (@​jessesnyder), Rivo Laks (@​rivol), Thomas Ballinger (@​thomasballinger), @​medecau, Ionel Cristian Mărieș (@​ionelmc), The Westervelt Company (@​westerveltco), Philippe Galvan (@​PhilippeGalvan), Birk Jernström (@​birkjernstrom), Jannis Leidel (@​jezdez), Tim Schilling (@​tim-schilling), Chris Withers (@​cjw296), and Christopher Dignam (@​chdsbd).

    Not to forget 2 more amazing humans who chose to be generous but anonymous!

    Full Changelog

    Backwards-incompatible Changes

    • Python 3.5 is not supported anymore. #988

    Deprecations

    • Python 3.6 is now deprecated and support will be removed in the next release. #1017

    Changes

    • attrs.field() now supports an alias option for explicit __init__ argument names.

      Get __init__ signatures matching any taste, peculiar or plain! The PEP 681 compatible alias option can be used to override private attribute name mangling, or add other arbitrary field argument name overrides. #950

    • attrs.NOTHING is now an enum value, making it possible to use with e.g. typing.Literal. #983

    • Added missing re-import of attr.AttrsInstance to the attrs namespace. #987

    • Fix slight performance regression in classes with custom __setattr__ and speedup even more. #991

    • Class-creation performance improvements by switching performance-sensitive templating operations to f-strings.

      You can expect an improvement of about 5% -- even for very simple classes. #995

    ... (truncated)

    Changelog

    Sourced from attrs's changelog.

    22.2.0 - 2022-12-21

    Backwards-incompatible Changes

    • Python 3.5 is not supported anymore. #988

    Deprecations

    • Python 3.6 is now deprecated and support will be removed in the next release. #1017

    Changes

    • attrs.field() now supports an alias option for explicit __init__ argument names.

      Get __init__ signatures matching any taste, peculiar or plain! The PEP 681 compatible alias option can be used to override private attribute name mangling, or add other arbitrary field argument name overrides. #950

    • attrs.NOTHING is now an enum value, making it possible to use with e.g. typing.Literal. #983

    • Added missing re-import of attr.AttrsInstance to the attrs namespace. #987

    • Fix slight performance regression in classes with custom __setattr__ and speedup even more. #991

    • Class-creation performance improvements by switching performance-sensitive templating operations to f-strings.

      You can expect an improvement of about 5% -- even for very simple classes. #995

    • attrs.has() is now a TypeGuard for AttrsInstance. That means that type checkers know a class is an instance of an attrs class if you check it using attrs.has() (or attr.has()) first. #997

    • Made attrs.AttrsInstance stub available at runtime and fixed type errors related to the usage of attrs.AttrsInstance in Pyright. #999

    • On Python 3.10 and later, call abc.update_abstractmethods() on dict classes after creation. This improves the detection of abstractness. #1001

    • attrs's pickling methods now use dicts instead of tuples. That is safer and more robust across different versions of a class. #1009

    • Added attrs.validators.not_(wrapped_validator) to logically invert wrapped_validator by accepting only values where wrapped_validator rejects the value with a ValueError or TypeError (by default, exception types configurable). #1010

    • The type stubs for attrs.cmp_using() now have default values. #1027

    • To conform with PEP 681, attr.s() and attrs.define() now accept unsafe_hash in addition to hash. #1065

    ... (truncated)

    Commits
    • a9960de Prepare 22.2.0
    • 566248a Don't linkcheck tree links
    • 0f62805 Make towncrier marker independent from warning
    • b9f35eb Fix minor stub issues (#1072)
    • 4ad4ea0 Use MyST-native include
    • 519423d Use MyST-native doctest blocks in all MD
    • 403adab Remove stray file
    • 6957e4a Use new typographic branding in the last rst file, too
    • 1bb2864 Convert examples.rst to md
    • c1c24cc Convert glossary.rst to md
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    dependency-issue 
    opened by dependabot[bot] 0
  • Amazon EventBridge Scheduler Support

    Amazon EventBridge Scheduler Support

    Hello,

    Are there plans to support Amazon EventBridge Scheduler as an Event Source?

    It seems more flexible with customizing the input of a scheduled event, whereas EventBridge Rules do not support input transformers on scheduled events.

    opened by dubielt 0
  • Chalice deploy is not adding the JSON file when deployed

    Chalice deploy is not adding the JSON file when deployed

    The project runs fine when hosted locally with chalice local.

    When running chalice deploy, the deployed package is missing the project.json file referenced here:

    gc = pygsheets.authorize(service_file='project.json')

    How do I fix this?
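One likely fix, assuming the file can ship with the app: Chalice includes everything under a chalicelib/ directory in the deployment package, so the file can live there and be referenced relative to app.py rather than the working directory. A sketch (project.json is the file from the question; the pygsheets call is left commented out):

```python
import os

# chalicelib/ contents are bundled into the deployment package,
# so build the path relative to this module rather than the CWD.
service_file = os.path.join(
    os.path.dirname(os.path.abspath(__file__)),
    'chalicelib', 'project.json')

# gc = pygsheets.authorize(service_file=service_file)
```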

    opened by vensilver 0
  • Question: Websocket connection in chalice local

    Question: Websocket connection in chalice local

    I apologize if the question has already been asked before, I could not find anything substantial looking through the issues and docs. Is there a reference or minimal example which demonstrates how to connect to websocket when running chalice local?

    Following is my setup; however, I encounter errors when trying to connect to the websocket, and only when running locally:

    Enable and register websocket support and session

    app = Chalice(app_name="testapp")
    app.experimental_feature_flags.update(
        [
            "WEBSOCKETS",
        ]
    )
    app.websocket_api.session = boto3.Session()
    
    # It doesn't get this far, but including below for completeness
    @app.on_ws_message()
    def _ws_message(event):
        return Response(
            {
                "message": "Success!",
            },
            status_code=200,
        )
    

    Client code for connection

    Working version (deployed)

    url = "wss://####.execute-api.#####.amazonaws.com/api/"
    
    async def connect():
        async with websockets.connect(
            url,
        ) as ws:
            print("Connected to the switch.")
    
    if __name__ == "__main__":
        asyncio.run(connect())
    

    Non-working version (chalice local)

    import asyncio
    import websockets

    url = "ws://localhost:8000/"
    
    async def connect():
        async with websockets.connect(
            url,
        ) as ws:
            print("Connected to the switch.")
    
    if __name__ == "__main__":
        asyncio.run(connect())
    

    Error received for the above non-working version:

    websockets.exceptions.InvalidStatusCode: server rejected WebSocket connection: HTTP 200
    

    From what I understand, it looks like the connection does not get upgraded properly in local mode. It just hits the "/" HTTP route and accepts the response, which is just {"message": "OK"} in this case.

    If someone could point me in the right direction, that'd be great!

    Thank you for your help!

    opened by sreenathan-nair 0
  • DeleteConflict error with custom policy file

    With the same call to chalice deploy I am deploying a REST API and a lambda function. This is so that I can run the Lambda function asynchronously after responding to the request.

    import time

    import boto3
    from chalice import Chalice

    app = Chalice(app_name='mvp-api')
    lambda_client = boto3.client('lambda')

    @app.route('/v1/lambdaTest')
    def test_endpoint():
        _ = lambda_client.invoke(
            FunctionName='mvp-api-dev-testFunction',
            InvocationType='Event',
            Payload="{}"
        )
    
        return {'status_code': 200}
    
    
    @app.lambda_function(name='testFunction')
    def test_lambda_function(event, context):
        time.sleep(20)
    

    The deployed API handler does not automatically have permission to invoke mvp-api-dev-testFunction. This can be fixed by updating the policy in the AWS console, but it must be done after every deployment.

    To avoid this, I created a policy file .chalice/policy-dev.json with the following policy configuration:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "logs:CreateLogGroup",
                    "logs:CreateLogStream",
                    "logs:PutLogEvents"
                ],
                "Resource": "arn:*:logs:*:*:*"
            },
            {
                "Sid": "InvokePermission",
                "Effect": "Allow",
                "Action": [
                    "lambda:InvokeFunction"
                ],
                "Resource": "*"
            }
        ]
    }
    

    The .chalice/config.json looks like this, with autogen_policy set to false, and pointing to the above policy file policy-dev.json:

    {
      "version": "2.0",
      "app_name": "mvp-api",
      "stages": {
        "dev": {
          "lambda_memory_size": 3008,
          "api_gateway_stage": "api",
          "autogen_policy": false,
          "iam_policy_file": "policy-dev.json"
        }
      }
    }
    

    After chalice deploy, the API works as expected, but an error is reported in the console. Here is the stack trace:

    (-----) user@system:~/projects/mvp-api$ chalice deploy
    /home/-----/anaconda3/envs/mvp38/lib/python3.8/site-packages/_distutils_hack/__init__.py:30: UserWarning: Setuptools is replacing distutils.
      warnings.warn("Setuptools is replacing distutils.")
    Creating deployment package.
    Reusing existing deployment package.
    Updating policy for IAM role: mvp-api-dev-testFunction
    Updating lambda function: mvp-api-dev-testFunction
    Updating policy for IAM role: mvp-api-dev-api_handler
    Updating lambda function: mvp-api-dev
    Updating rest API
    Deleting IAM role: mvp-api-dev
    Traceback (most recent call last):
      File "/home/-----/anaconda3/envs/mvp38/lib/python3.8/site-packages/chalice/deploy/deployer.py", line 376, in deploy
        return self._deploy(config, chalice_stage_name)
      File "/home/-----/anaconda3/envs/mvp38/lib/python3.8/site-packages/chalice/deploy/deployer.py", line 392, in _deploy
        self._executor.execute(plan)
      File "/home/-----/anaconda3/envs/mvp38/lib/python3.8/site-packages/chalice/deploy/executor.py", line 42, in execute
        getattr(self, '_do_%s' % instruction.__class__.__name__.lower(),
      File "/home/-----/anaconda3/envs/mvp38/lib/python3.8/site-packages/chalice/deploy/executor.py", line 55, in _do_apicall
        result = method(**final_kwargs)
      File "/home/-----/anaconda3/envs/mvp38/lib/python3.8/site-packages/chalice/awsclient.py", line 1062, in delete_role
        client.delete_role(RoleName=name)
      File "/home/-----/anaconda3/envs/mvp38/lib/python3.8/site-packages/botocore/client.py", line 508, in _api_call
        return self._make_api_call(operation_name, kwargs)
      File "/home/-----/anaconda3/envs/mvp38/lib/python3.8/site-packages/botocore/client.py", line 915, in _make_api_call
        raise error_class(parsed_response, operation_name)
    botocore.errorfactory.DeleteConflictException: An error occurred (DeleteConflict) when calling the DeleteRole operation: Cannot delete entity, must detach all policies first.
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/home/-----/anaconda3/envs/mvp38/lib/python3.8/site-packages/chalice/cli/__init__.py", line 636, in main
        return cli(obj={})
      File "/home/-----/anaconda3/envs/mvp38/lib/python3.8/site-packages/click/core.py", line 1130, in __call__
        return self.main(*args, **kwargs)
      File "/home/-----/anaconda3/envs/mvp38/lib/python3.8/site-packages/click/core.py", line 1055, in main
        rv = self.invoke(ctx)
      File "/home/-----/anaconda3/envs/mvp38/lib/python3.8/site-packages/click/core.py", line 1657, in invoke
        return _process_result(sub_ctx.command.invoke(sub_ctx))
      File "/home/-----/anaconda3/envs/mvp38/lib/python3.8/site-packages/click/core.py", line 1404, in invoke
        return ctx.invoke(self.callback, **ctx.params)
      File "/home/-----/anaconda3/envs/mvp38/lib/python3.8/site-packages/click/core.py", line 760, in invoke
        return __callback(*args, **kwargs)
      File "/home/-----/anaconda3/envs/mvp38/lib/python3.8/site-packages/click/decorators.py", line 26, in new_func
        return f(get_current_context(), *args, **kwargs)
      File "/home/-----/anaconda3/envs/mvp38/lib/python3.8/site-packages/chalice/cli/__init__.py", line 189, in deploy
        deployed_values = d.deploy(config, chalice_stage_name=stage)
      File "/home/-----/anaconda3/envs/mvp38/lib/python3.8/site-packages/chalice/deploy/deployer.py", line 378, in deploy
        raise ChaliceDeploymentError(e)
    chalice.deploy.deployer.ChaliceDeploymentError: ERROR - While deploying your chalice application, received the following error:
    
     An error occurred (DeleteConflict) when calling the DeleteRole operation: 
     Cannot delete entity, must detach all policies first.
    

    Any ideas on the meaning of this error, and how to correct it?

    Many thanks to you!
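    A hedged reading (an editor's sketch, not from the thread): the trace fails on Deleting IAM role: mvp-api-dev, so Chalice is cleaning up a role it no longer manages, and that role still has a policy attached — possibly the one added by hand in the console earlier. Detaching that policy from the role in the IAM console and re-running chalice deploy may let the delete succeed. Another option, assuming the per-function lambda_functions stage key that Chalice's configuration file supports, is to give the extra function its own policy file so no console edits are needed:

```json
{
  "version": "2.0",
  "app_name": "mvp-api",
  "stages": {
    "dev": {
      "lambda_memory_size": 3008,
      "api_gateway_stage": "api",
      "autogen_policy": false,
      "iam_policy_file": "policy-dev.json",
      "lambda_functions": {
        "testFunction": {
          "iam_policy_file": "policy-dev.json"
        }
      }
    }
  }
}
```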

    opened by cjmcmurtrie 1
  • How to get scopes from authorizer to inspect it?

    According to the docs:

    Scopes can also be used with custom authorizers and built-in authorizers. These authorizers will need to inspect the access token to determine if access should be granted based on the scopes configured for the authorizer and route.

    from chalice import AuthResponse, Blueprint

    extra_routes = Blueprint(__name__)
    
    @extra_routes.authorizer()
    def demo_auth(auth_request):
        token = auth_request.token
        # We can decode the token here and read the scopes set in the token
        # payload, but how do we get the scopes configured on the route via
        # with_scopes?
        if token == 'allow':
            return AuthResponse(routes=['/'], principal_id='user')
        else:
            return AuthResponse(routes=[], principal_id='user')
    
    
    @extra_routes.route('/',  methods=['GET'], authorizer=demo_auth.with_scopes(["author", "editor"]))
    def index():
        return {'context': extra_routes.current_request.context}
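    One possible approach (an editor's sketch, not part of the Chalice API): the scopes granted to the caller are usually carried in the access token itself, so the authorizer can decode them from the JWT payload and compare them against whatever scopes it expects. The token_scopes helper below is hypothetical and deliberately skips signature verification, which a real authorizer must perform first:

```python
import base64
import json

# Hypothetical helper: pull the 'scope' claim out of a JWT access
# token's payload segment. Signature verification is intentionally
# omitted; a real authorizer must verify the token before trusting it.
def token_scopes(token):
    payload_b64 = token.split('.')[1]
    payload_b64 += '=' * (-len(payload_b64) % 4)  # restore base64 padding
    payload = json.loads(base64.urlsafe_b64decode(payload_b64))
    # OAuth2 convention: 'scope' is a space-delimited string of scopes.
    return payload.get('scope', '').split()
```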
    
    opened by hitrust 0
Owner
Amazon Web Services
Serverless function for replicating weather underground data to an influxDB database

Weather Underground → Influx DB Serverless function for replicating Weather U

Ben Meier 1 Dec 30, 2021
Portfolio-tracker - This serverless application lets you keep track of your investment portfolios

José Coelho 1 Jan 23, 2022
Zappa makes it super easy to build and deploy server-less, event-driven Python applications on AWS Lambda + API Gateway.

Zappa makes it super easy to build and deploy server-less, event-driven Python applications (including, but not limited to, WSGI web apps) on AWS Lambda + API Gateway. Think of it as "serverless" web hosting for your Python apps. That means infinite scaling, zero downtime, zero maintenance - and at a fraction of the cost of your current deployments!

Zappa 2.2k Jan 9, 2023
DIAL(Did I Alert Lambda?) is a centralised security misconfiguration detection framework which completely runs on AWS Managed services like AWS API Gateway, AWS Event Bridge & AWS Lambda

CRED 71 Dec 29, 2022
Django Serverless Cron - Run cron jobs easily in a serverless environment

Paul Onteri 41 Dec 16, 2022
A simple URL shortener app using Python AWS Chalice, AWS Lambda and AWS Dynamodb.

url-shortener-chalice A simple URL shortener app using AWS Chalice. Please make sure you configure your AWS credentials using AWS CLI before starting

Ranadeep Ghosh 2 Dec 9, 2022
Jenkins-AWS-CICD - Implement Jenkins CI/CD with AWS CodeBuild and AWS CodeDeploy, build a python flask web application.

Ning 1 Jan 1, 2022
A python-image-classification web application project, written in Python and served through the Flask Microframework

A python-image-classification web application project, written in Python and served through the Flask Microframework. This Project implements the VGG16 convolutional neural network, through Keras and Tensorflow wrappers, to make predictions on uploaded images.

Gerald Maduabuchi 19 Dec 12, 2022
An AWS Pentesting tool that lets you use one-liner commands to backdoor an AWS account's resources with a rogue AWS account - or share the resources with the entire internet 😈

Brandon Galbraith 276 Mar 3, 2021
Automated AWS account hardening with AWS Control Tower and AWS Step Functions

Automate activities in Control Tower provisioned AWS accounts Table of contents Introduction Architecture Prerequisites Tools and services Usage Clean

AWS Samples 20 Dec 7, 2022
AWS Interactive CLI - Allows you to execute complex AWS commands by chaining one or more other AWS CLI dependencies

Rafael Torres 2 Dec 10, 2021
Implement backup and recovery with AWS Backup across your AWS Organizations using a CI/CD pipeline (AWS CodePipeline).

Backup and Recovery with AWS Backup This repository provides you with a management and deployment solution for implementing Backup and Recovery with A

AWS Samples 8 Nov 22, 2022
This is a small notes web app, with python and flask microframework. Using sqlite3

Python Notes App. This is a small web application made with flask-python for adding notes easily and quickly. Dependencies. You can create a virtual env

Eduard 1 Dec 26, 2021
A toolkit for developing and deploying serverless Python code in AWS Lambda.

Python-lambda is a toolset for developing and deploying serverless Python code in AWS Lambda. A call for contributors With python-lambda and pytube bo

Nick Ficano 1.4k Jan 3, 2023
Python Flask API service, backed by DynamoDB, running on AWS Lambda using the traditional Serverless Framework.

Serverless Framework Python Flask API service backed by DynamoDB on AWS Python Flask API service, backed by DynamoDB, running on AWS Lambda using the

Andreu Jové 0 Apr 17, 2022
OpenTracing instrumentation for the Flask microframework

Flask-OpenTracing This package enables distributed tracing in Flask applications via The OpenTracing Project. Once a production system contends with r

3rd-Party OpenTracing API Contributions 133 Dec 19, 2022
💻 A fully functional local AWS cloud stack. Develop and test your cloud & Serverless apps offline!

LocalStack - A fully functional local AWS cloud stack LocalStack provides an easy-to-use test/mocking framework for developing Cloud applications. Cur

LocalStack 45.3k Jan 2, 2023
Cookiecutter templates for Serverless applications using AWS SAM and the Rust programming language.

Cookiecutter SAM template for Lambda functions in Rust This is a Cookiecutter template to create a serverless application based on the Serverless Appl

AWS Samples 24 Nov 11, 2022