AWS SDK for Python

Overview

Boto3 - The AWS SDK for Python

Boto3 is the Amazon Web Services (AWS) Software Development Kit (SDK) for Python, which allows Python developers to write software that makes use of services like Amazon S3 and Amazon EC2. You can find the latest, most up-to-date documentation at our doc site, including a list of supported services.

Getting Started

Assuming that you have Python and virtualenv installed, you can either install the latest release from PyPI:

$ python -m pip install boto3

or set up your environment and install from source:

$ git clone https://github.com/boto/boto3.git
$ cd boto3
$ virtualenv venv
...
$ . venv/bin/activate
$ python -m pip install -r requirements.txt
$ python -m pip install -e .

Using Boto3

After installing boto3, set up credentials (in e.g. ~/.aws/credentials):

[default]
aws_access_key_id = YOUR_KEY
aws_secret_access_key = YOUR_SECRET

Then, set up a default region (in e.g. ~/.aws/config):

[default]
region=us-east-1

Other credential configuration methods can be found here.
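
Credentials and a region can also be supplied programmatically through a Session; a minimal sketch (the key values are placeholders):

import boto3

# Placeholder credentials; prefer the shared config files or IAM roles in practice.
session = boto3.Session(
    aws_access_key_id='YOUR_KEY',
    aws_secret_access_key='YOUR_SECRET',
    region_name='us-east-1',
)
s3 = session.resource('s3')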

Then, from a Python interpreter:

>>> import boto3
>>> s3 = boto3.resource('s3')
>>> for bucket in s3.buckets.all():
...     print(bucket.name)
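
The resource interface above is the higher-level API; for reference, the same listing with the low-level client looks like this:

>>> client = boto3.client('s3')
>>> for bucket in client.list_buckets()['Buckets']:
...     print(bucket['Name'])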

Running Tests

You can run tests in all supported Python versions using tox. By default, it will run all of the unit and functional tests, but you can also specify your own nosetests options. Note that this requires that you have all supported versions of Python installed, otherwise you must pass -e or run the nosetests command directly:

$ tox
$ tox -- unit/test_session.py
$ tox -e py26,py33 -- integration/

You can also run individual tests with your default Python version:

$ nosetests tests/unit

Getting Help

We use GitHub issues for tracking bugs and feature requests and have limited bandwidth to address them. Please use these community resources for getting help.

Contributing

We value feedback and contributions from our community. Whether it's a bug report, new feature, correction, or additional documentation, we welcome your issues and pull requests. Please read through this CONTRIBUTING document before submitting any issues or pull requests to ensure we have all the necessary information to effectively respond to your contribution.

Maintenance and Support for SDK Major Versions

Boto3 was made generally available on 06/22/2015 and is currently in the full support phase of the availability life cycle.

For information about maintenance and support for SDK major versions and their underlying dependencies, see the AWS SDKs and Tools Shared Configuration and Credentials Reference Guide.

More Resources

Comments
  • ResourceWarning: unclosed ssl.SSLSocket

    For some reason I'm getting a ResourceWarning about an unclosed socket, even when I'm specifically closing the socket myself. See the test case below:

    python3 -m unittest discover
    
    import sys
    import boto3
    import unittest
    
    BUCKET = ''
    KEY = ''
    
    
    def give_it_to_me():
        client = boto3.client('s3')
        obj = client.get_object(Bucket=BUCKET, Key=KEY)
        try:
            yield from iter(lambda: obj['Body'].read(1024), b'')
        finally:
            print('Im closing it!', file=sys.stderr, flush=True)
            obj['Body'].close()
    
    
    class TestSomeShit(unittest.TestCase):
        def test_it(self):
            res = give_it_to_me()
            for chunk in res:
                pass
            print('Done', file=sys.stderr, flush=True)
    

    Fill in any BUCKET and KEY to see the problem. Attaching my output below:

    Im closing it!
    test.py:22: ResourceWarning: unclosed <ssl.SSLSocket fd=7, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6, laddr=('...', 55498), raddr=('...', 443)>
      for chunk in res:
    Done
    .
    ----------------------------------------------------------------------
    Ran 1 test in 0.696s
    
    OK
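
    One pattern sometimes suggested for this (a hedged sketch, not an official fix) is to tie the body's lifetime to the generator with contextlib.closing, so close() runs even if iteration is abandoned:

    import contextlib
    import boto3

    def give_it_to_me(bucket, key):
        client = boto3.client('s3')
        obj = client.get_object(Bucket=bucket, Key=key)
        # closing() guarantees StreamingBody.close() runs even if the
        # consumer abandons the generator mid-iteration.
        with contextlib.closing(obj['Body']) as body:
            yield from iter(lambda: body.read(1024), b'')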
    
    feature-request needs-review 
    opened by LinusU 53
  • Connecting to SQS in docker after assume role/kubernetes IAM role not working

    Please fill out the sections below to help us address your issue.

    What issue did you see? (logs-from-kubernetes.txt attached.) When running inside Docker, I can't access the role assumed on my computer (or the IAM role on Kubernetes). From my computer it works fine: it finds the credential and config files. Creating an S3 client also works fine; this happens only with the SQS client.

    Steps to reproduce If you have a runnable example, please include it as a snippet or link to a repository/gist for larger code examples. Simple Python (3.7.4) code with boto3 (1.14.2), just creating a client for SQS:

    if __name__ == '__main__':
        boto3.set_stream_logger('')
        sqs = boto3.client('sqs')

    Debug logs Full stack trace, obtained by adding boto3.set_stream_logger('') to the code. Here is the local Docker output; the Kubernetes logs are attached as a file.

    2020-07-02 07:05:24,593 botocore.hooks [DEBUG] Changing event name from creating-client-class.iot-data to creating-client-class.iot-data-plane
    2020-07-02 07:05:24,597 botocore.hooks [DEBUG] Changing event name from before-call.apigateway to before-call.api-gateway
    2020-07-02 07:05:24,598 botocore.hooks [DEBUG] Changing event name from request-created.machinelearning.Predict to request-created.machine-learning.Predict
    2020-07-02 07:05:24,602 botocore.hooks [DEBUG] Changing event name from before-parameter-build.autoscaling.CreateLaunchConfiguration to before-parameter-build.auto-scaling.CreateLaunchConfiguration
    2020-07-02 07:05:24,602 botocore.hooks [DEBUG] Changing event name from before-parameter-build.route53 to before-parameter-build.route-53
    2020-07-02 07:05:24,604 botocore.hooks [DEBUG] Changing event name from request-created.cloudsearchdomain.Search to request-created.cloudsearch-domain.Search
    2020-07-02 07:05:24,605 botocore.hooks [DEBUG] Changing event name from docs.*.autoscaling.CreateLaunchConfiguration.complete-section to docs.*.auto-scaling.CreateLaunchConfiguration.complete-section
    2020-07-02 07:05:24,612 botocore.hooks [DEBUG] Changing event name from before-parameter-build.logs.CreateExportTask to before-parameter-build.cloudwatch-logs.CreateExportTask
    2020-07-02 07:05:24,613 botocore.hooks [DEBUG] Changing event name from docs.*.logs.CreateExportTask.complete-section to docs.*.cloudwatch-logs.CreateExportTask.complete-section
    2020-07-02 07:05:24,613 botocore.hooks [DEBUG] Changing event name from before-parameter-build.cloudsearchdomain.Search to before-parameter-build.cloudsearch-domain.Search
    2020-07-02 07:05:24,613 botocore.hooks [DEBUG] Changing event name from docs.*.cloudsearchdomain.Search.complete-section to docs.*.cloudsearch-domain.Search.complete-section
    2020-07-02 07:05:24,632 botocore.credentials [DEBUG] Looking for credentials via: env
    2020-07-02 07:05:24,632 botocore.credentials [DEBUG] Looking for credentials via: assume-role
    2020-07-02 07:05:24,632 botocore.credentials [DEBUG] Looking for credentials via: assume-role-with-web-identity
    2020-07-02 07:05:24,632 botocore.credentials [DEBUG] Looking for credentials via: sso
    2020-07-02 07:05:24,633 botocore.credentials [DEBUG] Looking for credentials via: shared-credentials-file
    2020-07-02 07:05:24,633 botocore.credentials [DEBUG] Looking for credentials via: custom-process
    2020-07-02 07:05:24,633 botocore.credentials [DEBUG] Looking for credentials via: config-file
    2020-07-02 07:05:24,633 botocore.credentials [DEBUG] Looking for credentials via: ec2-credentials-file
    2020-07-02 07:05:24,633 botocore.credentials [DEBUG] Looking for credentials via: boto-config
    2020-07-02 07:05:24,634 botocore.credentials [DEBUG] Looking for credentials via: container-role
    2020-07-02 07:05:24,634 botocore.credentials [DEBUG] Looking for credentials via: iam-role
    2020-07-02 07:05:24,635 urllib3.connectionpool [DEBUG] Starting new HTTP connection (1): 169.254.169.254:80
    2020-07-02 07:05:25,646 urllib3.connectionpool [DEBUG] Starting new HTTP connection (2): 169.254.169.254:80
    2020-07-02 07:05:26,660 botocore.utils [DEBUG] Caught retryable HTTP exception while making metadata service request to http://169.254.169.254/latest/meta-data/iam/security-credentials/: Read timeout on endpoint URL: "http://169.254.169.254/latest/meta-data/iam/security-credentials/"
    Traceback (most recent call last):
      File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 426, in _make_request
        six.raise_from(e, None)
      File "<string>", line 3, in raise_from
      File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 421, in _make_request
        httplib_response = conn.getresponse()
      File "/usr/local/lib/python3.7/http/client.py", line 1336, in getresponse
        response.begin()
      File "/usr/local/lib/python3.7/http/client.py", line 306, in begin
        version, status, reason = self._read_status()
      File "/usr/local/lib/python3.7/http/client.py", line 267, in _read_status
        line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
      File "/usr/local/lib/python3.7/socket.py", line 589, in readinto
        return self._sock.recv_into(b)
    socket.timeout: timed out
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/usr/local/lib/python3.7/site-packages/botocore/httpsession.py", line 263, in send
        chunked=self._chunked(request.headers),
      File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 725, in urlopen
        method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
      File "/usr/local/lib/python3.7/site-packages/urllib3/util/retry.py", line 379, in increment
        raise six.reraise(type(error), error, _stacktrace)
      File "/usr/local/lib/python3.7/site-packages/urllib3/packages/six.py", line 735, in reraise
        raise value
      File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 677, in urlopen
        chunked=chunked,
      File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 428, in _make_request
        self._raise_timeout(err=e, url=url, timeout_value=read_timeout)
      File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 336, in _raise_timeout
        self, url, "Read timed out. (read timeout=%s)" % timeout_value
    urllib3.exceptions.ReadTimeoutError: AWSHTTPConnectionPool(host='169.254.169.254', port=80): Read timed out. (read timeout=1)
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/usr/local/lib/python3.7/site-packages/botocore/utils.py", line 342, in _get_request
        response = self._session.send(request.prepare())
      File "/usr/local/lib/python3.7/site-packages/botocore/httpsession.py", line 289, in send
        raise ReadTimeoutError(endpoint_url=request.url, error=e)
    botocore.exceptions.ReadTimeoutError: Read timeout on endpoint URL: "http://169.254.169.254/latest/meta-data/iam/security-credentials/"
    2020-07-02 07:05:26,669 botocore.utils [DEBUG] Max number of attempts exceeded (1) when attempting to retrieve data from metadata service.
    2020-07-02 07:05:26,671 botocore.loaders [DEBUG] Loading JSON file: /usr/local/lib/python3.7/site-packages/botocore/data/endpoints.json
    2020-07-02 07:05:26,681 botocore.hooks [DEBUG] Event choose-service-name: calling handler <function handle_service_name_alias at 0x7f503ec53b00>
    2020-07-02 07:05:26,696 botocore.loaders [DEBUG] Loading JSON file: /usr/local/lib/python3.7/site-packages/botocore/data/sqs/2012-11-05/service-2.json
    2020-07-02 07:05:26,701 botocore.hooks [DEBUG] Event creating-client-class.sqs: calling handler <function add_generate_presigned_url at 0x7f503eca0f80>
    Traceback (most recent call last):
      File "EnrichmentWorkerService.py", line 88, in <module>
        sqs = boto3.client('sqs')
      File "/usr/local/lib/python3.7/site-packages/boto3/__init__.py", line 91, in client
        return _get_default_session().client(*args, **kwargs)
      File "/usr/local/lib/python3.7/site-packages/boto3/session.py", line 263, in client
        aws_session_token=aws_session_token, config=config)
      File "/usr/local/lib/python3.7/site-packages/botocore/session.py", line 835, in create_client
        client_config=config, api_version=api_version)
      File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 85, in create_client
        verify, credentials, scoped_config, client_config, endpoint_bridge)
      File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 287, in _get_client_args
        verify, credentials, scoped_config, client_config, endpoint_bridge)
      File "/usr/local/lib/python3.7/site-packages/botocore/args.py", line 73, in get_client_args
        endpoint_url, is_secure, scoped_config)
      File "/usr/local/lib/python3.7/site-packages/botocore/args.py", line 153, in compute_client_args
        s3_config=s3_config,
      File "/usr/local/lib/python3.7/site-packages/botocore/args.py", line 218, in _compute_endpoint_config
        return self._resolve_endpoint(**resolve_endpoint_kwargs)
      File "/usr/local/lib/python3.7/site-packages/botocore/args.py", line 301, in _resolve_endpoint
        service_name, region_name, endpoint_url, is_secure)
      File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 361, in resolve
        service_name, region_name)
      File "/usr/local/lib/python3.7/site-packages/botocore/regions.py", line 134, in construct_endpoint
        partition, service_name, region_name)
      File "/usr/local/lib/python3.7/site-packages/botocore/regions.py", line 148, in _endpoint_for_partition
        raise NoRegionError()
    botocore.exceptions.NoRegionError: You must specify a region.
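
    For reference, NoRegionError only means no region could be resolved from the environment or config files; a minimal sketch of making it explicit (the region name is a placeholder):

    import boto3

    # Either pass the region explicitly...
    sqs = boto3.client('sqs', region_name='us-east-1')
    # ...or export AWS_DEFAULT_REGION=us-east-1 in the container environment.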
    
    
    guidance 
    opened by eldarnegrinperion 49
  • ImportError: cannot import name 'docevents' release 1.15.0

    Describe the bug The aws help command is not functioning in release https://github.com/boto/boto3/releases/tag/1.15.0. I am using the miniconda3 Python runtime environment with the Python implementation of the AWS CLI. The error originally occurred on our CI pipeline.

    Steps to reproduce

    • pip install "boto3==1.15.0"
    • aws help

      Traceback (most recent call last):
        File "C:\Users\X\Miniconda3\Scripts\aws.cmd", line 50, in <module>
          import awscli.clidriver
        File "C:\Users\X\Miniconda3\lib\site-packages\awscli\clidriver.py", line 36, in <module>
          from awscli.help import ProviderHelpCommand
        File "C:\Users\X\Miniconda3\lib\site-packages\awscli\help.py", line 23, in <module>
          from botocore.docs.bcdoc import docevents
      ImportError: cannot import name 'docevents'

    Expected behavior aws help is displayed, as it is when boto3 1.14.63 is installed.

    opened by bruce-lindsay 47
  • Support AWS Athena waiter feature

    Hi,

    If you go to https://boto3.readthedocs.io/en/latest/reference/services/athena.html#Athena.Client.get_waiter, it looks like that feature is not implemented.

    I have a Lambda function which executes Athena queries. I use start_query_execution() in boto3 and need to write a loop to check whether the execution has finished (see the sketch below), so it would be great to have the waiter feature implemented for Athena.

    Thanks
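
    For reference, the hand-rolled loop described above usually looks something like this hedged sketch (the query string, output location, and poll interval are placeholders):

    import time
    import boto3

    athena = boto3.client('athena')
    execution = athena.start_query_execution(
        QueryString='SELECT 1',
        ResultConfiguration={'OutputLocation': 's3://my-results-bucket/'},  # placeholder
    )
    query_id = execution['QueryExecutionId']

    # Hand-rolled "waiter": poll until the query reaches a terminal state.
    while True:
        status = athena.get_query_execution(QueryExecutionId=query_id)
        state = status['QueryExecution']['Status']['State']
        if state in ('SUCCEEDED', 'FAILED', 'CANCELLED'):
            break
        time.sleep(1)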

    feature-request waiters 
    opened by xysr89 41
  • Add explanation on how to catch boto3 exceptions

    The problem I have with the boto3 documentation can be found here: https://stackoverflow.com/questions/46174385/properly-catch-boto3-errors

    Am I doing this right? Or what is best practice when dealing with boto3 exceptions? Can this be added to the wiki?
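
    For reference, the pattern discussed in that Stack Overflow thread generally looks like this (a sketch; the operation and error code checked are examples):

    import boto3
    from botocore.exceptions import ClientError

    client = boto3.client('s3')
    try:
        client.head_bucket(Bucket='some-bucket')
    except ClientError as e:
        # Service errors carry a structured code in e.response.
        if e.response['Error']['Code'] == '404':
            print('Bucket not found')
        else:
            raise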

    documentation feature-request 
    opened by schumannd 38
  • How to Use botocore.response.StreamingBody as stdin PIPE

    I want to pipe large video files from AWS S3 into Popen's stdin. This code runs as an AWS Lambda function, so these files won't fit in memory or on the local file system. Also, I don't want to copy these huge files anywhere, I just want to stream the input, process on the fly, and stream the output. I've already got the processing and streaming output bits working. The problem is how to obtain an input stream as a Popen pipe.

    I can access a file in an S3 bucket:

    import boto3
    s3 = boto3.resource('s3')
    response = s3.Object(bucket_name=bucket, key=key).get()
    body = response['Body']  
    

    body is a botocore.response.StreamingBody. I intend to use body something like this:

    from subprocess import Popen, PIPE
    Popen(cmd, stdin=PIPE, stdout=PIPE).communicate(input=body)[0]
    

    But of course body needs to be converted into a file-like object. The question is how?
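
    One way to bridge the two, sketched under the assumption that chunked writes are acceptable: feed the StreamingBody into stdin manually instead of using communicate(), reusing cmd and body from above:

    from subprocess import Popen, PIPE

    proc = Popen(cmd, stdin=PIPE, stdout=PIPE)
    # Stream the S3 body into the subprocess without buffering it in memory.
    # For large outputs, drain proc.stdout from a separate thread to avoid deadlock.
    for chunk in iter(lambda: body.read(64 * 1024), b''):
        proc.stdin.write(chunk)
    proc.stdin.close()
    output = proc.stdout.read()
    proc.wait()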

    opened by mslinn 38
  • Initial commit of S3 upload_file/download_file

    This PR adds support for an intelligent upload_file/download_file method for boto3.

    The module docstring provides some general information and an overview of how to use the module.

    I'd like to get some initial feedback on this. There are unit/integration tests added (there are a few integration tests I haven't fleshed out yet), and the code is fully functional, but I will be pushing some changes in a bit.

    There are two changes I plan on making:

    • I'm going to be changing the logic for the _download_range function. The single lock writer on the file unnecessarily slows down the parallel downloads. I'm likely going to port some version of what the AWS CLI does to improve this.
    • The callback interface may need to change. It requires a lot of information that's not technically necessary and could be provided. It also doesn't handle retries. In order to do this, I might need to change the interface from a simple callback to an actual class that has a few required methods.

    Also, the socket timeouts and bandwidth throttling are not implemented. Those are stretch features I might end up deferring for now.

    There will also be another pull request that integrates this with the s3 client and s3 resource objects.

    cc @kyleknap @danielgtaylor

    enhancement 
    opened by jamesls 34
  • Upload or put object in S3 failing silently

    I've been trying to upload files from a local folder into folders on S3 using Boto3, and it's failing more or less silently, with no indication of why the upload isn't happening.

    key_name = folder + '/'
    s3_connect = boto3.client('s3', s3_bucket_region,)
    # upload File to S3
    for filename in os.listdir(folder):
        s3_name = key_name + filename
        print folder, filename, key_name, s3_name
        upload = s3_connect.upload_file(
            s3_name, s3_bucket, key_name,
        )
    

    Printing upload just says "None", with no other information. No upload happens. I've also tried using put_object:

    put_obj = s3_connect.put_object(
        Bucket=s3_bucket,
        Key=key_name,
        Body=s3_name,
    )
    

    and I get an HTTP response code of 200 - but no files upload.

    First, I'd love to solve this problem, but second, it seems this isn't the right behavior - if an upload doesn't happen, there should be a bit more information about why (although I imagine this might be a limitation of the API?)
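
    Worth noting as a hedged observation: upload_file's signature is upload_file(Filename, Bucket, Key), so in the snippet above the local path and the S3 key appear swapped, and put_object's Body=s3_name uploads the literal string rather than the file contents. A corrected sketch reusing the snippet's variable names:

    # Signature: upload_file(Filename, Bucket, Key)
    local_path = os.path.join(folder, filename)  # the local file to send
    s3_connect.upload_file(local_path, s3_bucket, s3_name)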

    s3 
    opened by maxpearl 32
  • Strange behavior when trying to create an S3 bucket in us-east-1

    Version info:
    boto3 = 0.0.19 (from pip)
    botocore = 1.0.0b1 (from pip)
    Python = 2.7.9 (from Fedora 22)

    I have no problem creating S3 buckets in us-west-1 or us-west-2, but specifying us-east-1 gives InvalidLocationConstraint:

    >>> conn = boto3.client("s3")
    >>> conn.create_bucket(
        Bucket='testing123-blah-blah-blalalala', 
        CreateBucketConfiguration={'LocationConstraint': "us-east-1"})
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/usr/lib/python2.7/site-packages/botocore/client.py", line 200, in _api_call
        return self._make_api_call(operation_name, kwargs)
      File "/usr/lib/python2.7/site-packages/botocore/client.py", line 255, in _make_api_call
        raise ClientError(parsed_response, operation_name)
    botocore.exceptions.ClientError: An error occurred (InvalidLocationConstraint) when calling the CreateBucket operation: The specified location-constraint is not valid
    

    Also, trying with an S3 client connected directly to us-east-1:

    >>> conn = boto3.client("s3", region_name="us-east-1")
    >>> conn.create_bucket(Bucket='testing123-blah-blah-blalalala', CreateBucketConfiguration={'LocationConstraint': "us-east-1"})
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/usr/lib/python2.7/site-packages/botocore/client.py", line 200, in _api_call
        return self._make_api_call(operation_name, kwargs)
      File "/usr/lib/python2.7/site-packages/botocore/client.py", line 255, in _make_api_call
        raise ClientError(parsed_response, operation_name)
    botocore.exceptions.ClientError: An error occurred (InvalidLocationConstraint) when calling the CreateBucket operation: The specified location-constraint is not valid
    

    When I do not specify a region, the bucket is created in us-east-1 (verified in the web console):

    >>> conn.create_bucket(Bucket='testing123-blah-blah-blalalala')
    {u'Location': '/testing123-blah-blah-blalalala', 'ResponseMetadata': {'HTTPStatusCode': 200, 'HostId': 'Qq2CqKPm4PhADUJ8X+ngxxEE3yRrsT3DOS4TefgzUpYBKzQO/62cQy20yPa1zs7l', 'RequestId': '06B36B1D8B1213C8'}}
    

    ...but the bucket returns None for LocationConstraint:

    >>> conn.get_bucket_location(Bucket='testing123-blah-blah-blalalala')
    {'LocationConstraint': None, 'ResponseMetadata': {'HTTPStatusCode': 200, 'HostId': 'nBGHNu30A/m/RymzuoHLiE2uWuzCsz3v1mcov324r2sMYX7ANq1jOIR0XphWiUIAxDwmxTOW8eA=', 'RequestId': '53A539CC4BCA08C4'}}
    

    us-east-1 is listed as a valid region when I enumerate the regions:

    >>> conn = boto3.client("ec2", region_name="us-east-1")
    >>> [x["RegionName"] for x in conn.describe_regions()["Regions"]]
    ['eu-central-1', 'sa-east-1', 'ap-northeast-1', 'eu-west-1', 'us-east-1', 'us-west-1', 'us-west-2', 'ap-southeast-2', 'ap-southeast-1']
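
    For context: us-east-1 is S3's default location and cannot be passed as a LocationConstraint; the usual pattern (a sketch; region and bucket_name are placeholders) is to omit CreateBucketConfiguration for that one region:

    import boto3

    conn = boto3.client('s3', region_name=region)
    if region == 'us-east-1':
        # us-east-1 is the default; passing it as a LocationConstraint is rejected.
        conn.create_bucket(Bucket=bucket_name)
    else:
        conn.create_bucket(
            Bucket=bucket_name,
            CreateBucketConfiguration={'LocationConstraint': region},
        )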
    
    needs-discussion 
    opened by ghost 32
  • Hang in s3.download_file with Celery worker in version 1.4.0

    I've been using lots of boto3 calls in my Flask app for some time, but the switch to the latest boto3 v1.4.0 has broken my Celery workers. Something that may be unique about my app is that I use S3 to download a secure environment variables file before launching my app or workers. It appears that the new boto3 works with my app, but hangs when launching the Celery worker.

    I would temporarily downgrade my boto3 to avoid the problem, but it's been a long time since the last release, and I need the elbv2 support that only comes in 1.4.0.

    I've created a tiny version of my worker (worker2.py) to demonstrate the problem. I've verified that using the previous version boto3 1.3.1 results in the worker launching properly. I see all prints and the Celery worker banner output.

    If I install boto3 1.4.0, then the second print() statement "Download complete" is never reached. Also note that I tried following the new doc example with boto3.resource and using s3.meta.client, but that fails as well.

    #
    # Stub Celery worker to demonstrate bug in Boto3 1.4.0. Works fine with previous version Boto3 1.3.1.
    # Test with: celery worker -A worker2.celery
    #
    from flask import Flask
    from celery import Celery
    import boto3
    import tempfile
    
    celery = Celery(__name__, broker='amqp://guest:guest@localhost:5672//')
    
    app = Flask(__name__)
    
    s3 = boto3.client('s3', region_name='us-west-1')
    env_file = 'APPNAME.APPSTAGE.env'
    with tempfile.NamedTemporaryFile() as s3_file:
        print("Downloading file...")
        response = s3.download_file('APPBUCKET', env_file, s3_file.name)
        print("Download complete!")
    

    You can test it by running the following at the command line:

    celery worker -A worker2.celery
    

    Also note that just running the code downloads the file just fine with 1.4.0:

    python worker2.py
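
    One mitigation sometimes suggested for hangs like this (hedged; not confirmed as the root cause here) is to disable the threaded transfer manager introduced in 1.4.0, reusing the names from worker2.py:

    from boto3.s3.transfer import TransferConfig

    # use_threads=False keeps download_file single-threaded, avoiding
    # interactions between transfer threads and the worker's process model.
    response = s3.download_file('APPBUCKET', env_file, s3_file.name,
                                Config=TransferConfig(use_threads=False))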
    
    opened by dmulter 30
  • KeyError: 'endpoint_resolver'

    Hi,

    I sometimes get that error when trying to call a Lambda function (boto3==1.3.1):

    def lambda_execute(payload):
        import boto3
        client = boto3.client('lambda', aws_access_key_id=KEY, aws_secret_access_key=SECRET, region_name=REGION)
        client.invoke(**payload)
    

    payload is in this format:

    {'FunctionName': fct, 'InvocationType': 'Event', 'LogType': 'None', 'Payload': simplejson.dumps(payload, default=encode_model)}
    

    The error seems to come from get_component in botocore/session.py.

    Can you help?
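
    For what it's worth, this KeyError has often been associated with sharing the implicit default session across threads; a hedged sketch of the usual workaround, giving each caller its own Session (KEY/SECRET/REGION as in the snippet above):

    import boto3

    def lambda_execute(payload):
        # A dedicated Session avoids racing on the default session's components.
        session = boto3.session.Session(
            aws_access_key_id=KEY,
            aws_secret_access_key=SECRET,
            region_name=REGION,
        )
        client = session.client('lambda')
        client.invoke(**payload)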

    question guidance closed-for-staleness 
    opened by CosmicAnalogue465 30
  • recommended method for slowing down speech does not work

    Describe the bug

    In the documentation listed here

    https://docs.aws.amazon.com/polly/latest/dg/voice-speed-vip.html

    it says to use the following tags to slow speech down

    In some cases, it might help your audience to slow the speaking rate slightly to aid in comprehension.

    However, it does not work. The computer actually says 'speak' and 'prosody rate' aloud.

    My code is as follows, where s is the string I'm trying to process. I can't use < > on GitHub because they're special characters, so in place of < I will use [:

    s = f'[speak][prosody rate="90%"]{s}[/prosody][/speak]'

    I would attach my file but .mp3 are not accepted.
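
    A likely cause, assuming synthesize_speech is being called with defaults (the calling code isn't shown): SSML is only interpreted when TextType='ssml' is passed; otherwise the tags themselves are read aloud. A sketch with real SSML tags:

    import boto3

    polly = boto3.client('polly')
    ssml = f'<speak><prosody rate="90%">{s}</prosody></speak>'  # s as defined above
    response = polly.synthesize_speech(
        Text=ssml,
        TextType='ssml',   # without this, 'speak' and 'prosody' are spoken as words
        OutputFormat='mp3',
        VoiceId='Joanna',  # placeholder voice
    )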

    Expected Behavior

    Output speech text without saying 'speak' and 'prosody'

    Current Behavior

    Outputs the speech text but also says 'speak' and 'prosody' aloud.

    Reproduction Steps

    see above.

    Possible Solution

    No response

    Additional Information/Context

    No response

    SDK version used

    don't know

    Environment details (OS name and version, etc.)

    Mac OS 12.2 Python 3.8

    response-requested polly 
    opened by kylefoley76 5
  • Garbage collection using `gc.collect()` is not showing any effect

    Describe the bug

    When we invoke boto3 client methods multiple times (e.g. running in a loop n times), memory accumulates with each iteration. Even calling gc.collect() shows no effect.

    Expected Behavior

    1. Garbage collection should happen properly; all the unused resources should be removed

    Current Behavior

    If we run some boto3 code in a loop n times, memory accumulates with each iteration and gc.collect() does not release the unused memory. At the end of the program gc.collect() returns 0 unreachable objects, but this also doesn't show any change in memory usage.

    Reproduction Steps

    import gc
    import os

    import boto3
    import psutil  # needed by get_memory_usage(); missing from the original snippet
    
    gc.set_debug(gc.DEBUG_UNCOLLECTABLE)
    
    boto3.set_stream_logger('')
    
    def get_memory_usage():
        return psutil.Process(os.getpid()).memory_info().rss // 1024 ** 2
    
    
    def test():
        queue_url = 'https://us-east-2.queue.amazonaws.com/916470431480/test.fifo'
        sqs = boto3.client('sqs')
        for i in range(10):
            message = sqs.receive_message(QueueUrl=queue_url)
        if message.get('Messages'):
                print(message)
                recept_handle = message['Messages'][0]['ReceiptHandle']
                sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=recept_handle)
    
            print(f'Iteration - {i + 1} Unreachable Objects: {gc.collect()} and length: {len(gc.garbage)}')
            print(f'Memory usage After: {get_memory_usage()}mb')
    
    
    for _ in range(5):
        print(f'Memory usage Before: {get_memory_usage()}mb')
        test()
        print(f'==================Unreachable Objects: {gc.collect()}==================')
        print(len(gc.garbage))
        print(f'Memory usage After: {get_memory_usage()}mb')
    
        print('\n' * 5)
    

    The issue can be reproduced by running the sample code above.
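
    As an aside, one hedged comparison worth trying: create the client once and reuse it across calls, since repeatedly constructing clients is itself a known source of growth in long-running processes:

    import boto3

    # Created once and reused; boto3 clients are intended to be long-lived.
    sqs = boto3.client('sqs')

    def test(sqs, queue_url):
        for _ in range(10):
            sqs.receive_message(QueueUrl=queue_url)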

    Possible Solution

    No response

    Additional Information/Context

    Logs: log.txt

    SDK version used

    1.26.37

    Environment details (OS name and version, etc.)

    Linux 5.15.84-1-MANJARO

    sqs 
    opened by sudouser777 7
  • list_objects response does not decode prefix

    Describe the bug

    Hi, I've noticed that in the response of list_objects with a prefix, the client does not decode the prefix. For example, we have these two tests:

    @attr(resource='bucket')
    @attr(method='get')
    @attr(operation='list under prefix')
    @attr(assertion='returns only objects under prefix')
    @attr('fails_on_dbstore')
    def test_bucket_list_prefix_basic():
        key_names = ['foo/bar', 'foo/baz', 'quux']
        bucket_name = _create_objects(keys=key_names)
        client = get_client()
    
        response = client.list_objects(Bucket=bucket_name, Prefix='foo/')
        eq(response['Prefix'], 'foo/')
    
        keys = _get_keys(response)
        prefixes = _get_prefixes(response)
        eq(keys, ['foo/bar', 'foo/baz'])
        eq(prefixes, [])
    
    @attr(resource='bucket')
    @attr(method='get')
    @attr(operation='list under prefix with list-objects-v2')
    @attr(assertion='returns only objects under prefix')
    @attr('list-objects-v2')
    @attr('fails_on_dbstore')
    def test_bucket_listv2_prefix_basic():
        key_names = ['foo/bar', 'foo/baz', 'quux']
        bucket_name = _create_objects(keys=key_names)
        client = get_client()
    
        response = client.list_objects_v2(Bucket=bucket_name, Prefix='foo/')
        eq(response['Prefix'], 'foo/')
    
        keys = _get_keys(response)
        prefixes = _get_prefixes(response)
        eq(keys, ['foo/bar', 'foo/baz'])
        eq(prefixes, [])
    

    The only difference between the two tests is that the first uses list_objects while the second uses list_objects_v2 (v2 of list objects) - those tests are from the Ceph S3 tests project. I printed the response in both cases and noticed that the prefix field is not decoded back in the first response:

    
    {'ResponseMetadata': {'RequestId': 'lbt12buh-6t7zlf-j0f', 'HostId': 'lbt12buh-6t7zlf-j0f', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amz-request-id': 'lbt12buh-6t7zlf-j0f', 'x-amz-id-2': 'lbt12buh-6t7zlf-j0f', 'access-control-allow-origin': '*', 'access-control-allow-credentials': 'true', 'access-control-allow-methods': 'GET,POST,PUT,DELETE,OPTIONS', 'access-control-allow-headers': 'Content-Type,Content-MD5,Authorization,X-Amz-User-Agent,X-Amz-Date,ETag,X-Amz-Content-Sha256', 'access-control-expose-headers': 'ETag,X-Amz-Version-Id', 'content-type': 'application/xml', 'content-length': '818', 'date': 'Sun, 18 Dec 2022 07:09:19 GMT', 'connection': 'keep-alive', 'keep-alive': 'timeout=5'}, 'RetryAttempts': 0}, 'IsTruncated': False, 'Marker': '', 'Contents': [{'Key': 'foo/bar', 'LastModified': datetime.datetime(2022, 12, 18, 7, 9, 18, tzinfo=tzlocal()), 'ETag': '"82d0f0fa8551de8b7eb5ecb65eae0261"', 'Size': 7, 'StorageClass': 'STANDARD', 'Owner': {'DisplayName': 'NooBaa', 'ID': '123'}}, {'Key': 'foo/baz', 'LastModified': datetime.datetime(2022, 12, 18, 7, 9, 19, tzinfo=tzlocal()), 'ETag': '"2b92cb3da20fd0dd9b62b614dbcbe9b3"', 'Size': 7, 'StorageClass': 'STANDARD', 'Owner': {'DisplayName': 'NooBaa', 'ID': '123'}}], 'Name': 'ceph-kdwa0al2iozquphj5vjsl42h-1', 'Prefix': 'foo%2F', 'MaxKeys': 1000, 'EncodingType': 'url'}
    
    
    
    {'ResponseMetadata': {'RequestId': 'lbt1ap8q-c0jr0-vk8', 'HostId': 'lbt1ap8q-c0jr0-vk8', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amz-request-id': 'lbt1ap8q-c0jr0-vk8', 'x-amz-id-2': 'lbt1ap8q-c0jr0-vk8', 'access-control-allow-origin': '*', 'access-control-allow-credentials': 'true', 'access-control-allow-methods': 'GET,POST,PUT,DELETE,OPTIONS', 'access-control-allow-headers': 'Content-Type,Content-MD5,Authorization,X-Amz-User-Agent,X-Amz-Date,ETag,X-Amz-Content-Sha256', 'access-control-expose-headers': 'ETag,X-Amz-Version-Id', 'content-type': 'application/xml', 'content-length': '703', 'date': 'Sun, 18 Dec 2022 07:15:49 GMT', 'connection': 'keep-alive', 'keep-alive': 'timeout=5'}, 'RetryAttempts': 0}, 'IsTruncated': False, 'Contents': [{'Key': 'foo/bar', 'LastModified': datetime.datetime(2022, 12, 18, 7, 15, 49, tzinfo=tzlocal()), 'ETag': '"82d0f0fa8551de8b7eb5ecb65eae0261"', 'Size': 7, 'StorageClass': 'STANDARD'}, {'Key': 'foo/baz', 'LastModified': datetime.datetime(2022, 12, 18, 7, 15, 49, tzinfo=tzlocal()), 'ETag': '"2b92cb3da20fd0dd9b62b614dbcbe9b3"', 'Size': 7, 'StorageClass': 'STANDARD'}], 'Name': 'ceph-5s1koiijx2gac6rn7jvy3cr3-1', 'Prefix': 'foo/', 'MaxKeys': 1000, 'EncodingType': 'url', 'KeyCount': 2}
    
    

    See the difference between 'Prefix': 'foo%2F' (not decoded back) and 'Prefix': 'foo/' (decoded)
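
    As a hedged client-side workaround until this is addressed, the prefix can be decoded manually whenever the response echoes EncodingType='url' (sketch reusing the test's client):

    from urllib.parse import unquote

    response = client.list_objects(Bucket=bucket_name, Prefix='foo/')
    prefix = response.get('Prefix', '')
    if response.get('EncodingType') == 'url':
        prefix = unquote(prefix)  # 'foo%2F' -> 'foo/'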

    Expected Behavior

    To see the prefix in the response decoded, which means: 'Prefix': 'foo/'

    Current Behavior

    prefix is not decoded in the client: 'Prefix': 'foo%2F'

    Reproduction Steps

    you can use the 2 tests as described above.

    Possible Solution

    The prefix needs to be decoded in the client in list_objects (as it is decoded in list_objects_v2).

    Additional Information/Context

    I was not sure how the last two questions help, but I answered them anyway: SDK version used - server side (the issue is about the boto3 client). Environment details (OS name and version, etc.) - my station.

    SDK version used

    2 server 3 client - boto3

    Environment details (OS name and version, etc.)

    MacOS 12.6.1

    s3 
    opened by shirady 2
  • S3 resource meta client copy tagging behaviour not documented

    Describe the issue

    When using the meta client to copy files from one bucket to another, tags are only copied when the file size of the object is less than 8MB.

    Looking through the documentation, there is nothing mentioned about the tagging behaviour when using this method. Would it be possible to update the documentation to include this behaviour?

    Find below the code snippet that I used to copy the object.

    import boto3
    s3_resource = boto3.resource('s3')
    key = "test.txt"
    copy_source = {
        "Bucket": ORIGIN_BUCKET,
        "Key": key
    }
    
    s3_resource.meta.client.copy(copy_source, DESTINATION_BUCKET, key)
    

    Links

    Link to the description of the method in the docs: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html#S3.Client.copy

    Link to the file that led me to test the behaviour with 7MB, 8MB and 9MB files, with only the 7MB file having the tags present on the object in the destination bucket: https://github.com/boto/boto3/blob/develop/boto3/s3/transfer.py#L170
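
    Until the docs cover this, a hedged workaround is to re-apply the tags after the copy, since multipart copies above the threshold linked above don't carry them (sketch reusing the snippet's names):

    # Read the tags off the source object and re-apply them to the copy.
    tags = s3_resource.meta.client.get_object_tagging(
        Bucket=ORIGIN_BUCKET, Key=key)['TagSet']
    s3_resource.meta.client.put_object_tagging(
        Bucket=DESTINATION_BUCKET,
        Key=key,
        Tagging={'TagSet': tags},
    )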

    documentation s3 resources p3 
    opened by GCHQDeveloper9491 1
  • ObjectVersion filter method does not respect MaxKeys

    Describe the bug

    When calling filter(...) with a value for MaxKeys under the limit of 1000, the response includes more items than requested.

    Expected Behavior

    The documentation for the MaxKeys parameter states:

    MaxKeys (integer) -- Sets the maximum number of keys returned in the response. By default the action returns up to 1,000 key names. The response might contain fewer keys but will never contain more. If additional keys satisfy the search criteria, but were not returned because max-keys was exceeded, the response contains <isTruncated>true</isTruncated>. To return the additional keys, see key-marker and version-id-marker.
    

    I would expect that if MaxKeys is set and the filter criteria applies to more items, that the response would be limited to the number of requested keys and that the response would include the truncation value as called out in the docs.

    Alternatively, if there are reasons why the API supports it but boto3 does not, I would expect that the boto3 documentation would explain the delta (i.e. "This is deprecated" or "Don't use this, it doesn't do what you expect")

    Current Behavior

    The call to filter(...) returns all items with the given prefix regardless of the value provided for MaxKeys

    Reproduction Steps

    import boto3
    s3_rsrc = boto3.resource('s3')
    
    # Not actual bucket, but can provide via secure channel if required
    # Bucket prefix combo previously initialized with 79 items under the prefix,
    # Each item has ~10 versions
    bucket_name = 'dataset-abc-123456789012-us-west-2'
    prefix = 'AAQICCAV'
    bucket = s3_rsrc.Bucket(bucket_name)
    kwargs = {'Prefix': prefix, 'MaxKeys': 10}
    
    resp = bucket.object_versions.filter(**kwargs)
    
    item_count = 0
    key_count = 0
    last_key = None
    for item in resp:
        if item.key != last_key:
            key_count += 1
            last_key = item.key
        item_count += 1
    
    print(f"Item count: {item_count}, Key count: {key_count}")
    

    Output:

    Item count: 783, Key count: 79
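
    For context, resource collections paginate automatically, so MaxKeys only caps each underlying page, not the total number of items iterated. A hedged sketch of capping the total with the collection's limit() method, reusing the variables above:

    resp = bucket.object_versions.filter(Prefix=prefix).limit(10)
    for item in resp:
        print(item.key, item.id)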
    

    Possible Solution

    No response

    Additional Information/Context

    No response

    SDK version used

    1.26.24

    Environment details (OS name and version, etc.)

    MacOS 12.6.1, Python 3.10.8

    bug s3 resources 
    opened by corey-cole 2
Releases (0.0.14)
  • 0.0.14(Apr 9, 2015)

    • feature:Resources: Update to the latest resource models for:
    • AWS CloudFormation
    • Amazon EC2
    • AWS IAM
    • feature:Amazon S3: Add an upload_file and download_file to S3 clients that transparently handle parallel multipart transfers.
    • feature:Botocore: Update to Botocore 0.102.0.
    • Add support for Amazon Machine Learning.
    • Add support for Amazon Workspaces.
    • Update requests to 2.6.0.
    • Update AWS Lambda to the latest API.
    • Update Amazon EC2 Container Service to the latest API.
    • Update Amazon S3 to the latest API.
    • Add DBSnapshotCompleted support to Amazon RDS waiters.
    • Fixes for the REST-JSON protocol.
    Source code(tar.gz)
    Source code(zip)
  • 0.0.13(Apr 3, 2015)

    • feature:Botocore: Update to Botocore 0.100.0.
    • Update AWS CodeDeploy to the latest service API.
    • Update Amazon RDS to support the describe_certificates service operation.
    • Update Amazon Elastic Transcoder to support PlayReady DRM.
    • Update Amazon EC2 to support D2 instance types.
    Source code(tar.gz)
    Source code(zip)
  • 0.0.12(Mar 27, 2015)

    • feature:Resources: Add the ability to load resource data from a has relationship. This saves a call to load when available, and otherwise fixes a problem where there was no way to get at certain resource data. (issue 74,
    • feature:Botocore: Update to Botocore 0.99.0
    • Update service models for Amazon Elastic Transcoder, AWS IAM and AWS OpsWorks to the latest versions.
    • Add deprecation warnings for old interface.
    Source code(tar.gz)
    Source code(zip)
  • 0.0.11(Mar 24, 2015)

    • feature:Resources: Add Amazon EC2 support for ClassicLink actions and add a delete action to EC2 Volume resources.
    • feature:Resources: Add a load operation and user reference to AWS IAM's CurrentUser resource. (issue 72,
    • feature:Resources: Add resources for AWS IAM managed policies. (issue 71)
    • feature:Botocore: Update to Botocore 0.97.0
    • Add new Amazon EC2 waiters.
    • Add support for Amazon S3 cross region replication.
    • Fix an issue where empty config values could not be specified for Amazon S3's bucket notifications. (botocore issue 495)
    • Update Amazon CloudWatch Logs to the latest API.
    • Update Amazon Elastic Transcoder to the latest API.
    • Update AWS CloudTrail to the latest API.
    • Fix bug where explicitly passed profile_name will now override any access and secret keys set in environment variables. (botocore issue 486)
    • Add endpoint_url to client.meta.
    • Better error messages for invalid regions.
    • Fix creating clients with unicode service name.
    Source code(tar.gz)
    Source code(zip)
  • 0.0.10(Mar 24, 2015)

    • bugfix:Documentation: Name collisions are now handled at the resource model layer instead of the factory, meaning that the documentation now uses the correct names. (issue 67)
    • feature:Session: Add a region_name option when creating a session. (issue 69, issue 21)
    • feature:Botocore: Update to Botocore 0.94.0
    • Update to the latest Amazon CloudSearch API.
    • Add support for near-realtime data updates and exporting historical data from Amazon Cognito Sync.
    • Removed the ability to clone a low-level client. Instead, create a new client with the same parameters.
    • Add support for URL paths in an endpoint URL.
    • Multithreading signature fixes.
    • Add support for listing hosted zones by name and getting hosted zone counts from Amazon Route53.
    • Add support for tagging to AWS Data Pipeline.
    Source code(tar.gz)
    Source code(zip)
  • 0.0.9(Feb 20, 2015)

    • feature:Botocore: Update to Botocore 0.92.0
    • Add support for the latest Amazon EC2 Container Service API.
    • Allow calling AWS STS assume_role_with_saml without credentials.
    • Update to latest Amazon CloudFront API
    • Add support for AWS STS regionalized calls by passing both a region name and an endpoint URL. (botocore issue 464)
    • Add support for Amazon Simple Systems Management Service (SSM)
    • Fix Amazon S3 auth errors when uploading large files to the eu-central-1 and cn-north-1 regions. (botocore issue 462)
    • Add support for AWS IAM managed policies
    • Add support for Amazon ElastiCache tagging
    • Add support for Amazon Route53 Domains tagging of domains
    Source code(tar.gz)
    Source code(zip)
  • 0.0.8(Feb 11, 2015)

    • bugfix:Resources: Fix Amazon S3 resource identifier order. (issue 62)
    • bugfix:Resources: Fix collection resource hydration path. (issue 61)
    • bugfix:Resources: Re-enable service-level access to all resources, allowing e.g. obj = s3.Object('bucket', 'key'). (issue 60)
    • feature:Botocore: Update to Botocore 0.87.0
    • Add support for Amazon DynamoDB secondary index scanning.
    • Upgrade to requests 2.5.1.
    • Add support for anonymous (unsigned) clients. (botocore issue 448)
    Source code(tar.gz)
    Source code(zip)
  • 0.0.7(Feb 5, 2015)

    • feature:Resources: Enable support for Amazon Glacier.
    • feature:Resources: Support plural references and nested JMESPath queries for data members when building parameters and identifiers. (issue 52)
    • feature:Resources: Update to the latest resource JSON format. This is a backward-incompatible change as not all resources are exposed at the service level anymore. For example, s3.Object('bucket', 'key') is now s3.Bucket('bucket').Object('key'). (issue 51)
    • feature:Resources: Make resource.meta a proper object. This allows you to do things like resource.meta.client. This is a backward-incompatible change. (issue 45)
    • feature:Dependency: Update to JMESPath 0.6.1
    • feature:Botocore: Update to Botocore 0.86.0
    • Add support for AWS CloudHSM
    • Add support for Amazon EC2 and Autoscaling ClassicLink
    • Add support for Amazon EC2 Container Service (ECS)
    • Add support for encryption at rest and CloudHSM to Amazon RDS
    • Add support for Amazon DynamoDB online indexing.
    • Add support for AWS ImportExport get_shipping_label.
    • Add support for Amazon Glacier.
    • Add waiters for AWS ElastiCache. (botocore issue 443)
    • Fix an issue with Amazon CloudFront waiters. (botocore issue 426)
    • Allow binary data to be passed to UserData. (botocore issue 416)
    • Fix Amazon EMR endpoints for eu-central-1 and cn-north-1. (botocore issue 423)
    • Fix issue with base64 encoding of blob types for Amazon EMR. (botocore issue 413)
    Source code(tar.gz)
    Source code(zip)
  • 0.0.6(Dec 18, 2014)

    • feature:Amazon SQS: Add purge action to queue resources
    • feature:Waiters: Add documentation for client and resource waiters (issue 44)
    • feature:Waiters: Add support for resource waiters (issue 43)
    • bugfix:Installation: Remove dependency on the unused six module (issue 42)
    • feature:Botocore: Update to Botocore 0.80.0
    • Update Amazon Simple Workflow Service (SWF) to the latest version
    • Update AWS Storage Gateway to the latest version
    • Update AWS Elastic MapReduce (EMR) to the latest version
    • Update AWS Elastic Transcoder to the latest version
    • Enable use of page_size for clients (botocore issue 408)
    Source code(tar.gz)
    Source code(zip)
  • 0.0.5(Dec 16, 2014)

    • feature: Add support for batch actions on collections. (issue 32)
    • feature: Update to Botocore 0.78.0
    • Add support for Amazon Simple Queue Service purge queue which allows users to delete the messages in their queue.
    • Add AWS OpsWorks support for registering and assigning existing Amazon EC2 instances and on-premises servers.
    • Fix issue with expired signatures when retrying failed requests (botocore issue 399)
    • Port Route53 resource ID customizations from AWS CLI to Botocore. (botocore issue 398)
    • Fix handling of blob type serialization for JSON services. (botocore issue 397)
    Source code(tar.gz)
    Source code(zip)
  • 0.0.4(Dec 4, 2014)

    • feature: Update to Botocore 0.77.0
    • Add support for Kinesis PutRecords operation. It writes multiple data records from a producer into an Amazon Kinesis stream in a single call.
    • Add support for IAM GetAccountAuthorizationDetails operation. It retrieves information about all IAM users, groups, and roles in your account, including their relationships to one another and their attached policies.
    • Add support for updating the comment of a Route53 hosted zone.
    • Fix base64 serialization for JSON protocol services.
    • Fix issue where certain timestamps were not being accepted as valid input (botocore issue 389)
    • feature: Update Amazon EC2 resource model.
    • feature: Support belongsTo resource reference as well as path specified in an action's resource definition.
    • bugfix: Fix an issue accessing SQS message bodies (issue 33)
    Source code(tar.gz)
    Source code(zip)
  • 0.0.3(Nov 26, 2014)

    • feature: Update to Botocore 0.76.0.
    • Add support for using AWS Data Pipeline templates to create pipelines and bind values to parameters in the pipeline
    • Add support to Amazon Elastic Transcoder client for encryption of files in Amazon S3.
    • Fix issue where Amazon S3 requests were not being resigned correctly when using Signature Version 4. (botocore issue 388)
    • Add support for custom response parsing in Botocore clients. (botocore issue 387)
    Source code(tar.gz)
    Source code(zip)
  • 0.0.2(Nov 26, 2014)

  • 0.0.1(Nov 26, 2014)
