The low-level, core functionality of boto3.

Overview

botocore


A low-level interface to a growing number of Amazon Web Services. The botocore package is the foundation for the AWS CLI as well as boto3.

On 10/29/2020, deprecation of Python 3.4 and Python 3.5 was announced, and support was dropped on 02/01/2021. To avoid disruption, customers using Botocore on Python 3.4 or 3.5 may need to upgrade their version of Python or pin their version of Botocore. For more information, see this blog post.

Getting Started

Assuming that you have Python and virtualenv installed, set up your environment and install the required dependencies like this:

$ git clone https://github.com/boto/botocore.git
$ cd botocore
$ virtualenv venv
...
$ . venv/bin/activate
$ pip install -r requirements.txt
$ pip install -e .

Or you can install the library from PyPI using pip:

$ pip install botocore

Using Botocore

After installing botocore, set up credentials (e.g. in ~/.aws/credentials):

[default]
aws_access_key_id = YOUR_KEY
aws_secret_access_key = YOUR_SECRET

Then, set up a default region (e.g. in ~/.aws/config):

[default]
region=us-east-1

Other credential configuration methods can be found here.

Then, from a Python interpreter:

>>> import botocore.session
>>> session = botocore.session.get_session()
>>> client = session.create_client('ec2')
>>> print(client.describe_instances())
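
If you prefer not to rely on the config files above, create_client also accepts an explicit region and credentials. A minimal sketch (the key values and region are placeholders):

>>> import botocore.session
>>> session = botocore.session.get_session()
>>> client = session.create_client(
...     'ec2',
...     region_name='us-east-1',
...     aws_access_key_id='YOUR_KEY',
...     aws_secret_access_key='YOUR_SECRET',
... )
>>> # Paginators handle result pages for operations like DescribeInstances.
>>> paginator = client.get_paginator('describe_instances')
>>> for page in paginator.paginate():
...     print(page['Reservations'])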

Getting Help

We use GitHub issues for tracking bugs and feature requests and have limited bandwidth to address them. Please use these community resources for getting help. Note that many of the same resources available for boto3 also apply to botocore.

Contributing

We value feedback and contributions from our community. Whether it's a bug report, new feature, correction, or additional documentation, we welcome your issues and pull requests. Please read through this CONTRIBUTING document before submitting any issues or pull requests to ensure we have all the necessary information to effectively respond to your contribution.

Maintenance and Support for SDK Major Versions

Botocore was made generally available on 06/22/2015 and is currently in the full support phase of the availability life cycle.

For information about maintenance and support for SDK major versions and their underlying dependencies, see the following in the AWS SDKs and Tools Shared Configuration and Credentials Reference Guide:

More Resources

Comments
  • Adding persistent caching of temporary credentials with serialization

    Adding persistent caching of temporary credentials with serialization

    Adding persistent caching of temporary credentials from aws/aws-cli@22932e53c7085bd8e53b22904b326c426e2e60fc with serialization

    This PR addresses #1148 by porting the code present in awscli into botocore to make it available to boto3. Having the code in both awscli and botocore until it's removed from awscli doesn't appear to have any negative impact.

    One thing I'd like guidance on is:

    • The RefreshableCredentials class stores the _expiry_time natively in datetime format.
    • The AssumeRoleProvider class expects the possibility that the response['Credentials']['Expiration'] value, which may come from the cache, may need to be parsed from a string back into a datetime
    • In my searches of the awscli code it appears to me that awscli doesn't have any special logic (logic not present in botocore) to ensure that the expiration datetime is serialized to a string before it's passed to JSONFileCache
    • When I use the existing assumerole customization in awscli the expiration datetime is correctly serialized to a string when stored in the JSONFileCache
    • When I use this new functionality that I've moved into botocore in this PR without an explicit serialization call, the expiration datetime is passed to JSONFileCache un-serialized (which JSONFileCache rightly rejects). As a result, I've set json.dumps to serialize the datetime, but I don't understand why this step isn't also required for awscli to work.

    I've implemented the serialization by adding _serialize_if_needed as the default argument to json.dumps, which works but seems to be in opposition to the intent of JSONFileCache, based on the test_only_accepts_json_serializable_data test.
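
    For illustration, the default= hook described above amounts to something like the following sketch (not the PR's exact code; the cache payload here is made up):

    import json
    from datetime import datetime

    def _serialize_if_needed(value):
        # Convert datetimes to ISO 8601 strings so json.dumps can handle them;
        # anything else is genuinely unserializable and should still fail.
        if isinstance(value, datetime):
            return value.isoformat()
        raise TypeError('Unable to serialize %r' % value)

    cached = {'Credentials': {'Expiration': datetime(2017, 1, 1, 12, 0, 0)}}
    print(json.dumps(cached, default=_serialize_if_needed))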

    @jamesls Can you provide any insight on this question about serialization? If there's any way I can remove that default argument to json.dumps that calls _serialize_if_needed, and have the expiration datetime serialized through whatever method awscli uses, I'd prefer it.

    Note : I originally submitted PR #1156 with serialization outside of JSONFileCache but I implemented it in a way that didn't work so I closed out that PR.

    To try this PR out you can install it by running

    pip install git+https://github.com/gene1wood/botocore.git@persistent-credential-cache-with-serialization
    
    feature-request pr/needs-review 
    opened by gene1wood 45
  • Clarify multithreading documentation

    Clarify multithreading documentation

    The documentation for boto3 states that:

    It is recommended to create a resource instance for each thread / process in a multithreaded or multiprocess application rather than sharing a single instance among the threads / processes. (emphasis mine)

    The documentation then goes on to show a code example where a session is created per thread, not a resource.

    Reading through previous github issues, I see a note that we should create a separate session per thread. The comment that immediately follows says "resource", however.

    So, do we need one session per thread, or are sessions thread safe, but not resources? Is there a 1:1 mapping between the thread safety of a resource and a client?

    As a bonus question, how expensive / wasteful is it to create new clients (or sessions, as above) on-demand per executor thread...?
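
    For reference, the pattern the documentation appears to intend is one session (and resource) per thread; a minimal sketch, with a made-up bucket name:

    import threading

    import boto3

    def worker(bucket_name, key):
        # Each thread gets its own session and resource instead of sharing one.
        session = boto3.session.Session()
        s3 = session.resource('s3')
        print(s3.Object(bucket_name, key).content_length)

    threads = [
        threading.Thread(target=worker, args=('example-bucket', 'key-%d' % i))
        for i in range(4)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()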

    enhancement documentation 
    opened by dfee 34
  • Do not generate 100-continue expectation with no body.

    Do not generate 100-continue expectation with no body.

    HTTP RFC explicitly states that 100-continue should not be set when there is no message body.

    https://tools.ietf.org/html/rfc7231#section-5.1.1

    A client MUST NOT generate a 100-continue expectation in a request that does not include a message body.

    This is a mandatory requirement. This PR fixes the current implementation's behavior by not sending 100-continue when Content-Length is '0', which means there is no message body.
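
    Schematically, the intended check looks something like this (a sketch of the rule, not botocore's actual code):

    def should_send_expect_100_continue(headers):
        # RFC 7231 section 5.1.1: only send "Expect: 100-continue" when the
        # request actually carries a message body.
        return headers.get('Content-Length', '0') != '0'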

    Fixes boto/boto3#1341

    opened by harshavardhana 33
  • Automatically redirect S3 sigv4 requests sent to the wrong region

    Automatically redirect S3 sigv4 requests sent to the wrong region

    S3 generally provides enough information for us to redirect requests, so this attempts to do so. To prevent a massive number of additional requests, bucket regions are cached.

    When a redirect occurs, a warning is printed that tells the customer they should use a client configured to the proper region to avoid additional requests.
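
    The caching idea is roughly the following (a hypothetical sketch, not the PR's implementation):

    from botocore.exceptions import ClientError

    _bucket_region_cache = {}

    def get_bucket_region(s3_client, bucket):
        # S3 reports the bucket's region in the x-amz-bucket-region header,
        # even on 301/403 responses from a client in the wrong region.
        if bucket not in _bucket_region_cache:
            try:
                resp = s3_client.head_bucket(Bucket=bucket)
                headers = resp['ResponseMetadata']['HTTPHeaders']
            except ClientError as e:
                headers = e.response['ResponseMetadata']['HTTPHeaders']
            _bucket_region_cache[bucket] = headers.get('x-amz-bucket-region')
        return _bucket_region_cache[bucket]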

    pr/ready-to-merge 
    opened by JordonPhillips 33
  • 'AWSHTTPSConnection' object has no attribute 'ssl_context'

    'AWSHTTPSConnection' object has no attribute 'ssl_context'

    Hi,

    I'm getting the above error when trying to use the Route53 service. I have investigated and concluded that the fault is the copying of functions from AWSHTTPConnection to AWSHTTPSConnection:

    # Now we need to set the methods we overrode from AWSHTTPConnection
    # onto AWSHTTPSConnection.  This is just a shortcut to avoid
    # copy/pasting the same code into AWSHTTPSConnection.
    for name, function in AWSHTTPConnection.__dict__.items():
        if inspect.isfunction(function):
            setattr(AWSHTTPSConnection, name, function)
    

    The problem is that this ends up overriding the __init__ function too, so the VerifiedHTTPSConnection constructor is never invoked. I don't know why this worked in the past, or why it has become a problem recently. I have locally patched that file to add `and function.__name__ != '__init__'` to the if above, and manually copied the constructor to AWSHTTPSConnection (to invoke the VerifiedHTTPSConnection constructor). Things seem to work fine now.
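
    The local patch described above is essentially the same loop as the snippet quoted earlier, with one extra condition so that __init__ is not copied and VerifiedHTTPSConnection's constructor (which sets ssl_context) still runs:

    for name, function in AWSHTTPConnection.__dict__.items():
        # Skip __init__ so AWSHTTPSConnection keeps its own constructor chain.
        if inspect.isfunction(function) and function.__name__ != '__init__':
            setattr(AWSHTTPSConnection, name, function)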

    enhancement dependencies 
    opened by fsateler 32
  • DeprecationWarnings in vendored requests

    DeprecationWarnings in vendored requests

    The vendored requests library is emitting DeprecationWarnings:

    lib/python3.7/site-packages/botocore/vendored/requests/packages/urllib3/_collections.py:1: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
        from collections import Mapping, MutableMapping
    

    This has been fixed upstream here: https://github.com/requests/requests/pull/4501

    Since I do not know how you handle the vendored requests, I have no idea how to make a PR to fix it, sorry.

    Can someone pick this up? It's very annoying while running tests. Or give me a hint how to create a PR for the vendored requests, and I'll create one.

    enhancement dependencies 
    opened by mvanbaak 30
  • ClientError subclass factory

    ClientError subclass factory

    See also: https://github.com/boto/boto3/issues/167, https://github.com/boto/boto3/issues/361

    I'm using this PR to start a discussion about subclassing ClientError. Is there a plan for how to implement this? (I couldn't find any relevant notes in https://github.com/boto/boto3/issues/167.) The implementation here allows more useful error processing patterns in ClientError consumers, like:

    import boto3
    from botocore.errorfactory import NoSuchEntity
    
    iam = boto3.resource("iam")
    
    try:
        iam.Role("fakeRole").load()
    except NoSuchEntity as e:
        print(e)
    

    There are a few caveats, including the fact that the error factory has no ability to sanity check error names. I can add docs if I receive feedback that I'm on the right track.
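
    For comparison, the pattern available without generated subclasses is catching ClientError and matching on the error code string:

    import boto3
    from botocore.exceptions import ClientError

    iam = boto3.resource("iam")

    try:
        iam.Role("fakeRole").load()
    except ClientError as e:
        # Without subclasses, callers branch on the error code in the response.
        if e.response["Error"]["Code"] == "NoSuchEntity":
            print(e)
        else:
            raise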

    opened by kislyuk 27
  • Deprecate Usage of `sslCommonName` in Endpoint Creation

    Deprecate Usage of `sslCommonName` in Endpoint Creation

    The Problem

    Currently, when creating a service client, an sslCommonName attribute may be used for endpoint construction in unique cases. The format of sslCommonName is typically {region}.{service}.{dnsSuffix}, as opposed to the more common {service}.{region}.{dnsSuffix}. This usage originated from a time when Python versions (<2.7) didn't supply an SSL module, requiring specific certificate formats.

    Now that the library only supports Python 3.7+, we'll be deprecating the usage of sslCommonName to standardize Boto3 with all other AWS SDKs. This will also resolve long-running issues of services such as SQS and GuardDuty being incompatible with certain VPC endpoint configurations.

    Required Actions

    In the immediate term, we will start raising a deprecation warning when sslCommonName is used. This is to alert customers of the upcoming change and provide time to make any required changes.

    For most users, this will not require any changes. The URL will automatically update when the next minor version (1.29.0) is released, and clients will continue to operate the same. Any users with strict network rules that explicitly allowlist domains will need to add support for {service}.{region}.{dnsSuffix}, as demonstrated below:

    Old Format: https://us-west-2.sqs.amazonaws.com
    New Format: https://sqs.us-west-2.amazonaws.com

    Warning Mitigation Strategy

    1. If you wish to ensure that your application does not use sslCommonName now or test the impending deprecation, we have created a new environment variable BOTO_DISABLE_COMMONNAME. Setting this to true will suppress the warning and convert to the new hostname format.
    2. If you are concerned about this change causing disruptions, you can pin your version of botocore to <1.29.0 until you are ready to migrate.
    3. If you are only concerned about silencing the warning in your logs, use warnings.filterwarnings when instantiating a new service client.
    import warnings
    warnings.filterwarnings('ignore', category=FutureWarning, module='botocore.client')
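
    For example, mitigations 1 and 2 above can be applied like this (shown for a POSIX shell; pick whichever fits your situation):

    $ export BOTO_DISABLE_COMMONNAME=true   # opt in to the new hostname format now
    $ pip install 'botocore<1.29.0'         # or pin botocore until you are ready to migrate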
    

    Other Information

    Endpoint Docs: https://docs.aws.amazon.com/general/latest/gr/rande.html
    Related Issues: https://github.com/boto/botocore/issues/2376, https://github.com/boto/boto3/issues/1900, https://github.com/boto/boto3/issues/3311, https://github.com/boto/botocore/issues/2683

    breaking-change feature-request needs-discussion endpoints p2 
    opened by dlm6693 26
  • fixes #926 - expose retry information in exceptions

    fixes #926 - expose retry information in exceptions

    This is my attempt at fixing #926 by providing information on retries in ResponseMetadata, which is returned as part of the exceptions that are raised to the user.

    • in botocore.retryhandler.MaxAttemptsDecorator.call(), add a MaxAttemptsReached=True element to ResponseMetadata if the maximum number of attempts is reached
    • in botocore.endpoint.Endpoint._send_request(), add a NumAttempts element to ResponseMetadata, containing the attempt number

    An example of this in action:

    #!/usr/bin/env python
    
    import boto3
    from mock import patch, Mock
    from botocore.vendored.requests.models import Response
    
    ec2 = boto3.resource('ec2')
    i = ec2.Instance('i-0')
    
    def mock_get_response(self, request, operation_model, attempts):
        headers = {
            'transfer-encoding': 'chunked',
            'date': 'Thu, 23 Jun 2016 11:32:42 GMT',
            'connection': 'close',
            'server': 'AmazonEC2'
        }
        mock_resp = Mock(spec_set=Response)
        type(mock_resp).content = '<?xml version="1.0" encoding="UTF-8"?>\n<Response><Errors><Error><Type>Sender</Type><Code>Throttling</Code><Message>Rate exceeded</Message></Error></Errors><RequestID>44c0f570-e338-48dd-9953-6684fa586dcb</RequestID></Response>'
        type(mock_resp).headers = headers
        type(mock_resp).status_code = 400
        parsed = {
            'ResponseMetadata': {
                'HTTPStatusCode': 400,
                'RequestId': '44c0f570-e338-48dd-9953-6684fa586dcb',
                'HTTPHeaders': headers
            },
            'Error': {
                'Message': "Rate exceeded",
                'Code': 'Throttling'
            }
        }
        return (mock_resp, parsed), None
    
    
    with patch('botocore.endpoint.Endpoint._get_response', side_effect=mock_get_response, autospec=True):
        try:
            x = i.hypervisor
        except Exception as ex:
            print(ex.response)
    
    (botocore)jantman@phoenix:pts/14:~/GIT/botocore (issue926_retry_info %=)$ time python ~/tmp/test.py 
    {'ResponseMetadata': {'NumAttempts': 5, 'HTTPStatusCode': 400, 'MaxAttemptsReached': True, 'RequestId': '44c0f570-e338-48dd-9953-6684fa586dcb', 'HTTPHeaders': {'transfer-encoding': 'chunked', 'date': 'Thu, 23 Jun 2016 11:32:42 GMT', 'connection': 'close', 'server': 'AmazonEC2'}}, 'Error': {'Message': 'Rate exceeded', 'Code': 'Throttling'}}
    
    real    0m13.479s
    user    0m0.340s
    sys     0m0.043s
    
    pr/needs-review needs-discussion 
    opened by jantman 24
  • Connection pool is full, discarding connection

    Connection pool is full, discarding connection

    I'm getting a massive number of these warnings while trying to retrieve S3 logs, as I'm retrieving each object in a separate thread.

    WARNING:botocore.vendored.requests.packages.urllib3.connectionpool:Connection pool is full, discarding connection: bucket.s3.amazonaws.com

    Should I be closing connections after using them? Or is this in boto's hands?

    thanks
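
    One common mitigation (my assumption, not something settled in this thread) is to raise the pool size via botocore's Config, since the default pool holds 10 connections:

    import boto3
    from botocore.config import Config

    # Allow up to 50 pooled connections so worker threads can reuse them
    # instead of urllib3 discarding the overflow.
    s3 = boto3.client('s3', config=Config(max_pool_connections=50))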

    closing-soon 
    opened by code-tree 24
  • Add line iterator to StreamingBody

    Add line iterator to StreamingBody

    This way, we can read lines from a streaming body (without having to load all the bytes into memory!).

    This will allow us to use StreamingBody in places where a Python file-like object is expected (e.g. csv.reader!).
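
    Assuming the proposed iterator lands as iter_lines() on StreamingBody, usage could look like this (bucket and key are placeholders):

    import csv

    import boto3

    s3 = boto3.client('s3')
    body = s3.get_object(Bucket='example-bucket', Key='data.csv')['Body']

    # Stream the object line by line instead of reading it all into memory.
    lines = (line.decode('utf-8') for line in body.iter_lines())
    for row in csv.reader(lines):
        print(row)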

    enhancement incorporating-feedback 
    opened by sujaymansingh 23
  • "EOF occurred in violation of protocol" occurred when calling s3:PutObject with "--checksum-algorithm" in Python Docker image in Amazon Linux 2

    Describe the bug

    An error occurred when calling s3:PutObject with "--checksum-algorithm" for a 100 KiB file in the Python 3.11 image on an Amazon Linux 2 EC2 instance.

    Expected Behavior

    The file can be uploaded to the specified S3 bucket with checksum-algorithm enabled.

    Current Behavior

    Receive the following error, and the file fails to be uploaded.

    Traceback (most recent call last):
      File "/usr/local/lib/python3.11/site-packages/urllib3/connectionpool.py", line 703, in urlopen
        httplib_response = self._make_request(
                           ^^^^^^^^^^^^^^^^^^^
      File "/usr/local/lib/python3.11/site-packages/urllib3/connectionpool.py", line 396, in _make_request
        conn.request_chunked(method, url, **httplib_request_kw)
      File "/usr/local/lib/python3.11/site-packages/urllib3/connection.py", line 275, in request_chunked
        self.send(to_send)
      File "/usr/local/lib/python3.11/site-packages/botocore/awsrequest.py", line 218, in send
        return super().send(str)
               ^^^^^^^^^^^^^^^^^
      File "/usr/local/lib/python3.11/http/client.py", line 998, in send
        self.sock.sendall(data)
      File "/usr/local/lib/python3.11/ssl.py", line 1241, in sendall
        v = self.send(byte_view[count:])
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/usr/local/lib/python3.11/ssl.py", line 1210, in send
        return self._sslobj.write(data)
               ^^^^^^^^^^^^^^^^^^^^^^^^
    ssl.SSLEOFError: EOF occurred in violation of protocol (_ssl.c:2393)
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/usr/local/lib/python3.11/site-packages/botocore/httpsession.py", line 455, in send
        urllib_response = conn.urlopen(
                          ^^^^^^^^^^^^^
      File "/usr/local/lib/python3.11/site-packages/urllib3/connectionpool.py", line 787, in urlopen
        retries = retries.increment(
                  ^^^^^^^^^^^^^^^^^^
      File "/usr/local/lib/python3.11/site-packages/urllib3/util/retry.py", line 525, in increment
        raise six.reraise(type(error), error, _stacktrace)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/usr/local/lib/python3.11/site-packages/urllib3/packages/six.py", line 769, in reraise
        raise value.with_traceback(tb)
      File "/usr/local/lib/python3.11/site-packages/urllib3/connectionpool.py", line 703, in urlopen
        httplib_response = self._make_request(
                           ^^^^^^^^^^^^^^^^^^^
      File "/usr/local/lib/python3.11/site-packages/urllib3/connectionpool.py", line 396, in _make_request
        conn.request_chunked(method, url, **httplib_request_kw)
      File "/usr/local/lib/python3.11/site-packages/urllib3/connection.py", line 275, in request_chunked
        self.send(to_send)
      File "/usr/local/lib/python3.11/site-packages/botocore/awsrequest.py", line 218, in send
        return super().send(str)
               ^^^^^^^^^^^^^^^^^
      File "/usr/local/lib/python3.11/http/client.py", line 998, in send
        self.sock.sendall(data)
      File "/usr/local/lib/python3.11/ssl.py", line 1241, in sendall
        v = self.send(byte_view[count:])
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/usr/local/lib/python3.11/ssl.py", line 1210, in send
        return self._sslobj.write(data)
               ^^^^^^^^^^^^^^^^^^^^^^^^
    urllib3.exceptions.SSLError: EOF occurred in violation of protocol (_ssl.c:2393)
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/usr/local/lib/python3.11/site-packages/botocore/endpoint.py", line 281, in _do_get_response
        http_response = self._send(request)
                        ^^^^^^^^^^^^^^^^^^^
      File "/usr/local/lib/python3.11/site-packages/botocore/endpoint.py", line 377, in _send
        return self.http_session.send(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/usr/local/lib/python3.11/site-packages/botocore/httpsession.py", line 482, in send
        raise SSLError(endpoint_url=request.url, error=e)
    botocore.exceptions.SSLError: SSL validation failed for https://test-bucket-name.s3.amazonaws.com/100k.txt EOF occurred in violation of protocol (_ssl.c:2393)
    

    Reproduction Steps

    1. Launch an Amazon Linux 2 EC2 instance with an attached role that can put objects to S3 buckets

      $ uname -r
      5.10.157-139.675.amzn2.x86_64
      
    2. Install Docker with sudo yum install docker

       $ docker --version
       Docker version 20.10.17, build 100c701
      
    3. Launch the "python:3.11" image.

    4. Install awscli with sudo pip install awscli

      $ aws --version
      aws-cli/1.27.42 Python/3.11.1 Linux/5.10.157-139.675.amzn2.x86_64 botocore/1.29.42
      
    5. Create a 100 KiB file with fallocate -l 100K 100k.txt

    6. Upload the file to an S3 bucket with checksum-algorithm enabled.

      aws --debug s3api put-object --bucket test-bucket-name --key 100k.txt --body ./100k.txt --checksum-algorithm SHA256
      

    Possible Solution

    No response

    Additional Information/Context

    • The same error occurs when using boto3 in the same environment.
    • This error does not occur outside the Docker image.
    • This error does not occur in the same Docker image on Ubuntu 20.04.5 LTS.
    • This error does not occur when the ChecksumAlgorithm parameter is not given.
    • This error does not occur when the file is only 1 KiB.

    SDK version used

    aws-cli/1.27.42 Python/3.11.1 Linux/5.10.157-139.675.amzn2.x86_64 botocore/1.29.42

    Environment details (OS name and version, etc.)

    5.10.157-139.675.amzn2.x86_64

    s3 
    opened by lcy0321 2
  • Vulnerability detected

    Vulnerability detected

    Describe the bug

    We are currently using boto3 version 1.26.30. We use NexusIq, which scans all our dependencies, and at the moment it fails for botocore 1.29.30 with a high-severity security warning.

    Vulnerability CVE-2022-23491

    Expected Behavior

    CVE-2022-23491 is not detected anymore

    Current Behavior

    CVE-2022-23491 detected

    Reproduction Steps

    CVE-2022-23491

    Possible Solution

    I'm not an expert here, but it could probably be solved by updating certifi: https://github.com/certifi/python-certifi/security/advisories/GHSA-43fp-rhv2-5gv8

    Additional Information/Context

    No response

    SDK version used

    1.26.30

    Environment details (OS name and version, etc.)

    Linux

    needs-review third-party certs 
    opened by haassto 9
  • sso_session in config should maybe disallow sso_start_url and sso_region

    sso_session in config should maybe disallow sso_start_url and sso_region

    Describe the feature

    The new support for sso_session in config is great! But it's possible to create a profile that has sso_session as well as sso_start_url and sso_region. This is causing an issue here https://github.com/benkehoe/aws-sso-util/issues/83 but, more generally, it allows an ambiguous profile to be created:

    [profile my-profile]
    sso_start_url = https://foo.awsapps.com/start
    sso_region = us-east-1
    sso_session = my-session
    sso_account_id = 123456789012
    sso_role_name = MyRole
    
    [sso-session my-session]
    sso_start_url = https://bar.awsapps.com/start
    sso_region = us-west-2
    

    With SDKs from before session support, this will grab a token from the cache for https://foo.awsapps.com/start, and with SDKs after, it'll use https://bar.awsapps.com/start (via the cache entry for the session). While this is generally accepted for different types of credential configuration (and in https://github.com/aws/aws-sdk-go/issues/3763 I specifically asked for SSO + credential process config to be allowed), allowing it for the same type of credential configuration seems weird to me, and possibly a source of errors.

    I can see no good reason someone would want to have both inline and session-based SSO config in the same profile. Maybe it'd be better if it caused an error to keep people from expressing config that they don't intend.

    Acknowledgements

    • [ ] I may be able to implement this feature request
    • [ ] This feature might incur a breaking change

    SDK version used

    1.29.10 and later

    Environment details (OS name and version, etc.)

    N/A

    feature-request configuration sso 
    opened by benkehoe 3
  • Incorrect RAM endpoints for FIPS in us-gov regions

    Incorrect RAM endpoints for FIPS in us-gov regions

    Describe the bug

    The FIPS endpoints for RAM in us-gov regions are invalid in: https://github.com/boto/botocore/blob/master/botocore/data/ram/2018-01-04/endpoint-rule-set-1.json

    It points to ram-fips.us-gov-west-1.amazonaws.com which does not exist.

    According to https://aws.amazon.com/compliance/fips/, the default endpoint of ram.us-gov-west-1.amazonaws.com already has FIPS, so ram-fips does not exist.

    Expected Behavior

    When enabling FIPS with AWS_USE_FIPS_ENDPOINT=true or use_fips_endpoint=True, the SDK uses the correct endpoint.

    Current Behavior

    AWS_REGION=us-gov-west-1 AWS_USE_FIPS_ENDPOINT=true aws ram list-resources --resource-owner 111111111111
    
    Could not connect to the endpoint URL: "https://ram-fips.us-gov-west-1.amazonaws.com/listresources"
    

    Reproduction Steps

    1. Have an account/profile in a us-gov region.
    2. Then execute:
    AWS_REGION=us-gov-west-1 AWS_USE_FIPS_ENDPOINT=true aws ram list-resources --resource-owner 111111111111
    

    Possible Solution

    Options are:

    1. Update the botocore logic in https://github.com/boto/botocore/blob/master/botocore/data/ram/2018-01-04/endpoint-rule-set-1.json so that it knows the us-gov regions use ram., not ram-fips..
    2. Ideally, this would be fixed upstream at the AWS RAM service, by them providing the ram-fips name in addition to ram. That would eliminate complex, human-managed logic and lookups. If the naming is consistent, then everything can work by convention.
      1. But until then, the SDK must adapt to what is provided.

    Additional Information/Context

    No response

    SDK version used

    1.29.27

    Environment details (OS name and version, etc.)

    Seen on MacOS and Linux of various versions

    bug endpoints p2 
    opened by seanorama 1
  • Optionally allow HTTP redirects

    Optionally allow HTTP redirects

    This adds (optional, client configured) support for the HTTP "location" header when processing HTTP redirects for S3 responses.

    I'm a developer at NVIDIA; I wrote this on behalf of the AIStore project. We'd really like to use botocore (and boto3) for client access; AIStore (and some other systems - like Apache Ozone, referenced by this issue) rely on standard HTTP redirects - using the "location" header - for load balancing.

    botocore constructs redirection URLs using the region instead, and won't allow redirections outside of known Amazon URIs.

    I've gated the change here behind a config setting; that said, other AWS SDKs (aws-sdk-go, for instance) do appear to support redirects as normal, so hopefully this behaviour is in line with other clients.

    This change -

    • Introduces allow_http_redirects config option (default: False).
    • If set to True, performs HTTP redirection based on the Location header
    • Limits the number of redirections for a given request to three (I didn't feel adding another config option was worth it in this case).
    • Closes #2571.
    import boto3
    from botocore.config import Config as Config
    
    config = Config(allow_http_redirects=True)
    session = boto3.Session()
    s3 = session.resource("s3", endpoint_url="http://127.0.0.1:8080/s3", config=config)
    

    I've added unit and functional tests and tried to keep the style consistent with your existing work. If there's any demand for caching using the Cache-Control or Expires headers, I could certainly do it in the follow-up.

    Thanks.

    opened by simontraill 0