# s3-credentials

A tool for creating credentials for accessing S3 buckets.

For project background, see *s3-credentials: a tool for creating credentials for S3 buckets* on my blog.
> ⚠️ **Warning:** I am not an AWS security expert. You should review how this tool works carefully before using it with your own AWS account.
>
> If you are an AWS security expert I would love to get your feedback!
## Installation

Install this tool using `pip`:

```
pip install s3-credentials
```
## Configuration

This tool uses boto3 under the hood, which supports a number of different ways of providing your AWS credentials. If you have an existing `~/.aws/config` or `~/.aws/credentials` file the tool will use that - otherwise you can set the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` environment variables before calling this tool.
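For example, to provide credentials via environment variables (the values shown are placeholders - substitute your own keys):

```shell
# Placeholder values - substitute your own AWS credentials
export AWS_ACCESS_KEY_ID="your-access-key-id"
export AWS_SECRET_ACCESS_KEY="your-secret-access-key"
```

With these set, any `s3-credentials` command run in the same shell session will authenticate using those credentials.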
## Usage

The `s3-credentials create` command is the core feature of this tool. Pass it one or more S3 bucket names and it will create a new user with permission to access just those specific buckets, then create access credentials for that user and output them to your console.

Make sure to record the `SecretAccessKey` because it will only be displayed once and cannot be recreated later on.

In this example I create credentials for reading and writing files in my `static.niche-museums.com` S3 bucket:
```
% s3-credentials create static.niche-museums.com
Created user: s3.read-write.static.niche-museums.com with permissions boundary: arn:aws:iam::aws:policy/AmazonS3FullAccess
Attached policy s3.read-write.static.niche-museums.com to user s3.read-write.static.niche-museums.com
Created access key for user: s3.read-write.static.niche-museums.com
{
    "UserName": "s3.read-write.static.niche-museums.com",
    "AccessKeyId": "AKIAWXFXAIOZOYLZAEW5",
    "Status": "Active",
    "SecretAccessKey": "...",
    "CreateDate": "2021-11-03 01:38:24+00:00"
}
```
The command has several additional options:

- `--username TEXT`: The username to use for the user that is created by the command (or the username of an existing user if you do not want to create a new one). If omitted, a default such as `s3.read-write.static.niche-museums.com` will be used.
- `-c, --create-bucket`: Create the buckets if they do not exist. Without this option any missing buckets will be treated as an error.
- `--read-only`: The user should only be allowed to read files from the bucket.
- `--write-only`: The user should only be allowed to write files to the bucket, but not read them. This is useful for logging use-cases.
- `--bucket-region`: If creating buckets, the region in which they should be created.
- `--silent`: Don't output details of what is happening, just output the JSON for the created access credentials at the end.
- `--user-permissions-boundary`: Custom permissions boundary to use for users created by this tool. This defaults to restricting those users to only interacting with S3, taking the `--read-only` option into account. Use `none` to create users without any permissions boundary at all.
Here's the full sequence of events that take place when you run this command:

- Confirm that each of the specified buckets exists. If they do not and `--create-bucket` was passed, create them - otherwise exit with an error.
- If a username was not specified, determine a username using the `s3.$permission.$buckets` format.
- If a user with that username does not exist, create one with an S3 permissions boundary that respects the `--read-only` option - unless `--user-permissions-boundary=none` was passed (or a custom permissions boundary string).
- For each specified bucket, add an inline IAM policy to the user that gives them permission to either read-only, write-only or read-write against that bucket.
- Create a new access key for that user and output the key and its secret to the console.
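The `s3.$permission.$buckets` naming scheme from the second step can be sketched in a few lines of Python - `default_username` is an illustrative name, and joining multiple bucket names with a comma is an assumption here, not confirmed behaviour of the tool:

```python
def default_username(buckets, permission="read-write"):
    """Sketch of the s3.$permission.$buckets naming scheme.

    Joining multiple bucket names with "," is an assumption,
    not necessarily what the tool itself does.
    """
    return "s3.{}.{}".format(permission, ",".join(buckets))


print(default_username(["static.niche-museums.com"]))
# s3.read-write.static.niche-museums.com
```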
## Other commands

### whoami

To see which user you are authenticated as:

```
s3-credentials whoami
```

This will output JSON representing the currently authenticated user.
### list-users

To see a list of all users that exist for your AWS account:

```
s3-credentials list-users
```

This will return pretty-printed JSON objects by default.

Add `--nl` to collapse these to single lines as valid newline-delimited JSON.

Add `--array` to output a valid JSON array of objects instead.
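The difference between these two output shapes can be illustrated with plain Python (a sketch of the formats themselves, not the tool's implementation):

```python
import json

users = [{"UserName": "alice"}, {"UserName": "bob"}]

# --nl: one compact JSON object per line (newline-delimited JSON)
nl = "\n".join(json.dumps(u) for u in users)

# --array: a single valid JSON array of objects
array = json.dumps(users)

print(nl)
print(array)
```

Newline-delimited JSON is convenient for streaming tools that process one record per line, while the array form can be loaded in a single `json.loads()` call.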
### list-buckets

Shows a list of all buckets in your AWS account:

```
s3-credentials list-buckets
```

Accepts the same `--nl` and `--array` options as `list-users`.
### list-user-policies

To see a list of inline policies belonging to users:

```
% s3-credentials list-user-policies s3.read-write.static.niche-museums.com

User: s3.read-write.static.niche-museums.com
PolicyName: s3.read-write.static.niche-museums.com
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListObjectsInBucket",
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::static.niche-museums.com"
            ]
        },
        {
            "Sid": "AllObjectActions",
            "Effect": "Allow",
            "Action": "s3:*Object",
            "Resource": [
                "arn:aws:s3:::static.niche-museums.com/*"
            ]
        }
    ]
}
```
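The inline policy above follows a simple per-bucket template. Here is a sketch of building the same document in Python - `read_write_policy` is an illustrative helper, not part of the tool's API:

```python
import json


def read_write_policy(bucket):
    # Mirrors the inline read-write policy document shown above
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "ListObjectsInBucket",
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                # Bucket-level ARN for listing
                "Resource": ["arn:aws:s3:::{}".format(bucket)],
            },
            {
                "Sid": "AllObjectActions",
                "Effect": "Allow",
                # s3:*Object matches GetObject, PutObject, DeleteObject etc.
                "Action": "s3:*Object",
                # Object-level ARN (note the /* suffix)
                "Resource": ["arn:aws:s3:::{}/*".format(bucket)],
            },
        ],
    }


print(json.dumps(read_write_policy("static.niche-museums.com"), indent=4))
```

Note that listing a bucket and acting on its objects require two separate statements, because the bucket ARN and the object ARN pattern are different resources.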
You can pass any number of usernames here. If you don't specify a username the tool will loop through every user belonging to your account:

```
s3-credentials list-user-policies
```
### delete-user

In trying out this tool it's possible you will create several different user accounts that you later decide to clean up.

Deleting AWS users is a little fiddly: you first need to delete their access keys, then their inline policies and finally the user themselves.

The `s3-credentials delete-user` command handles this for you:

```
% s3-credentials delete-user s3.read-write.simonw-test-bucket-10
User: s3.read-write.simonw-test-bucket-10
Deleted policy: s3.read-write.simonw-test-bucket-10
Deleted access key: AKIAWXFXAIOZK3GPEIWR
Deleted user
```

You can pass it multiple usernames to delete multiple users at a time.
## Development

To contribute to this tool, first check out the code. Then create a new virtual environment:

```
cd s3-credentials
python -m venv venv
source venv/bin/activate
```

Or if you are using `pipenv`:

```
pipenv shell
```

Now install the dependencies and test dependencies:

```
pip install -e '.[test]'
```

To run the tests:

```
pytest
```