Credentials
***********
Overview
========
Boto3 credentials can be configured in multiple ways. Regardless of
the source or sources that you choose, you *must* have both AWS
credentials and an AWS Region set in order to make requests.
Interactive configuration
=========================
If you have the AWS CLI, then you can use its interactive "configure"
command to set up your credentials and default region:
aws configure
Follow the prompts and it will generate configuration files in the
correct locations for you.
Configuring credentials
=======================
There are two types of configuration data in Boto3: credentials and
non-credentials. Credentials include items such as
"aws_access_key_id", "aws_secret_access_key", and "aws_session_token".
Non-credential configuration includes items such as which region to
use or which addressing style to use for Amazon S3. For more
information on how to configure non-credential configurations, see the
Configuration guide.
Boto3 will look in several locations when searching for credentials.
The mechanism by which Boto3 looks for credentials is to search
through a list of possible locations and stop as soon as it finds
credentials. The order in which Boto3 searches for credentials is:
1. Passing credentials as parameters in the "boto3.client()" method
2. Passing credentials as parameters when creating a "Session" object
3. Environment variables
4. Assume role provider
5. Assume role with web identity provider
6. AWS IAM Identity Center credential provider
7. Shared credential file ("~/.aws/credentials")
8. AWS config file ("~/.aws/config")
9. Boto2 config file ("/etc/boto.cfg" and "~/.boto")
10. Container credential provider
11. Instance metadata service on an Amazon EC2 instance that has an
IAM role configured.
Each of those locations is discussed in more detail below.
Passing credentials as parameters
=================================
There are valid use cases for providing credentials to the "client()"
method and "Session" object, these include:
* Retrieving temporary credentials using AWS STS (such as
"sts.get_session_token()").
* Loading credentials from some external location, e.g., the OS
keychain.
The first option for providing credentials to Boto3 is passing them as
parameters when creating clients:
import boto3
client = boto3.client(
's3',
aws_access_key_id=ACCESS_KEY,
aws_secret_access_key=SECRET_KEY,
aws_session_token=SESSION_TOKEN
)
The second option for providing credentials to Boto3 is passing them
as parameters when creating a "Session" object:
import boto3
session = boto3.Session(
aws_access_key_id=ACCESS_KEY,
aws_secret_access_key=SECRET_KEY,
aws_session_token=SESSION_TOKEN
)
Warning:
"ACCESS_KEY", "SECRET_KEY", and "SESSION_TOKEN" are variables that
contain your access key, secret key, and optional session token.
Note that the examples above do not have hard coded credentials. We
do **not** recommend hard coding credentials in your source code.
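For illustration, here is a rough sketch of the STS use case mentioned
above. The response keys follow the documented "get_session_token()"
output; treat this as an illustrative sketch rather than a recommended
pattern:
import boto3

# Request temporary credentials from AWS STS
sts = boto3.client('sts')
temp_credentials = sts.get_session_token()['Credentials']

# Pass the temporary credentials to a new client
s3 = boto3.client(
    's3',
    aws_access_key_id=temp_credentials['AccessKeyId'],
    aws_secret_access_key=temp_credentials['SecretAccessKey'],
    aws_session_token=temp_credentials['SessionToken']
)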
Environment variables
=====================
Boto3 will check these environment variables for credentials:
* "AWS_ACCESS_KEY_ID" - The access key for your AWS account.
* "AWS_SECRET_ACCESS_KEY" - The secret key for your AWS account.
* "AWS_SESSION_TOKEN" - The session key for your AWS account. This is
only needed when you are using temporary credentials. The
"AWS_SECURITY_TOKEN" environment variable can also be used, but is
only supported for backwards compatibility purposes.
"AWS_SESSION_TOKEN" is supported by multiple AWS SDKs besides
Python.
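For illustration only, here is a minimal sketch of this provider in
use. The values are placeholders, and in practice you would export the
variables in your shell rather than set them in code:
import os
import boto3

# Placeholder values, not real credentials
os.environ['AWS_ACCESS_KEY_ID'] = 'EXAMPLE_ACCESS_KEY'
os.environ['AWS_SECRET_ACCESS_KEY'] = 'EXAMPLE_SECRET_KEY'

# No credentials are passed here; the environment variable provider
# supplies them when the client is created.
client = boto3.client('s3', region_name='us-east-1')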
Assume role provider
====================
Note:
This is a different set of credentials configuration than using IAM
roles for EC2 instances, which is discussed in a section below.
Within the "~/.aws/config" file, you can also configure a profile to
indicate that Boto3 should assume a role. When you do this, Boto3 will
automatically make the corresponding AssumeRole calls to AWS STS on
your behalf. It will handle in-memory caching as well as refreshing
credentials as needed.
You can specify the following configuration values for configuring an
IAM role in Boto3. For more information about a particular setting,
see the Configuration section.
* "role_arn" - The ARN of the role you want to assume.
* "source_profile" - The boto3 profile that contains credentials we
should use for the initial AssumeRole call.
* "credential_source" - The resource (Amazon EC2 instance profile,
Amazon ECS container role, or environment variable) that contains
the credentials to use for the initial AssumeRole call.
* "external_id" - A unique identifier that is used by third parties to
assume a role in their customers' accounts. This maps to the
"ExternalId" parameter in the AssumeRole operation. This is an
optional parameter.
* "mfa_serial" - The identification number of the MFA device to use
when assuming a role. This is an optional parameter. Specify this
value if the trust policy of the role being assumed includes a
condition that requires MFA authentication. The value is either the
serial number for a hardware device (such as GAHT12345678) or an
Amazon Resource Name (ARN) for a virtual device (such as
*arn:aws:iam::123456789012:mfa/user*).
* "role_session_name" - The name applied to this assume-role session.
This value affects the assumed role user ARN (such as
*arn:aws:sts::123456789012:assumed-
role/role_name/role_session_name*). This maps to the RoleSessionName
parameter in the AssumeRole operation. This is an optional
parameter. If you do not provide this value, a session name will be
automatically generated.
* "duration_seconds" - The length of time in seconds of the role
session.
If MFA authentication is not enabled then you only need to specify a
"role_arn" and a "source_profile".
When you specify a profile that has an IAM role configuration, Boto3
will make an "AssumeRole" call to retrieve temporary credentials.
Subsequent Boto3 API calls will use the cached temporary credentials
until they expire, in which case Boto3 will then automatically refresh
the credentials.
Please note that Boto3 does not write these temporary credentials to
disk. This means that temporary credentials from the "AssumeRole"
calls are only cached in-memory within a single session. All clients
created from that session will share the same temporary credentials.
If you specify "mfa_serial", then the first time an "AssumeRole" call
is made, you will be prompted to enter the MFA code. **Program
execution will block until you enter the MFA code.** You'll need to
keep this in mind if you have an "mfa_serial" device configured, but
would like to use Boto3 in an automated script.
Below is an example configuration for the minimal amount of
configuration needed to configure an assume role profile:
# In ~/.aws/credentials:
[development]
aws_access_key_id=foo
aws_secret_access_key=bar
# In ~/.aws/config
[profile crossaccount]
role_arn=arn:aws:iam:...
source_profile=development
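Given the profiles above, a minimal usage sketch (assuming the
"crossaccount" profile exists in your own configuration) looks like
this:
import boto3

# Boto3 makes the AssumeRole call automatically when this profile is used
session = boto3.Session(profile_name='crossaccount')
s3 = session.client('s3')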
See Using IAM Roles for general information on IAM roles.
Assume Role With Web Identity Provider
======================================
Within the "~/.aws/config" file, you can also configure a profile to
indicate that Boto3 should assume a role. When you do this, Boto3 will
automatically make the corresponding "AssumeRoleWithWebIdentity" calls
to AWS STS on your behalf. It will handle in-memory caching as well as
refreshing credentials, as needed.
You can specify the following configuration values for configuring an
IAM role in Boto3:
* "role_arn" - The ARN of the role you want to assume.
* "web_identity_token_file" - The path to a file which contains an
OAuth 2.0 access token or OpenID Connect ID token that is provided
by the identity provider. The contents of this file will be loaded
and passed as the "WebIdentityToken" argument to the
"AssumeRoleWithWebIdentity" operation.
* "role_session_name" - The name applied to this assume-role session.
This value affects the assumed role user ARN (such as
*arn:aws:sts::123456789012:assumed-
role/role_name/role_session_name*). This maps to the
"RoleSessionName" parameter in the "AssumeRoleWithWebIdentity"
operation. This is an optional parameter. If you do not provide this
value, a session name will be automatically generated.
Below is an example configuration for the minimal amount of
configuration needed to configure an assume role with web identity
profile:
# In ~/.aws/config
[profile web-identity]
role_arn=arn:aws:iam:...
web_identity_token_file=/path/to/a/token
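As with the assume role provider, a minimal usage sketch (assuming the
"web-identity" profile above and a valid token file) would be:
import boto3

# Boto3 calls AssumeRoleWithWebIdentity automatically for this profile
session = boto3.Session(profile_name='web-identity')
s3 = session.client('s3')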
This provider can also be configured via environment variables:
* "AWS_ROLE_ARN" - The ARN of the role you want to assume.
* "AWS_WEB_IDENTITY_TOKEN_FILE" - The path to the web identity token
file.
* "AWS_ROLE_SESSION_NAME" - The name applied to this assume-role
session.
Note:
These environment variables currently only apply to the assume role
with web identity provider and do not apply to the general assume
role provider configuration.
AWS IAM Identity Center
=======================
Support for the AWS IAM Identity Center (successor to AWS Single Sign-
On) credential provider was added in Boto3 version 1.14.0. IAM Identity
Center provides support for single sign-on (SSO) credentials.
To begin using the IAM Identity Center credential provider, start by
using the AWS CLI (v2) to configure and manage your SSO profiles and
login sessions. For detailed instructions on the configuration and
login process see the AWS CLI User Guide for SSO. Once completed you
will have one or many profiles in the shared configuration file with
the following settings:
# In ~/.aws/config
[profile my-sso-profile]
sso_start_url = https://my-sso-portal.awsapps.com/start
sso_region = us-east-1
sso_account_id = 123456789011
sso_role_name = readOnly
* "sso_start_url" - The URL that points to the organization's IAM
Identity Center user portal.
* "sso_region" - The AWS Region that contains the IAM Identity Center
portal host. This is separate from the default AWS CLI Region
parameter, and can also be a different Region.
* "sso_account_id" - The AWS account ID that contains the IAM role
that you want to use with this profile.
* "sso_role_name" - The name of the IAM role that defines the user's
permissions when using this profile.
You can then specify the profile name via the "AWS_PROFILE"
environment variable or the "profile_name" argument when creating a
"Session". For example, we can create a Session using the "my-sso-
profile" profile and any clients created from this session will use
the "my-sso-profile" credentials:
import boto3
session = boto3.Session(profile_name='my-sso-profile')
s3_client = session.client('s3')
Shared credentials file
=======================
The shared credentials file has a default location of
"~/.aws/credentials". You can change the location of the shared
credentials file by setting the "AWS_SHARED_CREDENTIALS_FILE"
environment variable.
This file is an INI formatted file with section names corresponding to
profiles. Within each section, the three configuration variables shown
above can be specified: "aws_access_key_id", "aws_secret_access_key",
"aws_session_token". *These are the only supported values in the
shared credential file.*
Below is a minimal example of the shared credentials file:
[default]
aws_access_key_id=foo
aws_secret_access_key=bar
aws_session_token=baz
The shared credentials file also supports the concept of profiles.
Profiles represent logical groups of configuration. The shared
credential file can have multiple profiles:
[default]
aws_access_key_id=foo
aws_secret_access_key=bar
[dev]
aws_access_key_id=foo2
aws_secret_access_key=bar2
[prod]
aws_access_key_id=foo3
aws_secret_access_key=bar3
You can then specify a profile name via the "AWS_PROFILE" environment
variable or the "profile_name" argument when creating a "Session". For
example, we can create a Session using the “dev” profile and any
clients created from this session will use the “dev” credentials:
import boto3
session = boto3.Session(profile_name='dev')
dev_s3_client = session.client('s3')
AWS config file
===============
Boto3 can also load credentials from "~/.aws/config". You can change
this default location by setting the "AWS_CONFIG_FILE" environment
variable. The config file is an INI format, with the same keys
supported by the shared credentials file. The only difference is that
profile sections *must* have the format of "[profile profile-name]",
except for the default profile:
[default]
aws_access_key_id=foo
aws_secret_access_key=bar
[profile dev]
aws_access_key_id=foo2
aws_secret_access_key=bar2
[profile prod]
aws_access_key_id=foo3
aws_secret_access_key=bar3
The reason that section names must start with profile in the
"~/.aws/config" file is because there are other sections in this file
that are permitted that aren't profile configurations.
Boto2 configuration file support
================================
Boto3 will attempt to load credentials from the Boto2 config file. It
first checks the file pointed to by "BOTO_CONFIG" if set, otherwise it
will check "/etc/boto.cfg" and "~/.boto". Note that only the
"[Credentials]" section of the boto config file is used. All other
configuration data in the boto config file is ignored.
# Example ~/.boto file
[Credentials]
aws_access_key_id = foo
aws_secret_access_key = bar
Note:
This credential provider is primarily for backwards compatibility
purposes with Boto2.
Container credential provider
=============================
If you are using Amazon Elastic Container Service (Amazon ECS) or
Amazon Elastic Kubernetes Service (Amazon EKS), you can obtain
credentials by specifying an HTTP endpoint as an environment variable.
The SDK will request credentials from the specified endpoint. For
more information, see Container credential provider in the Amazon SDKs
and Tools Reference Guide.
IAM roles
=========
If you are running on Amazon EC2 and no credentials have been found by
any of the providers above, Boto3 will try to load credentials from
the instance metadata service. In order to take advantage of this
feature, you must have specified an IAM role to use when you launched
your EC2 instance.
For more information on how to configure IAM roles on EC2 instances,
see the IAM Roles for Amazon EC2 guide.
Note that if you've launched an EC2 instance with an IAM role
configured, there's no explicit configuration you need to set in Boto3
to use these credentials. Boto3 will automatically use IAM role
credentials if it does not find credentials in any of the other places
listed previously.
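For example, a script run on such an instance needs no credential
configuration at all (the Region shown is an assumption):
import boto3

# No credentials are configured here; on an EC2 instance with an IAM
# role, the instance metadata provider supplies them automatically.
s3 = boto3.client('s3', region_name='us-east-1')
print(s3.list_buckets()['Buckets'])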
Best practices for configuring credentials
==========================================
If you're running on an EC2 instance, use AWS IAM roles. See the IAM
Roles for Amazon EC2 guide for more information on how to set this up.
If you want to interoperate with multiple AWS SDKs (e.g., Java,
JavaScript, Ruby, PHP, .NET, AWS CLI, Go, C++), use the shared
credentials file ("~/.aws/credentials"). By using the shared
credentials file, you can use a single file for credentials that will
work in all AWS SDKs.
Using queues in Amazon SQS
**************************
This Python example shows you how to:
* Get a list of all of your message queues
* Obtain the URL for a particular queue
* Create and delete queues
The scenario
============
In this example, Python code is used to work with queues. The code
uses the AWS SDK for Python to work with queues using these methods of
the SQS client class:
* list_queues.
* create_queue.
* get_queue_url.
* delete_queue.
For more information about Amazon SQS messages, see How Queues Work in
the *Amazon Simple Queue Service Developer Guide*.
List your queues
================
The example below shows how to:
* List queues using list_queues.
Example
-------
import boto3
# Create SQS client
sqs = boto3.client('sqs')
# List SQS queues
response = sqs.list_queues()
print(response['QueueUrls'])
Create a queue
==============
The example below shows how to:
* Create a queue using create_queue.
Example
-------
import boto3
# Create SQS client
sqs = boto3.client('sqs')
# Create a SQS queue
response = sqs.create_queue(
QueueName='SQS_QUEUE_NAME',
Attributes={
'DelaySeconds': '60',
'MessageRetentionPeriod': '86400'
}
)
print(response['QueueUrl'])
Get the URL for a queue
=======================
The example below shows how to:
* Get the URL for a queue using get_queue_url.
Example
-------
import boto3
# Create SQS client
sqs = boto3.client('sqs')
# Get URL for SQS queue
response = sqs.get_queue_url(QueueName='SQS_QUEUE_NAME')
print(response['QueueUrl'])
Delete a queue
==============
The example below shows how to:
* Delete a queue using delete_queue.
Example
-------
import boto3
# Create SQS client
sqs = boto3.client('sqs')
# Delete SQS queue
sqs.delete_queue(QueueUrl='SQS_QUEUE_URL')
AWS Identity and Access Management examples
*******************************************
AWS Identity and Access Management (IAM) is a web service that enables
Amazon Web Services (AWS) customers to manage users and user
permissions in AWS. The service is targeted at organizations with
multiple users or systems in the cloud that use AWS products such as
Amazon EC2, Amazon SimpleDB, and the AWS Management Console. With IAM,
you can centrally manage users, security credentials such as access
keys, and permissions that control which AWS resources users can
access.
You can use the following examples to access AWS Identity and Access
Management (IAM) using the Amazon Web Services (AWS) SDK for Python.
For more information about IAM, see the IAM documentation.
**Examples**
* Managing IAM users
* Working with IAM policies
* Managing IAM access keys
* Working with IAM server certificates
* Managing IAM account aliases
Paginators
**********
Some AWS operations return results that are incomplete and require
subsequent requests in order to obtain the entire result set. The
process of sending subsequent requests to continue where a previous
request left off is called *pagination*. For example, the
"list_objects" operation of Amazon S3 returns up to 1000 objects at a
time, and you must send subsequent requests with the appropriate
"Marker" in order to retrieve the next *page* of results.
*Paginators* are a feature of boto3 that act as an abstraction over
the process of iterating over an entire result set of a truncated API
operation.
Creating paginators
===================
Paginators are created via the "get_paginator()" method of a boto3
client. The "get_paginator()" method accepts an operation name and
returns a reusable "Paginator" object. You then call the "paginate"
method of the Paginator, passing in any relevant operation parameters
to apply to the underlying API operation. The "paginate" method then
returns an iterable "PageIterator":
import boto3
# Create a client
client = boto3.client('s3', region_name='us-west-2')
# Create a reusable Paginator
paginator = client.get_paginator('list_objects_v2')
# Create a PageIterator from the Paginator
page_iterator = paginator.paginate(Bucket='amzn-s3-demo-bucket')
for page in page_iterator:
    print(page['Contents'])
Customizing page iterators
--------------------------
You must call the "paginate" method of a Paginator in order to iterate
over the pages of API operation results. The "paginate" method accepts
a "PaginationConfig" named argument that can be used to customize the
pagination:
paginator = client.get_paginator('list_objects_v2')
page_iterator = paginator.paginate(Bucket='amzn-s3-demo-bucket',
PaginationConfig={'MaxItems': 10})
"MaxItems"
Limits the maximum number of total items returned while
paginating.
"StartingToken"
Can be used to modify the starting marker or token of a paginator.
This argument is useful for resuming pagination from a previous
token or starting pagination at a known position; a short sketch
follows this list.
"PageSize"
Controls the number of items returned per page of each result.
Note:
Services may choose to return more or fewer items than specified
in the "PageSize" argument depending on the service, the
operation, or the resource you are paginating.
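As an illustrative sketch of resuming pagination with "StartingToken"
(the bucket name is a placeholder), the "resume_token" attribute of a
"PageIterator" is populated when "MaxItems" stops pagination early:
import boto3

client = boto3.client('s3', region_name='us-west-2')
paginator = client.get_paginator('list_objects_v2')

# First pass: stop after roughly ten items
first_pass = paginator.paginate(
    Bucket='amzn-s3-demo-bucket',
    PaginationConfig={'MaxItems': 10})
for page in first_pass:
    print(page.get('Contents', []))

# Second pass: resume from where the first pass stopped
second_pass = paginator.paginate(
    Bucket='amzn-s3-demo-bucket',
    PaginationConfig={'StartingToken': first_pass.resume_token})
for page in second_pass:
    print(page.get('Contents', []))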
Filtering results
=================
Many Paginators can be filtered server-side with options that are
passed through to each underlying API call. For example,
"S3.Paginator.list_objects.paginate()" accepts a "Prefix" parameter
used to filter the paginated results by prefix server-side before
sending them to the client:
import boto3
client = boto3.client('s3', region_name='us-west-2')
paginator = client.get_paginator('list_objects_v2')
operation_parameters = {'Bucket': 'amzn-s3-demo-bucket',
'Prefix': 'foo/baz'}
page_iterator = paginator.paginate(**operation_parameters)
for page in page_iterator:
    print(page['Contents'])
Filtering results with JMESPath
-------------------------------
JMESPath is a query language for JSON that can be used directly on
paginated results. You can filter results client-side using JMESPath
expressions that are applied to each page of results through the
"search" method of a "PageIterator".
import boto3
client = boto3.client('s3', region_name='us-west-2')
paginator = client.get_paginator('list_objects_v2')
page_iterator = paginator.paginate(Bucket='amzn-s3-demo-bucket')
filtered_iterator = page_iterator.search("Contents[?Size > `100`][]")
for key_data in filtered_iterator:
    print(key_data)
When filtering with JMESPath expressions, each page of results that is
yielded by the paginator is mapped through the JMESPath expression. If
a JMESPath expression returns a single value that is not an array,
that value is yielded directly. If the result of applying the JMESPath
expression to a page of results is a list, then each value of the list
is yielded individually (essentially implementing a flat map). For
example, in the above expression, each key that has a "Size" greater
than *100* is yielded by the "filtered_iterator".
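For instance, a projection that yields only the object keys (again
using a placeholder bucket name) would be:
import boto3

client = boto3.client('s3', region_name='us-west-2')
paginator = client.get_paginator('list_objects_v2')
page_iterator = paginator.paginate(Bucket='amzn-s3-demo-bucket')

# "Contents[].Key" flattens each page into individual key strings
for key in page_iterator.search("Contents[].Key"):
    print(key)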
Session
*******
Overview
========
A session manages state about a particular configuration. By default,
a session is created for you when needed. However, it's possible, and
in some scenarios recommended, to maintain your own session.
Sessions typically store the following:
* Credentials
* AWS Region
* Other configurations related to your profile
Default session
===============
Boto3 acts as a proxy to the default session. This is created
automatically when you create a low-level client or resource client:
import boto3
# Using the default session
sqs = boto3.client('sqs')
s3 = boto3.resource('s3')
Custom session
==============
You can also manage your own session and create low-level clients or
resource clients from it:
import boto3
import boto3.session
# Create your own session
my_session = boto3.session.Session()
# Now we can create low-level clients or resource clients from our custom session
sqs = my_session.client('sqs')
s3 = my_session.resource('s3')
Session configurations
======================
You can configure each session with specific credentials, AWS Region
information, or profiles. The most common configurations you might use
are:
* "aws_access_key_id" - A specific AWS access key ID.
* "aws_secret_access_key" - A specific AWS secret access key.
* "region_name" - The AWS Region where you want to create new
connections.
* "profile_name" - The profile to use when creating your session.
Note:
Only set the "profile_name" parameter when a specific profile is
required for your session. To use the default profile, don’t set the
"profile_name" parameter at all. If the "profile_name" parameter
isn't set *and* there is no default profile, an empty config
dictionary will be used. For a detailed list of per-session
configurations, see the Session core reference.
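Putting these together, here is a short sketch of a session configured
with a Region and a profile; the profile name is an assumption and must
exist in your own configuration:
import boto3

session = boto3.Session(profile_name='dev', region_name='us-west-2')
ec2 = session.client('ec2')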
Multithreading or multiprocessing with sessions
===============================================
Similar to "Resource" objects, "Session" objects are not thread safe
and should not be shared across threads and processes. It's
recommended to create a new "Session" object for each thread or
process:
import boto3
import boto3.session
import threading
class MyTask(threading.Thread):
    def run(self):
        # Here we create a new session per thread
        session = boto3.session.Session()

        # Next, we create a resource client using our thread's session object
        s3 = session.resource('s3')

        # Put your thread-safe code here
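As a hypothetical usage of the class above, each worker thread creates
its own session inside "run()":
# Start a few worker threads, each with its own Session
tasks = [MyTask() for _ in range(3)]
for task in tasks:
    task.start()
for task in tasks:
    task.join()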
Amazon SQS examples
*******************
The code examples in this section demonstrate using the Amazon Web
Services (AWS) SDK for Python to call the Amazon Simple Queue Service
(Amazon SQS). For more information about Amazon SQS, see the Amazon
SQS documentation.
Each code example requires that your AWS credentials have been
configured as described in Quickstart. Some examples require
additional prerequisites which are described in the example's section.
The source files for these and other code examples are available in
the AWS Code Catalog on GitHub.
**Examples**
* Using queues in Amazon SQS
* Sending and receiving messages in Amazon SQS
* Managing visibility timeout in Amazon SQS
* Enabling long polling in Amazon SQS
* Using dead-letter queues in Amazon SQS
Using alarm actions in Amazon CloudWatch
****************************************
This Python example shows you how to:
* Create a CloudWatch alarm and enable actions
* Disable a CloudWatch alarm action
The scenario
============
Using alarm actions, you can create alarms that automatically stop,
terminate, reboot, or recover your Amazon EC2 instances. You can use
the stop or terminate actions when you no longer need an EC2 instance
to be running. You can use the reboot and recover actions to
automatically reboot those instances or recover them onto new hardware
if a system impairment occurs.
In this example, Python code is used to define an alarm action in
CloudWatch that triggers the reboot of an Amazon EC2 instance. The
code uses the AWS SDK for Python to manage Amazon EC2 instances using
these methods of the CloudWatch client class:
* put_metric_alarm.
* disable_alarm_actions.
For more information about CloudWatch alarm actions, see Create Alarms
to Stop, Terminate, Reboot, or Recover an Instance in the *Amazon
CloudWatch User Guide*.
All the example code for the Amazon Web Services (AWS) SDK for Python
is available here on GitHub.
Prerequisite tasks
==================
* Configure your AWS credentials, as described in Quickstart.
* Create an IAM role whose policy grants permission to describe,
reboot, stop, or terminate an Amazon EC2 instance. For more
information about creating an IAM role, see Creating a Role to
Delegate Permissions to an AWS Service in the *IAM User Guide*.
Use the following role policy when creating the IAM role.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "cloudwatch:Describe*",
                "ec2:Describe*",
                "ec2:RebootInstances",
                "ec2:StopInstances*",
                "ec2:TerminateInstances"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}
Create and enable actions on an alarm
=====================================
Create or update an alarm and associate it with the specified metric.
Optionally, this operation can associate one or more Amazon SNS
resources with the alarm.
When this operation creates an alarm, the alarm state is immediately
set to "INSUFFICIENT_DATA". The alarm is evaluated and its state is
set appropriately. Any actions associated with the state are then
executed.
When you update an existing alarm, its state is left unchanged, but
the update completely overwrites the previous configuration of the
alarm.
The example below shows how to:
* Create an alarm and enable actions using put_metric_alarm.
Example
-------
import boto3
# Create CloudWatch client
cloudwatch = boto3.client('cloudwatch')
# Create alarm with actions enabled
cloudwatch.put_metric_alarm(
AlarmName='Web_Server_CPU_Utilization',
ComparisonOperator='GreaterThanThreshold',
EvaluationPeriods=1,
MetricName='CPUUtilization',
Namespace='AWS/EC2',
Period=60,
Statistic='Average',
Threshold=70.0,
ActionsEnabled=True,
AlarmActions=[
'arn:aws:swf:us-west-2:{CUSTOMER_ACCOUNT}:action/actions/AWS_EC2.InstanceId.Reboot/1.0'
],
AlarmDescription='Alarm when server CPU exceeds 70%',
Dimensions=[
{
'Name': 'InstanceId',
'Value': 'INSTANCE_ID'
},
],
Unit='Seconds'
)
Disable actions on an alarm
===========================
Disable the actions for the specified alarms. When an alarm's actions
are disabled, the alarm actions do not execute when the alarm state
changes.
The example below shows how to:
* Disable metric alarm actions using disable_alarm_actions.
Example
-------
import boto3
# Create CloudWatch client
cloudwatch = boto3.client('cloudwatch')
# Disable alarm
cloudwatch.disable_alarm_actions(
AlarmNames=['Web_Server_CPU_Utilization'],
)
Presigned URLs
**************
A user who does not have AWS credentials or permission to access an S3
object can be granted temporary access by using a presigned URL.
A presigned URL is generated by an AWS user who has access to the
object. The generated URL is then given to the unauthorized user. The
presigned URL can be entered in a browser or used by a program or HTML
webpage. The credentials used by the presigned URL are those of the
AWS user who generated the URL.
A presigned URL remains valid for a limited period of time which is
specified when the URL is generated.
import logging
import boto3
from botocore.exceptions import ClientError
def create_presigned_url(bucket_name, object_name, expiration=3600):
    """Generate a presigned URL to share an S3 object

    :param bucket_name: string
    :param object_name: string
    :param expiration: Time in seconds for the presigned URL to remain valid
    :return: Presigned URL as string. If error, returns None.
    """

    # Generate a presigned URL for the S3 object
    s3_client = boto3.client('s3')
    try:
        response = s3_client.generate_presigned_url(
            'get_object',
            Params={'Bucket': bucket_name, 'Key': object_name},
            ExpiresIn=expiration,
        )
    except ClientError as e:
        logging.error(e)
        return None

    # The response contains the presigned URL
    return response
The user can download the S3 object by entering the presigned URL in a
browser. A program or HTML page can download the S3 object by using
the presigned URL as part of an HTTP GET request.
The following code demonstrates using the Python "requests" package to
perform a GET request.
import requests # To install: pip install requests
url = create_presigned_url('amzn-s3-demo-bucket', 'OBJECT_NAME')
if url is not None:
    response = requests.get(url)
Using presigned URLs to perform other S3 operations
===================================================
The main purpose of presigned URLs is to grant a user temporary access
to an S3 object. However, presigned URLs can be used to grant
permission to perform additional operations on S3 buckets and objects.
The "create_presigned_url_expanded" method shown below generates a
presigned URL to perform a specified S3 operation. The method accepts
the name of the S3 "Client" method to perform, such as 'list_buckets'
or 'get_bucket_location.' The parameters to pass to the method are
specified in the "method_parameters" dictionary argument. The HTTP
method to use (GET, PUT, etc.) can be specified, but the AWS SDK for
Python will automatically select the appropriate method so this
argument is not normally required.
import logging
import boto3
from botocore.exceptions import ClientError
def create_presigned_url_expanded(
    client_method_name, method_parameters=None, expiration=3600, http_method=None
):
    """Generate a presigned URL to invoke an S3.Client method

    Not all the client methods provided in the AWS Python SDK are supported.

    :param client_method_name: Name of the S3.Client method, e.g., 'list_buckets'
    :param method_parameters: Dictionary of parameters to send to the method
    :param expiration: Time in seconds for the presigned URL to remain valid
    :param http_method: HTTP method to use (GET, etc.)
    :return: Presigned URL as string. If error, returns None.
    """

    # Generate a presigned URL for the S3 client method
    s3_client = boto3.client('s3')
    try:
        response = s3_client.generate_presigned_url(
            ClientMethod=client_method_name,
            Params=method_parameters,
            ExpiresIn=expiration,
            HttpMethod=http_method,
        )
    except ClientError as e:
        logging.error(e)
        return None

    # The response contains the presigned URL
    return response
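A hypothetical call to the helper above might presign a "list_buckets"
request, which requires no parameters:
url = create_presigned_url_expanded('list_buckets')
if url is not None:
    print(url)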
Generating a presigned URL to upload a file
===========================================
A user who does not have AWS credentials to upload a file can use a
presigned URL to perform the upload. The upload operation makes an
HTTP POST request and requires additional parameters to be sent as
part of the request.
import logging
import boto3
from botocore.exceptions import ClientError
def create_presigned_post(
    bucket_name, object_name, fields=None, conditions=None, expiration=3600
):
    """Generate a presigned URL S3 POST request to upload a file

    :param bucket_name: string
    :param object_name: string
    :param fields: Dictionary of prefilled form fields
    :param conditions: List of conditions to include in the policy
    :param expiration: Time in seconds for the presigned URL to remain valid
    :return: Dictionary with the following keys:
        url: URL to post to
        fields: Dictionary of form fields and values to submit with the POST
    :return: None if error.
    """

    # Generate a presigned S3 POST URL
    s3_client = boto3.client('s3')
    try:
        response = s3_client.generate_presigned_post(
            bucket_name,
            object_name,
            Fields=fields,
            Conditions=conditions,
            ExpiresIn=expiration,
        )
    except ClientError as e:
        logging.error(e)
        return None

    # The response contains the presigned URL and required fields
    return response
The generated presigned URL includes both a URL and additional fields
that must be passed as part of the subsequent HTTP POST request.
The following code demonstrates how to use the "requests" package with
a presigned POST URL to perform a POST request to upload a file to S3.
import requests # To install: pip install requests
# Generate a presigned S3 POST URL
object_name = 'OBJECT_NAME'
response = create_presigned_post('amzn-s3-demo-bucket', object_name)
if response is None:
    exit(1)

# Demonstrate how another Python program can use the presigned URL to upload a file
with open(object_name, 'rb') as f:
    files = {'file': (object_name, f)}
    http_response = requests.post(response['url'], data=response['fields'], files=files)
# If successful, returns HTTP status code 204
logging.info(f'File upload HTTP status code: {http_response.status_code}')
The presigned POST URL and fields values can also be used in an HTML
page.
Managing email filters with SES API
***********************************
In addition to sending emails, you can also receive email with Amazon
Simple Email Service (SES). An IP address filter enables you to
optionally specify whether to accept or reject mail that originates
from an IP address or range of IP addresses. For more information, see
Managing IP Address Filters for Amazon SES Email Receiving.
The following examples show how to:
* Create an email filter using create_receipt_filter().
* List all email filters using list_receipt_filters().
* Remove an email filter using delete_receipt_filter().
Prerequisite tasks
==================
To set up and run this example, you must first complete these tasks:
* Configure your AWS credentials, as described in Quickstart.
Create an email filter
======================
To allow or block emails from a specific IP address, use the
CreateReceiptFilter operation. Provide the IP address or range of
addresses and a unique name to identify this filter.
Example
-------
import boto3
# Create SES client
ses = boto3.client('ses')
# Create receipt filter
response = ses.create_receipt_filter(
Filter = {
'Name' : 'NAME',
'IpFilter' : {
'Cidr' : 'IP_ADDRESS_OR_RANGE',
'Policy' : 'Allow'
}
}
)
print(response)
List all email filters
======================
To list the IP address filters associated with your AWS account in the
current AWS Region, use the ListReceiptFilters operation.
Example
-------
import boto3
# Create SES client
ses = boto3.client('ses')
response = ses.list_receipt_filters()
print(response)
Delete an email filter
======================
To remove an existing filter for a specific IP address use the
DeleteReceiptFilter operation. Provide the unique filter name to
identify the receipt filter to delete.
If you need to change the range of addresses that are filtered, you
can delete a receipt filter and create a new one.
Example
-------
import boto3
# Create SES client
ses = boto3.client('ses')
response = ses.delete_receipt_filter(
FilterName = 'NAME'
)
print(response)
Managing IAM account aliases
****************************
This Python example shows you how to manage aliases for your AWS
account ID.
The scenario
============
If you want the URL for your sign-in page to contain your company name
or other friendly identifier instead of your AWS account ID, you can
create an alias for your AWS account ID. If you create an AWS account
alias, your sign-in page URL changes to incorporate the alias.
In this example, Python code is used to create and manage IAM account
aliases. The code uses the AWS SDK for Python to manage IAM account
aliases using these methods of the IAM client class:
* create_account_alias.
* get_paginator('list_account_aliases').
* delete_account_alias.
For more information about IAM account aliases, see Your AWS Account
ID and Its Alias in the *IAM User Guide*.
All the example code for the Amazon Web Services (AWS) SDK for Python
is available here on GitHub.
Prerequisite tasks
==================
To set up and run this example, you must first configure your AWS
credentials, as described in Quickstart.
Create an account alias
=======================
Create an alias for your AWS account. For information about using an
AWS account alias, see Using an Alias for Your AWS Account ID in the
*IAM User Guide*.
The example below shows how to:
* Create an account alias using create_account_alias.
Example
-------
import boto3
# Create IAM client
iam = boto3.client('iam')
# Create an account alias
iam.create_account_alias(
AccountAlias='ALIAS'
)
List an account alias
=====================
List the account alias associated with the AWS account (Note: you can
have only one). For information about using an AWS account alias, see
Using an Alias for Your AWS Account ID in the *IAM User Guide*.
The example below shows how to:
* List account aliases using get_paginator('list_account_aliases').
For more information about paginators, see Paginators.
Example
-------
import boto3
# Create IAM client
iam = boto3.client('iam')
# List account aliases through the pagination interface
paginator = iam.get_paginator('list_account_aliases')
for response in paginator.paginate():
    print(response['AccountAliases'])
Delete an account alias
=======================
Delete the specified AWS account alias. For information about using an
AWS account alias, see Using an Alias for Your AWS Account ID in the
*IAM User Guide*.
The example below shows how to:
* Delete an account alias using delete_account_alias.
Example
-------
import boto3
# Create IAM client
iam = boto3.client('iam')
# Delete an account alias
iam.delete_account_alias(
AccountAlias='ALIAS'
)
Extensibility guide
*******************
All of Boto3's resource and client classes are generated at runtime.
This means that you cannot directly inherit and then extend the
functionality of these classes because they do not exist until the
program actually starts running.
However, it is still possible to extend the functionality of classes
through Boto3's event system.
An introduction to the event system
===================================
Boto3's event system allows users to register a function to a specific
event. Then once the running program reaches a line that emits that
specific event, Boto3 will call every function registered to the event
in the order in which they were registered.
When Boto3 calls each of these registered functions, it will call each
of them with a specific set of keyword arguments that are associated
with that event. Then once the registered function is called, the
function may modify the keyword arguments passed to that function or
return a value. Here is an example of how the event system works:
import boto3
s3 = boto3.client('s3')
# Access the event system on the S3 client
event_system = s3.meta.events
# Create a function
def add_my_bucket(params, **kwargs):
    # Add the name of the bucket you want to default to.
    if 'Bucket' not in params:
        params['Bucket'] = 'amzn-s3-demo-bucket'
# Register the function to an event
event_system.register('provide-client-params.s3.ListObjectsV2', add_my_bucket)
response = s3.list_objects_v2()
In this example, the handler "add_my_bucket" is registered such that
the handler will inject the value "'amzn-s3-demo-bucket'" for the
"Bucket" parameter whenever the "list_objects_v2" client call is made
without the "Bucket" parameter. Note that if the same
"list_objects_v2" call is made without the "Bucket" parameter and the
registered handler, it will result in a validation error.
Here are the takeaways from this example:
* All clients have their own event system that you can use to fire
events and register functions. You can access the event system
through the "meta.events" attribute on the client.
* All functions registered to the event system must have "**kwargs" in
the function signature. This is because emitting an event can have
any number of keyword arguments emitted alongside it, and so if your
function is called without "**kwargs", its signature will have to
match every keyword argument emitted by the event. This also allows
for more keyword arguments to be added to the emitted event in the
future without breaking existing handlers.
* To register a function to an event, call the "register" method on
the event system with the name of the event you want to register the
function to and the function handle. Note that if you register the
event after the event is emitted, the function will not be called
unless the event is emitted again. In the example, the
"add_my_bucket" handler was registered to the "'provide-client-
params.s3.ListObjectsV2'" event, which is an event that can be used
to inject and modify parameters passed in by the client method. To
read more about the event, refer to *provide-client-params*.
A hierarchical structure
========================
The event system also provides a hierarchy for registering events such
that you can register a function to a set of events depending on the
event name hierarchy.
An event name can have its own hierarchy by specifying "." in its
name. For example, take the event name
"'general.specific.more_specific'". When this event is emitted, the
registered functions will be called in the order from most specific to
least specific registration. So in this example, the functions will be
called in the following order:
1. Functions registered to "'general.specific.more_specific'"
2. Functions registered to "'general.specific'"
3. Functions registered to "'general'"
Here is a deeper example of how the event system works with respect to
its hierarchical structure:
import boto3
s3 = boto3.client('s3')
# Access the event system on the S3 client
event_system = s3.meta.events
def add_my_general_bucket(params, **kwargs):
    if 'Bucket' not in params:
        params['Bucket'] = 'amzn-s3-demo-bucket1'

def add_my_specific_bucket(params, **kwargs):
    if 'Bucket' not in params:
        params['Bucket'] = 'amzn-s3-demo-bucket2'
event_system.register('provide-client-params.s3', add_my_general_bucket)
event_system.register('provide-client-params.s3.ListObjectsV2', add_my_specific_bucket)
list_obj_response = s3.list_objects_v2()
put_obj_response = s3.put_object(Key='mykey', Body=b'my body')
In this example, the "list_objects_v2" method call will use the
"'amzn-s3-demo-bucket2'" for the bucket instead of "'amzn-s3-demo-
bucket1'" because the "add_my_specific_bucket" method was registered
to the "'provide-client-params.s3.ListObjectsV2'" event which is more
specific than the "'provide-client-params.s3'" event. Thus, the
"add_my_specific_bucket" function is called before the
"add_my_general_bucket" function is called when the event is emitted.
However for the "put_object" call, the bucket used is "'amzn-s3-demo-
bucket1'". This is because the event emitted for the "put_object"
client call is "'provide-client-params.s3.PutObject'" and the
"add_my_general_bucket" method is called via its registration to
"'provide-client-params.s3'". The "'provide-client-
params.s3.ListObjectsV2'" event is never emitted so the registered
"add_my_specific_bucket" function is never called.
Wildcard matching
=================
Another aspect of Boto3's event system is that it has the capability
to do wildcard matching using the "'*'" notation. Here is an example
of using wildcards in the event system:
import boto3
s3 = boto3.client('s3')
# Access the event system on the S3 client
event_system = s3.meta.events
def add_my_wildcard_bucket(params, **kwargs):
    if 'Bucket' not in params:
        params['Bucket'] = 'amzn-s3-demo-bucket'
event_system.register('provide-client-params.s3.*', add_my_wildcard_bucket)
response = s3.list_objects_v2()
The "'*'" allows you to register to a group of events without having
to know the actual name of the event. This is useful when you have to
apply the same handler in multiple places. Also note that if the
wildcard is used, it must be isolated. It does not handle globbing
with additional characters. So in the previous example, if the
"my_wildcard_function" was registered to "'provide-client-
params.s3.*objects'", the handler would not be called because it will
consider "'provide-client-params.s3.*objects'" to be a specific event.
The wildcard also respects the hierarchical structure of the event
system. If another handler was registered to the "'provide-client-
params.s3'" event, the "add_my_wildcard_bucket" would be called first
because it is registered to "'provide-client-params.s3.*'" which is
more specific than the event "'provide-client-params.s3'".
Isolation of event systems
==========================
The event system in Boto3 has the notion of isolation: all clients
maintain their own set of registered handlers. For example if a
handler is registered to one client's event system, it will not be
registered to another client's event system:
import boto3
client1 = boto3.client('s3')
client2 = boto3.client('s3')
def add_my_bucket(params, **kwargs):
    if 'Bucket' not in params:
        params['Bucket'] = 'amzn-s3-demo-bucket1'

def add_my_other_bucket(params, **kwargs):
    if 'Bucket' not in params:
        params['Bucket'] = 'amzn-s3-demo-bucket2'
client1.meta.events.register(
'provide-client-params.s3.ListObjectsV2', add_my_bucket)
client2.meta.events.register(
'provide-client-params.s3.ListObjectsV2', add_my_other_bucket)
client1_response = client1.list_objects_v2()
client2_response = client2.list_objects_v2()
Thanks to the isolation of clients' event systems, "client1" will
inject "'amzn-s3-demo-bucket1'" for its "list_objects_v2" method call
while "client2" will inject "'amzn-s3-demo-bucket2'" for its
"list_objects_v2" method call because "add_my_bucket" was registered
to "client1" while "add_my_other_bucket" was registered to "client2".
Boto3 specific events
=====================
Boto3 emits a set of events that users can register to customize
clients or resources and modify the behavior of method calls.
Here is a table of events that users of Boto3 can register handlers
to. More information about each event can be found in the
corresponding sections below:
Note:
Events with a "*" in their order number are conditionally emitted
while all others are always emitted. An explanation of all 3
conditional events is provided below.
* "2 *" - "creating-resource-class" is emitted ONLY when using a
service resource.
* "8 *" - "after-call" is emitted once the API response is received.
* "9 *" - "after-call-error" is emitted when an unsuccessful API
response is received.
+-------------------------------+---------+------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Event Name | Order | Emit Location |
|===============================|=========|============================================================================================================================================================|
| "creating-client-class" | 1 | Location |
+-------------------------------+---------+------------------------------------------------------------------------------------------------------------------------------------------------------------+
| "creating-resource-class" | 2 * | Location |
+-------------------------------+---------+------------------------------------------------------------------------------------------------------------------------------------------------------------+
| "provide-client-params" | 3 | Location |
+-------------------------------+---------+------------------------------------------------------------------------------------------------------------------------------------------------------------+
| "before-call" | 4 | Location |
+-------------------------------+---------+------------------------------------------------------------------------------------------------------------------------------------------------------------+
| "request-created" | 5 | Location |
+-------------------------------+---------+------------------------------------------------------------------------------------------------------------------------------------------------------------+
| "before-send" | 6 | Location |
+-------------------------------+---------+------------------------------------------------------------------------------------------------------------------------------------------------------------+
| "needs-retry" | 7 | Location |
+-------------------------------+---------+------------------------------------------------------------------------------------------------------------------------------------------------------------+
| "after-call" | 8 * | Location |
+-------------------------------+---------+------------------------------------------------------------------------------------------------------------------------------------------------------------+
| "after-call-error" | 9 * | Location |
+-------------------------------+---------+------------------------------------------------------------------------------------------------------------------------------------------------------------+
Note:
If any of the following keywords are included in an event's full
name, you'll need to replace it with the corresponding value:
* "service-name" - The value used to instantiate a client as in
"boto3.client('service-name')".
* "operation-name" - The underlying API operation name of the
corresponding client method. To access the operation API name,
retrieve the value from the "client.meta.method_to_api_mapping"
dictionary using the name of the desired client method as the key (a
short lookup example follows this list).
* "resource-name" - The name of the resource class such as
"ServiceResource".
*creating-client-class*
-----------------------
Full Event Name:
"'creating-client-class.service-name'"
Description:
This event is emitted upon creation of the client class for a
service. The client class for a service is not created until the
first instantiation of the client class. Use this event for adding
methods to the client class or adding classes for the client class
to inherit from.
Keyword Arguments Emitted:
type class_attributes:
"dict"
param class_attributes:
A dictionary where the keys are the names of the attributes of
the class and the values are the actual attributes of the class.
type base_classes:
"list"
param base_classes:
A list of classes that the client class will inherit from where
the order of inheritance is the same as the order of the list.
Expected Return Value:
"None"
Example:
Here is an example of how to add a method to the client class:
from boto3.session import Session
def custom_method(self):
    print('This is my custom method')

def add_custom_method(class_attributes, **kwargs):
    class_attributes['my_method'] = custom_method
session = Session()
session.events.register('creating-client-class.s3', add_custom_method)
client = session.client('s3')
client.my_method()
This should output:
This is my custom method
Here is an example of how to add a new class for the client class
to inherit from:
from boto3.session import Session
class MyClass(object):
    def __init__(self, *args, **kwargs):
        super(MyClass, self).__init__(*args, **kwargs)
        print('Client instantiated!')

def add_custom_class(base_classes, **kwargs):
    base_classes.insert(0, MyClass)
session = Session()
session.events.register('creating-client-class.s3', add_custom_class)
client = session.client('s3')
This should output:
Client instantiated!
*creating-resource-class*
-------------------------
Full Event Name:
"'creating-resource-class.service-name.resource-name'"
Description:
This event is emitted upon creation of the resource class. The
resource class is not created until the first instantiation of the
resource class. Use this event for adding methods to the resource
class or adding classes for the resource class to inherit from.
Keyword Arguments Emitted:
type class_attributes:
"dict"
param class_attributes:
A dictionary where the keys are the names of the attributes of
the class and the values are the actual attributes of the class.
type base_classes:
"list"
param base_classes:
A list of classes that the resource class will inherit from
where the order of inheritance is the same as the order of the
list.
Expected Return Value:
"None"
Example:
Here is an example of how to add a method to a resource class:
from boto3.session import Session
def custom_method(self):
    print('This is my custom method')

def add_custom_method(class_attributes, **kwargs):
    class_attributes['my_method'] = custom_method
session = Session()
session.events.register('creating-resource-class.s3.ServiceResource',
add_custom_method)
resource = session.resource('s3')
resource.my_method()
This should output:
This is my custom method
Here is an example of how to add a new class for a resource class
to inherit from:
from boto3.session import Session
class MyClass(object):
    def __init__(self, *args, **kwargs):
        super(MyClass, self).__init__(*args, **kwargs)
        print('Resource instantiated!')

def add_custom_class(base_classes, **kwargs):
    base_classes.insert(0, MyClass)
session = Session()
session.events.register('creating-resource-class.s3.ServiceResource',
add_custom_class)
resource = session.resource('s3')
This should output:
Resource instantiated!
*provide-client-params*
-----------------------
Full Event Name:
"'provide-client-params.service-name.operation-name'"
Description:
This event is emitted before operation parameters are validated and
built into the HTTP request that will be sent over the wire. Use
this event to inject or modify parameters.
Keyword Arguments Emitted:
type params:
"dict"
param params:
A dictionary containing key value pairs consisting of the
parameters passed through to the client method.
type model:
"botocore.model.OperationModel"
param model:
A model representing the underlying API operation of the client
method.
Expected Return Value:
"None" or return a "dict" containing parameters to use when making
the request.
Example:
Here is an example of how to inject a parameter using the event:
import boto3
s3 = boto3.client('s3')
# Access the event system on the S3 client
event_system = s3.meta.events
# Create a function
def add_my_bucket(params, **kwargs):
    # Add the name of the bucket you want to default to.
    if 'Bucket' not in params:
        params['Bucket'] = 'amzn-s3-demo-bucket'
# Register the function to an event
event_system.register('provide-client-params.s3.ListObjectsV2', add_my_bucket)
response = s3.list_objects_v2()
*before-call*
-------------
Full Event Name:
"'before-call.service-name.operation-name'"
Description:
This event is emitted just before creating and sending the HTTP
request. Use this event for modifying various HTTP request
components prior to the request being created. A response tuple may
optionally be returned to trigger a short-circuit and prevent the
request from being made. This is useful for testing and is how the
botocore stubber mocks responses.
Keyword Arguments Emitted:
type model:
"botocore.model.OperationModel"
param model:
A model representing the underlying API operation of the client
method.
type params:
"dict"
param params:
A dictionary containing key value pairs for various components
of an HTTP request such as "url_path", "host_prefix",
"query_string", "headers", "body", and "method".
type request_signer:
"botocore.signers.RequestSigner"
param request_signer:
An object to sign requests before they are sent over the wire
using one of the authentication mechanisms defined in "auth.py".
Expected Return Value:
"None" or a "tuple" that includes both the
"botocore.awsrequest.AWSResponse" and a "dict" that represents the
parsed response described by the model.
Example:
Here is an example of how to add a custom header before making an
API call:
import boto3
s3 = boto3.client('s3')
# Access the event system on the S3 client
event_system = s3.meta.events
# Create a function that adds a custom header and prints all headers.
def add_custom_header_before_call(model, params, request_signer, **kwargs):
    params['headers']['my-custom-header'] = 'header-info'
    headers = params['headers']
    print(f'param headers: {headers}')
# Register the function to an event.
event_system.register('before-call.s3.ListBuckets', add_custom_header_before_call)
s3.list_buckets()
This should output:
param headers: { ... , 'my-custom-header': 'header-info'}
*request-created*
-----------------
Full Event Name:
"'request-created.service-name.operation-name'"
Description:
This event is emitted just after the request is created and
triggers request signing.
Keyword Arguments Emitted:
type request:
"botocore.awsrequest.AWSRequest"
param request:
An AWSRequest object which represents the request that was
created given some params and an operation model.
type operation_name:
"str"
param operation_name:
The name of the service operation, for example "ListObjectsV2".
Expected Return Value:
"None"
Example:
Here is an example of how to inspect the request once it's created:
import boto3

s3 = boto3.client('s3')

# Access the event system on the S3 client
event_system = s3.meta.events

# Create a function that prints the request information.
def inspect_request_created(request, operation_name, **kwargs):
    print('Request Info:')
    print(f'method: {request.method}')
    print(f'url: {request.url}')
    print(f'data: {request.data}')
    print(f'params: {request.params}')
    print(f'auth_path: {request.auth_path}')
    print(f'stream_output: {request.stream_output}')
    print(f'headers: {request.headers}')
    print(f'operation_name: {operation_name}')

# Register the function to an event
event_system.register('request-created.s3.ListObjectsV2', inspect_request_created)

response = s3.list_objects_v2(Bucket='amzn-s3-demo-bucket')
This should output:
Request Info:
method: GET
url: https://amzn-s3-demo-bucket.s3 ...
data: ...
params: { ... }
auth_path: ...
stream_output: ...
headers: ...
operation_name: ListObjectsV2
*before-send*
-------------
Full Event Name:
"'before-send.service-name.operation-name'"
Description:
This event is emitted when the operation has been fully serialized,
signed, and is ready to be sent over the wire. This event allows
the finalized request to be inspected and allows a response to be
returned that fulfills the request. If no response is returned
botocore will fulfill the request as normal.
Keyword Arguments Emitted:
type request:
"botocore.awsrequest.AWSPreparedRequest"
param request:
A data class representing a finalized request to be sent over
the wire.
Expected Return Value:
"None" or an instance of "botocore.awsrequest.AWSResponse".
Example:
Here is an example of how to register a function that allows you to
inspect the prepared request before it's sent:
import boto3

s3 = boto3.client('s3')

# Access the event system on the S3 client
event_system = s3.meta.events

# Create a function that inspects the prepared request.
def inspect_request_before_send(request, **kwargs):
    print(f'request: {request}')

# Register the function to an event
event_system.register('before-send.s3.ListBuckets', inspect_request_before_send)

s3.list_buckets()
This should output:
request:
*needs-retry*
-------------
Full Event Name:
"'needs-retry.service-name.operation-name'"
Description:
This event is emitted before checking if the most recent request
needs to be retried. Use this event to define custom retry behavior
when the configurable retry modes are not sufficient.
Keyword Arguments Emitted:
type response:
"tuple"
param response:
A tuple that includes both the "botocore.awsrequest.AWSResponse"
and a "dict" that represents the parsed response described by
the model.
type endpoint:
"botocore.endpoint.Endpoint"
param endpoint:
Represents an endpoint for a particular service.
type operation:
"botocore.model.OperationModel"
param operation:
A model representing the underlying API operation of the client
method.
type attempts:
"int"
param attempts:
The number of retries that have already been attempted.
type caught_exception:
"Exception" | "None"
param caught_exception:
The exception raised after making an API call. If there was no
exception, this will be None.
type request_dict:
"dict"
param request_dict:
A dictionary containing key value pairs for various components
of an HTTP request such as "url_path", "host_prefix",
"query_string", "headers", "body", and "method".
Expected Return Value:
Return "None" if no retry is needed, or return an "int"
representing the retry delay in seconds.
Example:
Here is an example of how to add custom retry behavior:
import boto3

s3 = boto3.client('s3')

# Access the event system on the S3 client
event_system = s3.meta.events

# Create a handler that determines retry behavior.
def needs_retry_handler(**kwargs):
    # Implement custom retry logic
    if some_condition:
        return None
    else:
        return some_delay

# Register the function to an event
event_system.register('needs-retry', needs_retry_handler)

s3.list_buckets()
*after-call*
------------
Full Event Name:
"'after-call.service-name.operation-name'"
Description:
This event is emitted just after the service client makes an API
call. This event allows developers to postprocess or inspect the
API response according to the specific requirements of their
application if needed.
Keyword Arguments Emitted:
type http_response:
"botocore.awsrequest.AWSResponse"
param http_response:
A data class representing an HTTP response received from the
server.
type parsed:
"dict"
param parsed:
A parsed version of the AWSResponse in the form of a Python
dictionary.
type model:
"botocore.model.OperationModel"
param model:
A model representing the underlying API operation of the client
method.
Expected Return Value:
"None"
Example:
Here is an example that inspects args emitted from the "after-call"
event:
import boto3

s3 = boto3.client('s3')

# Access the event system on the S3 client
event_system = s3.meta.events

# Create a function that prints the after-call event args.
def print_after_call_args(http_response, parsed, model, **kwargs):
    print(f'http_response: {http_response}')
    print(f'parsed: {parsed}')
    print(f'model: {model.name}')

# Register the function to an event
event_system.register('after-call.s3.ListObjectsV2', print_after_call_args)

s3.list_objects_v2(Bucket='amzn-s3-demo-bucket')
This should output:
http_response:
parsed: { ... }
model: ListObjectsV2
*after-call-error*
------------------
Full Event Name:
"'after-call-error.service-name.operation-name'"
Description:
This event is emitted upon receiving an error after making an API
call. This event provides information about any errors encountered
during the operation and allows listeners to take corrective
actions if necessary.
Keyword Arguments Emitted:
type exception:
"Exception"
param exception:
The exception raised after making an API call.
Expected Return Value:
"None"
Example:
Here is an example that uses the "before-send" event to mimic a bad
response, which triggers the "after-call-error" event and prints the
exception:
import boto3

s3 = boto3.client('s3')

# Access the event system on the S3 client
event_system = s3.meta.events

# Prints the detected exception.
def print_after_call_error_args(exception, **kwargs):
    if exception is not None:
        print(f'Exception Detected: {exception}')

# Mocks an exception raised when making an API call.
def list_buckets_bad_response(**kwargs):
    raise Exception("This is a test exception.")

event_system.register('before-send.s3.ListBuckets', list_buckets_bad_response)
event_system.register('after-call-error.s3.ListBuckets', print_after_call_error_args)

s3.list_buckets()
This should output:
Exception Detected: This is a test exception.
# Stack Trace
Exception: This is a test exception.
Managing Amazon EC2 instances
*****************************
This Python example shows you how to:
* Get basic information about your Amazon EC2 instances
* Start and stop detailed monitoring of an Amazon EC2 instance
* Start and stop an Amazon EC2 instance
* Reboot an Amazon EC2 instance
The scenario
============
In this example, Python code is used to perform several basic instance
management operations. The code uses the AWS SDK for Python to manage
the instances by using these methods of the EC2 client class:
* describe_instances.
* monitor_instances.
* unmonitor_instances.
* start_instances.
* stop_instances.
* reboot_instances.
For more information about the lifecycle of Amazon EC2 instances, see
Instance Lifecycle in the *Amazon EC2 User Guide for Linux Instances*
or Instance Lifecycle in the *Amazon EC2 User Guide for Windows
Instances*.
All the example code for the Amazon Web Services (AWS) SDK for Python
is available here on GitHub.
Prerequisite tasks
==================
To set up and run this example, you must first configure your AWS
credentials, as described in Quickstart.
Describe instances
==================
An EC2 instance is a virtual server in Amazon's Elastic Compute Cloud
(EC2) for running applications on the Amazon Web Services (AWS)
infrastructure.
The example below shows how to:
* Describe one or more EC2 instances using describe_instances.
All the example code for the Amazon Web Services (AWS) SDK for Python
is available here on GitHub.
Example
-------
import boto3
ec2 = boto3.client('ec2')
response = ec2.describe_instances()
print(response)
Monitor and unmonitor instances
===============================
Enable or disable detailed monitoring for a running instance. If
detailed monitoring is not enabled, basic monitoring is enabled. For
more information, see Monitoring Your Instances and Volumes in the
*Amazon Elastic Compute Cloud User Guide*.
The example below shows how to:
* Enable detailed monitoring for a running instance using
monitor_instances.
* Disable detailed monitoring for a running instance using
unmonitor_instances.
Example
-------
import sys
import boto3

ec2 = boto3.client('ec2')

if sys.argv[1] == 'ON':
    response = ec2.monitor_instances(InstanceIds=['INSTANCE_ID'])
else:
    response = ec2.unmonitor_instances(InstanceIds=['INSTANCE_ID'])
print(response)
Start and stop instances
========================
Instances that use Amazon EBS volumes as their root devices can be
quickly stopped and started. When an instance is stopped, the compute
resources are released and you are not billed for hourly instance
usage. However, your root partition Amazon EBS volume remains,
continues to persist your data, and you are charged for Amazon EBS
volume usage. You can restart your instance at any time. Each time you
transition an instance from stopped to started, Amazon EC2 charges a
full instance hour, even if transitions happen multiple times within a
single hour.
The example below shows how to:
* Start an Amazon EBS-backed AMI that you've previously stopped using
start_instances.
* Stop an Amazon EBS-backed instance using stop_instances.
Example
-------
import sys
import boto3
from botocore.exceptions import ClientError

instance_id = sys.argv[2]
action = sys.argv[1].upper()

ec2 = boto3.client('ec2')

if action == 'ON':
    # Do a dryrun first to verify permissions
    try:
        ec2.start_instances(InstanceIds=[instance_id], DryRun=True)
    except ClientError as e:
        if 'DryRunOperation' not in str(e):
            raise

    # Dry run succeeded, run start_instances without dryrun
    try:
        response = ec2.start_instances(InstanceIds=[instance_id], DryRun=False)
        print(response)
    except ClientError as e:
        print(e)
else:
    # Do a dryrun first to verify permissions
    try:
        ec2.stop_instances(InstanceIds=[instance_id], DryRun=True)
    except ClientError as e:
        if 'DryRunOperation' not in str(e):
            raise

    # Dry run succeeded, call stop_instances without dryrun
    try:
        response = ec2.stop_instances(InstanceIds=[instance_id], DryRun=False)
        print(response)
    except ClientError as e:
        print(e)
Reboot instances
================
Request a reboot of one or more instances. This operation is
asynchronous; it only queues a request to reboot the specified
instances. The operation succeeds if the instances are valid and
belong to you. Requests to reboot terminated instances are ignored.
The example below shows how to:
* Request a reboot of one or more instances using reboot_instances.
Example
-------
import boto3
from botocore.exceptions import ClientError

ec2 = boto3.client('ec2')

try:
    ec2.reboot_instances(InstanceIds=['INSTANCE_ID'], DryRun=True)
except ClientError as e:
    if 'DryRunOperation' not in str(e):
        print("You don't have permission to reboot instances.")
        raise

try:
    response = ec2.reboot_instances(InstanceIds=['INSTANCE_ID'], DryRun=False)
    print('Success', response)
except ClientError as e:
    print('Error', e)
Downloading files
*****************
The methods provided by the AWS SDK for Python to download files are
similar to those provided to upload files.
The "download_file" method accepts the names of the bucket and object
to download and the filename to save the file to.
import boto3
s3 = boto3.client('s3')
s3.download_file('amzn-s3-demo-bucket', 'OBJECT_NAME', 'FILE_NAME')
The "download_fileobj" method accepts a writeable file-like object.
The file object must be opened in binary mode, not text mode.
import boto3

s3 = boto3.client('s3')
with open('FILE_NAME', 'wb') as f:
    s3.download_fileobj('amzn-s3-demo-bucket', 'OBJECT_NAME', f)
Like their upload cousins, the download methods are provided by the S3
"Client", "Bucket", and "Object" classes, and each class provides
identical functionality. Use whichever class is convenient.
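For illustration, here is a minimal sketch of the same download
performed through the resource-level "Bucket" and "Object" classes;
the bucket and object names are the placeholders used above:
import boto3

s3 = boto3.resource('s3')

# Download via the Bucket class
s3.Bucket('amzn-s3-demo-bucket').download_file('OBJECT_NAME', 'FILE_NAME')

# Download via the Object class
s3.Object('amzn-s3-demo-bucket', 'OBJECT_NAME').download_file('FILE_NAME')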
Also like the upload methods, the download methods support the
optional "ExtraArgs" and "Callback" parameters.
The list of valid "ExtraArgs" settings for the download methods is
specified in the "ALLOWED_DOWNLOAD_ARGS" attribute of the "S3Transfer"
object at "boto3.s3.transfer.S3Transfer.ALLOWED_DOWNLOAD_ARGS".
The download method's "Callback" parameter is used for the same
purpose as the upload method's. The upload and download methods can
both invoke the same "Callback" class.
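As a minimal sketch of those parameters, the following example passes
an "ExtraArgs" setting from "ALLOWED_DOWNLOAD_ARGS" and a simple
progress callback to "download_file". The "ProgressTracker" class
name and the "VERSION_ID" value are illustrative placeholders, not
part of the API:
import threading

import boto3

s3 = boto3.client('s3')

class ProgressTracker:
    # Boto3 calls the callback with the number of bytes transferred
    # since the previous call; it may be invoked from multiple
    # threads, so a lock guards the counter.
    def __init__(self):
        self._seen_so_far = 0
        self._lock = threading.Lock()

    def __call__(self, bytes_amount):
        with self._lock:
            self._seen_so_far += bytes_amount
            print(f'{self._seen_so_far} bytes transferred')

# Download a specific object version and report progress.
s3.download_file(
    'amzn-s3-demo-bucket', 'OBJECT_NAME', 'FILE_NAME',
    ExtraArgs={'VersionId': 'VERSION_ID'},
    Callback=ProgressTracker(),
)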
Working with Amazon EC2 key pairs
*********************************
This Python example shows you how to:
* Get information about your key pairs
* Create a key pair to access an Amazon EC2 instance
* Delete an existing key pair
The scenario
============
Amazon EC2 uses public–key cryptography to encrypt and decrypt login
information. Public–key cryptography uses a public key to encrypt
data, then the recipient uses the private key to decrypt the data. The
public and private keys are known as a key pair.
In this example, Python code is used to perform several Amazon EC2 key
pair management operations. The code uses the AWS SDK for Python to
manage key pairs using these methods of the EC2 client class:
* describe_key_pairs.
* create_key_pair.
* delete_key_pair.
For more information about the Amazon EC2 key pairs, see Amazon EC2
Key Pairs in the *Amazon EC2 User Guide for Linux Instances* or Amazon
EC2 Key Pairs and Windows Instances in the *Amazon EC2 User Guide for
Windows Instances*.
All the example code for the Amazon Web Services (AWS) SDK for Python
is available here on GitHub.
Prerequisite tasks
==================
To set up and run this example, you must first configure your AWS
credentials, as described in Quickstart.
Describe key pairs
==================
Describe one or more of your key pairs.
The example below shows how to:
* Describe key pairs using describe_key_pairs.
Example
-------
import boto3
ec2 = boto3.client('ec2')
response = ec2.describe_key_pairs()
print(response)
Create a key pair
=================
Create a 2048-bit RSA key pair with the specified name. Amazon EC2
stores the public key and displays the private key for you to save to
a file. The private key is returned as an unencrypted PEM encoded
PKCS#8 private key. If a key with the specified name already exists,
Amazon EC2 returns an error.
The example below shows how to:
* Create a 2048-bit RSA key pair with a specified name using
create_key_pair.
Example
-------
import boto3
ec2 = boto3.client('ec2')
response = ec2.create_key_pair(KeyName='KEY_PAIR_NAME')
print(response)
Delete a key pair
=================
Delete the specified key pair by removing the public key from Amazon
EC2.
The example below shows how to:
* Delete a key pair by removing the public key from Amazon EC2 using
delete_key_pair.
Example
-------
import boto3
ec2 = boto3.client('ec2')
response = ec2.delete_key_pair(KeyName='KEY_PAIR_NAME')
print(response)
Code Examples
*************
This section describes code examples that demonstrate how to use the
AWS SDK for Python to call various AWS services. The source files for
the examples, plus additional example programs, are available in the
AWS Code Catalog.
To propose a new code example for the AWS documentation team to
consider producing, create a new request. The team is looking to
produce code examples that cover broader scenarios and use cases,
versus simple code snippets that cover only individual API calls. For
instructions, see the "Proposing new code examples" section in the
Readme on GitHub.
Before running an example, your AWS credentials must be configured as
described in Quickstart.
* Amazon CloudWatch examples
* Amazon DynamoDB
* Amazon EC2 examples
* AWS Identity and Access Management examples
* AWS Key Management Service (AWS KMS) examples
* Amazon S3 examples
* AWS Secrets Manager
* Amazon SES examples
* Amazon SQS examples
File transfer configuration
***************************
When uploading, downloading, or copying a file or S3 object, the AWS
SDK for Python automatically manages retries and multipart and non-
multipart transfers.
The management operations are performed by using reasonable default
settings that are well-suited for most scenarios. To handle a special
case, the default settings can be configured to meet requirements.
Configuration settings are stored in a
"boto3.s3.transfer.TransferConfig" object. The object is passed to a
transfer method ("upload_file", "download_file", etc.) in the
"Config=" parameter.
The remaining sections demonstrate how to configure various transfer
operations with the "TransferConfig" object.
Multipart transfers
===================
Multipart transfers occur when the file size exceeds the value of the
"multipart_threshold" attribute.
The following example configures an "upload_file" transfer to be
multipart if the file size is larger than the threshold specified in
the "TransferConfig" object.
import boto3
from boto3.s3.transfer import TransferConfig
# Set the desired multipart threshold value (5GB)
GB = 1024 ** 3
config = TransferConfig(multipart_threshold=5*GB)
# Perform the transfer
s3 = boto3.client('s3')
s3.upload_file('FILE_NAME', 'amzn-s3-demo-bucket', 'OBJECT_NAME', Config=config)
Concurrent transfer operations
==============================
The maximum number of concurrent S3 API transfer operations can be
tuned to adjust for the connection speed. Set the "max_concurrency"
attribute to increase or decrease bandwidth usage.
The attribute's default setting is 10. To reduce bandwidth usage,
reduce the value; to increase usage, increase it.
# To consume less downstream bandwidth, decrease the maximum concurrency
config = TransferConfig(max_concurrency=5)
# Download an S3 object
s3 = boto3.client('s3')
s3.download_file('amzn-s3-demo-bucket', 'OBJECT_NAME', 'FILE_NAME', Config=config)
Threads
=======
Transfer operations use threads to implement concurrency. Thread use
can be disabled by setting the "use_threads" attribute to "False".
If thread use is disabled, transfer concurrency does not occur.
Accordingly, the value of the "max_concurrency" attribute is ignored.
# Disable thread use/transfer concurrency
config = TransferConfig(use_threads=False)
s3 = boto3.client('s3')
s3.download_file('amzn-s3-demo-bucket', 'OBJECT_NAME', 'FILE_NAME', Config=config)
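These settings can also be combined in a single "TransferConfig".
The following is a minimal sketch using the placeholder names from
the previous examples:
import boto3
from boto3.s3.transfer import TransferConfig

GB = 1024 ** 3

# Combine the multipart threshold, concurrency, and threading settings
config = TransferConfig(
    multipart_threshold=5 * GB,
    max_concurrency=5,
    use_threads=True
)

s3 = boto3.client('s3')
s3.upload_file('FILE_NAME', 'amzn-s3-demo-bucket', 'OBJECT_NAME', Config=config)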
Verifying email identities in Amazon SES
****************************************
When you first start using your Amazon Simple Email Service (SES)
account, all senders and recipients must be verified in the same AWS
Region that you will be sending emails to. For more information about
sending emails, see Sending Email with Amazon SES.
The following examples show how to:
* Verify an email address using verify_email_identity().
* Verify an email domain using verify_domain_identity().
* List all email addresses or domains using list_identities().
* Remove an email address or domain using delete_identity().
Prerequisite tasks
==================
To set up and run this example, you must first complete these tasks:
* Configure your AWS credentials, as described in Quickstart.
Verify an email address
=======================
SES can send email only from verified email addresses or domains. By
verifying an email address, you demonstrate that you're the owner of
that address and want to allow SES to send email from that address.
When you run the following code example, SES sends an email to the
address you specified. When you (or the recipient of the email) click
the link in the email, the address is verified.
To add an email address to your SES account, use the
VerifyEmailIdentity operation.
Example
-------
import boto3
# Create SES client
ses = boto3.client('ses')
response = ses.verify_email_identity(
EmailAddress = 'EMAIL_ADDRESS'
)
print(response)
Verify an email domain
======================
SES can send email only from verified email addresses or domains. By
verifying a domain, you demonstrate that you're the owner of that
domain. When you verify a domain, you allow SES to send email from any
address on that domain.
When you run the following code example, SES provides you with a
verification token. You have to add the token to your domain's DNS
configuration. For more information, see Verifying a Domain with
Amazon SES.
To add a sending domain to your SES account, use the
VerifyDomainIdentity operation.
Example
-------
import boto3
# Create SES client
ses = boto3.client('ses')
response = ses.verify_domain_identity(
Domain='DOMAIN_NAME'
)
print(response)
List email addresses
====================
To retrieve a list of email addresses submitted in the current AWS
Region, regardless of verification status, use the ListIdentities
operation.
Example
-------
import boto3
# Create SES client
ses = boto3.client('ses')
response = ses.list_identities(
IdentityType = 'EmailAddress',
MaxItems=10
)
print(response)
List email domains
==================
To retrieve a list of email domains submitted in the current AWS
Region, regardless of verification status, use the ListIdentities
operation.
Example
-------
import boto3
# Create SES client
ses = boto3.client('ses')
response = ses.list_identities(
IdentityType = 'Domain',
MaxItems=10
)
print(response)
Delete an email address
=======================
To delete a verified email address from the list of verified
identities, use the DeleteIdentity operation.
Example
-------
import boto3
# Create SES client
ses = boto3.client('ses')
response = ses.delete_identity(
Identity = 'EMAIL_ADDRESS'
)
print(response)
Delete an email domain
======================
To delete a verified email domain from the list of verified
identities, use the DeleteIdentity operation.
Example
-------
import boto3
# Create SES client
ses = boto3.client('ses')
response = ses.delete_identity(
Identity = 'DOMAIN_NAME'
)
print(response)
Creating and managing email rules with the SES API
**************************************************
In addition to sending emails, you can also receive email with Amazon
Simple Email Service (SES). Receipt rules enable you to specify what
SES does with email it receives for the email addresses or domains you
own. A rule can send email to other AWS services including but not
limited to Amazon S3, Amazon SNS, or AWS Lambda.
For more information, see Managing Receipt Rule Sets for Amazon SES
Email Receiving and Managing Receipt Rules for Amazon SES Email
Receiving.
The following examples show how to:
* Create a receipt rule set using create_receipt_rule_set().
* Create a receipt rule using create_receipt_rule().
* Remove a receipt rule using delete_receipt_rule().
* Remove a receipt rule set using delete_receipt_rule_set().
Prerequisite tasks
==================
To set up and run this example, you must first complete these tasks:
* Configure your AWS credentials, as described in Quickstart.
Create a receipt rule set
=========================
A receipt rule set contains a collection of receipt rules. You must
have at least one receipt rule set associated with your account before
you can create a receipt rule. To create a receipt rule set, provide a
unique RuleSetName and use the CreateReceiptRuleSet operation.
Example
-------
import boto3
# Create SES client
ses = boto3.client('ses')
response = ses.create_receipt_rule_set(
RuleSetName = 'RULE_SET_NAME',
)
print(response)
Create a receipt rule
=====================
Control your incoming email by adding a receipt rule to an existing
receipt rule set. This example shows you how to create a receipt rule
that sends incoming messages to an Amazon S3 bucket, but you can also
send messages to Amazon SNS and AWS Lambda. To create a receipt rule,
provide a rule and the RuleSetName to the CreateReceiptRule operation.
Example
-------
import boto3
# Create SES client
ses = boto3.client('ses')
response = ses.create_receipt_rule(
    RuleSetName='RULE_SET_NAME',
    Rule={
        'Name': 'RULE_NAME',
        'Enabled': True,
        'TlsPolicy': 'Optional',
        'Recipients': [
            'EMAIL_ADDRESS',
        ],
        'Actions': [
            {
                'S3Action': {
                    'BucketName': 'amzn-s3-demo-bucket',
                    'ObjectKeyPrefix': 'SES_email'
                }
            }
        ],
    }
)
print(response)
Delete a receipt rule set
=========================
Remove a specified receipt rule set that isn't currently active.
This also deletes all of the receipt rules it contains. To delete a
receipt rule set, provide the RuleSetName to the DeleteReceiptRuleSet
operation.
Example
-------
import boto3
# Create SES client
ses = boto3.client('ses')
response = ses.delete_receipt_rule_set(
    RuleSetName='RULE_SET_NAME'
)
print(response)
Delete a receipt rule
=====================
To delete a specified receipt rule, provide the RuleName and
RuleSetName to the DeleteReceiptRule operation.
Example
-------
import boto3
# Create SES client
ses = boto3.client('ses')
response = ses.delete_receipt_rule(
    RuleName='RULE_NAME',
    RuleSetName='RULE_SET_NAME'
)
print(response)
Configuration
*************
Overview
========
Boto3 looks at various configuration locations until it finds
configuration values. Boto3 adheres to the following lookup order when
searching through sources for configuration values:
* A "Config" object that's created and passed as the "config"
parameter when creating a client
* Environment variables
* The "~/.aws/config" file
Note:
Configurations are not wholly atomic. This means configuration
values set in your AWS config file can be singularly overwritten by
setting a specific environment variable or through the use of a
"Config" object.
For details about credential configuration, see the Credentials guide.
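For example, a Region set in a "Config" object takes precedence over
one set through an environment variable or the config file for that
client. A minimal sketch (the Region value is a placeholder):
import boto3
from botocore.config import Config

# Even if AWS_DEFAULT_REGION or the config file specifies another
# Region, this client uses the Region from the Config object.
my_config = Config(region_name='us-west-2')

client = boto3.client('s3', config=my_config)
print(client.meta.region_name)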
Using the Config object
=======================
This option is for configuring client-specific configurations that
affect the behavior of your specific client object only. As described
earlier, there are options used here that will supersede those found
in other configuration locations:
* "region_name" (string) - The AWS Region used in instantiating the
client. If used, this takes precedence over environment variable and
configuration file values. But it doesn't overwrite a "region_name"
value *explicitly* passed to individual service methods.
* "signature_version" (string) - The signature version used when
signing requests. Note that the default version is Signature Version
4. If you're using a presigned URL with an expiry of greater than 7
days, you should specify Signature Version 2.
* "s3" (related configurations; dictionary) - Amazon S3 service-
specific configurations. For more information, see the Botocore
config reference.
* "proxies" (dictionary) - Each entry maps a protocol name to the
proxy server Boto3 should use to communicate using that protocol.
See Specifying proxy servers for more information.
* "proxies_config" (dictionary) - Additional proxy configuration
settings. For more information, see Configuring proxies.
* "retries" (dictionary) - Client retry behavior configuration options
that include retry mode and maximum retry attempts. For more
information, see the Retries guide.
For more information about additional options, or for a complete list
of options, see the Config reference.
To set these configuration options, create a "Config" object with the
options you want, and then pass them into your client.
import boto3
from botocore.config import Config
my_config = Config(
region_name = 'us-west-2',
signature_version = 'v4',
retries = {
'max_attempts': 10,
'mode': 'standard'
}
)
client = boto3.client('kinesis', config=my_config)
Using proxies
-------------
With Boto3, you can use proxies as intermediaries between your code
and AWS. Proxies can provide functions such as filtering, security,
firewalls, and privacy assurance.
Specifying proxy servers
~~~~~~~~~~~~~~~~~~~~~~~~
You can specify proxy servers to be used for connections when using
specific protocols. The "proxies" option in the "Config" object is a
dictionary in which each entry maps a protocol to the address and port
number of the proxy server for that protocol.
In the following example, a proxy list is set up to use
"proxy.amazon.com", port 6502 as the proxy for all HTTP requests by
default. HTTPS requests use port 2010 on "proxy.amazon.org" instead.
import boto3
from botocore.config import Config
proxy_definitions = {
'http': 'http://proxy.amazon.com:6502',
'https': 'https://proxy.amazon.org:2010'
}
my_config = Config(
region_name='us-east-2',
signature_version='v4',
proxies=proxy_definitions
)
client = boto3.client('kinesis', config=my_config)
Alternatively, you can use the "HTTP_PROXY" and "HTTPS_PROXY"
environment variables to specify proxy servers. Proxy servers
specified using the "proxies" option in the "Config" object will
override proxy servers specified using environment variables.
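For example, the same proxy servers could be supplied through those
environment variables instead of the "Config" object. The following
is a minimal sketch that sets them from Python; the addresses are the
placeholder values used above, and the variables must be set before
the client is created:
import os

import boto3

# Equivalent proxy configuration through environment variables
os.environ['HTTP_PROXY'] = 'http://proxy.amazon.com:6502'
os.environ['HTTPS_PROXY'] = 'https://proxy.amazon.org:2010'

client = boto3.client('kinesis')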
Configuring proxies
~~~~~~~~~~~~~~~~~~~
You can configure how Boto3 uses proxies by specifying the
"proxies_config" option, which is a dictionary that specifies the
values of several proxy options by name. There are three keys in this
dictionary: "proxy_ca_bundle", "proxy_client_cert", and
"proxy_use_forwarding_for_https". For more information about these
keys, see the Botocore config reference.
import boto3
from botocore.config import Config
proxy_definitions = {
'http': 'http://proxy.amazon.com:6502',
'https': 'https://proxy.amazon.org:2010'
}
my_config = Config(
region_name='us-east-2',
signature_version='v4',
proxies=proxy_definitions,
proxies_config={
'proxy_client_cert': '/path/of/certificate'
}
)
client = boto3.client('kinesis', config=my_config)
With the addition of the "proxies_config" option shown here, the proxy
will use the specified certificate file for authentication when using
the HTTPS proxy.
Using client context parameters
-------------------------------
Some services have configuration settings that are specific to their
clients. These settings are called client context parameters. Please
refer to the "Client Context Parameters" section of a service client's
documentation for a list of available parameters and information on
how to use them.
Configuring client context parameters
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can configure client context parameters by passing a dictionary of
key-value pairs to the "client_context_params" parameter in your
"Config". Invalid parameter values or parameters that are not modeled
by the service will be ignored.
import boto3
from botocore.config import Config
my_config = Config(
region_name='us-east-2',
client_context_params={
'my_great_context_param': 'foo'
}
)
client = boto3.client('kinesis', config=my_config)
Boto3 does not support setting "client_context_params" per request.
Differing configurations will require creation of a new client.
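For instance, using the same hypothetical parameter as above,
requests that need different context values go through separate
clients. A minimal sketch:
import boto3
from botocore.config import Config

# Each distinct context value gets its own client.
client_foo = boto3.client(
    'kinesis',
    config=Config(client_context_params={'my_great_context_param': 'foo'})
)
client_bar = boto3.client(
    'kinesis',
    config=Config(client_context_params={'my_great_context_param': 'bar'})
)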
Using environment variables
===========================
You can set configuration settings using system-wide environment
variables. These configurations are global and will affect all clients
created unless you override them with a "Config" object.
Note:
Only the configuration settings listed below can be set using
environment variables.
"AWS_ACCESS_KEY_ID"
The access key for your AWS account.
"AWS_SECRET_ACCESS_KEY"
The secret key for your AWS account.
"AWS_SESSION_TOKEN"
The session key for your AWS account. This is only needed when you
are using temporary credentials. The "AWS_SECURITY_TOKEN"
environment variable can also be used, but is only supported for
backward-compatibility purposes. "AWS_SESSION_TOKEN" is supported
by multiple AWS SDKs in addition to Boto3.
"AWS_DEFAULT_REGION"
The default AWS Region to use, for example, "us-west-1" or "us-
west-2".
"AWS_PROFILE"
The default profile to use, if any. If no value is specified,
Boto3 attempts to search the shared credentials file and the config
file for the "default" profile.
"AWS_CONFIG_FILE"
The location of the config file used by Boto3. By default this
value is "~/.aws/config". You only need to set this variable if
you want to change this location.
"AWS_SHARED_CREDENTIALS_FILE"
The location of the shared credentials file. By default this value
is "~/.aws/credentials". You only need to set this variable if you
want to change this location.
"BOTO_CONFIG"
The location of the Boto2 credentials file. This is not set by
default. You only need to set this variable if you want to use
credentials stored in Boto2 format in a location other than
"/etc/boto.cfg" or "~/.boto".
"AWS_CA_BUNDLE"
The path to a custom certificate bundle to use when establishing
SSL/TLS connections. Boto3 includes a CA bundle that it uses by
default, but you can set this environment variable to use a
different CA bundle.
"AWS_METADATA_SERVICE_TIMEOUT"
The number of seconds before a connection to the instance metadata
service should time out. When attempting to retrieve credentials
on an Amazon EC2 instance that is configured with an IAM role, a
connection to the instance metadata service will time out after 1
second by default. If you know you're running on an EC2 instance
with an IAM role configured, you can increase this value if needed.
"AWS_METADATA_SERVICE_NUM_ATTEMPTS"
When attempting to retrieve credentials on an Amazon EC2 instance
that has been configured with an IAM role, Boto3 will make only one
attempt to retrieve credentials from the instance metadata service
before giving up. If you know your code will be running on an EC2
instance, you can increase this value to make Boto3 retry multiple
times before giving up.
"AWS_DATA_PATH"
A list of **additional** directories to check when loading botocore
data. You typically don't need to set this value. There are two
built-in search paths: "<botocore root>/data/" and "~/.aws/models".
Setting this environment variable indicates additional directories
to check first before falling back to the built-in search paths.
Multiple entries should be separated with the "os.pathsep"
character, which is ":" on Linux and ";" on Windows.
"AWS_STS_REGIONAL_ENDPOINTS"
Sets AWS STS endpoint resolution logic. See the
"sts_regional_endpoints" configuration file section for more
information on how to use this.
"AWS_MAX_ATTEMPTS"
The total number of attempts made for a single request. For more
information, see the "max_attempts" configuration file section.
"AWS_RETRY_MODE"
Specifies the types of retries the SDK will use. For more
information, see the "retry_mode" configuration file section.
"AWS_SDK_UA_APP_ID"
AppId is an optional application specific identifier that can be
set. When set it will be appended to the User-Agent header of every
request in the form of App/{AppId}.
"AWS_SIGV4A_SIGNING_REGION_SET"
A comma-delimited list of regions to sign when signing with SigV4a.
For more information, see the "sigv4a_signing_region_set"
configuration file section.
"AWS_REQUEST_CHECKSUM_CALCULATION"
Determines when a checksum will be calculated for request payloads.
For more information, see the "request_checksum_calculation"
configuration file section.
"AWS_RESPONSE_CHECKSUM_VALIDATION"
Determines when checksum validation will be performed on response
payloads. For more information, see the
"response_checksum_validation" configuration file section.
Using a configuration file
==========================
Boto3 will also search the "~/.aws/config" file when looking for
configuration values. You can change the location of this file by
setting the "AWS_CONFIG_FILE" environment variable.
This file is an INI-formatted file that contains at least one section:
"[default]". You can create multiple profiles (logical groups of
configuration) by creating sections named "[profile profile-name]". If
your profile name has spaces, you need to surround this value with
quotation marks: "[profile "my profile name"]". The following are all
the config variables supported in the "~/.aws/config" file.
"api_versions"
Specifies the API version to use for a particular AWS service.
The "api_versions" settings are nested configuration values that
require special formatting in the AWS configuration file. If the
values are set by the AWS CLI or programmatically by an SDK, the
formatting is handled automatically. If you set them by manually
editing the AWS configuration file, the following is the required
format. Notice the indentation of each value.
[default]
region = us-east-1
api_versions =
    ec2 = 2015-03-01
    cloudfront = 2015-09-17
"aws_access_key_id"
The access key to use.
"aws_secret_access_key"
The secret access key to use.
"aws_session_token"
The session token to use. This is typically needed only when using
temporary credentials. Note "aws_security_token" is supported for
backward compatibility.
"ca_bundle"
The CA bundle to use. For more information, see the previous
description of the "AWS_CA_BUNDLE" environment variable.
"credential_process"
Specifies an external command to run to generate or retrieve
authentication credentials. For more information, see Sourcing
credentials with an external process.
"credential_source"
To invoke an AWS service from an Amazon EC2 instance, you can use
an IAM role attached to either an EC2 instance profile or an Amazon
ECS container. In such a scenario, use the "credential_source"
setting to specify where to find the credentials.
The "credential_source" and "source_profile" settings are mutually
exclusive.
The following values are supported.
"Ec2InstanceMetadata"
Use the IAM role attached to the Amazon EC2 instance profile.
"EcsContainer"
Use the IAM role attached to the Amazon ECS container.
"Environment"
Retrieve the credentials from environment variables.
"duration_seconds"
The length of time in seconds of the role session. The value can
range from 900 seconds (15 minutes) to the maximum session duration
setting for the role. The default value is 3600 seconds (one hour).
"external_id"
Unique identifier to pass when making "AssumeRole" calls.
"metadata_service_timeout"
The number of seconds before timing out when retrieving data from
the instance metadata service. For more information, see the
previous documentation on "AWS_METADATA_SERVICE_TIMEOUT".
"metadata_service_num_attempts"
The number of attempts to make before giving up when retrieving
data from the instance metadata service. For more information, see
the previous documentation on "AWS_METADATA_SERVICE_NUM_ATTEMPTS".
"mfa_serial"
Serial number of the Amazon Resource Name (ARN) of a multi-factor
authentication (MFA) device to use when assuming a role.
"parameter_validation"
Disable parameter validation (default is true, parameters are
validated). This is a Boolean value that is either "true" or
"false". Whenever you make an API call using a client, the
parameters you provide are run through a set of validation checks,
including (but not limited to) required parameters provided, type
checking, no unknown parameters, minimum length checks, and so on.
Typically, you should leave parameter validation enabled.
"region"
The default AWS Region to use, for example, "us-west-1" or "us-
west-2". When specifying a Region inline during client
initialization, this property is named "region_name".
"role_arn"
The ARN of the role you want to assume.
"role_session_name"
The role name to use when assuming a role. If this value is not
provided, a session name will be automatically generated.
"web_identity_token_file"
The path to a file that contains an OAuth 2.0 access token or
OpenID Connect ID token that is provided by the identity provider.
The contents of this file will be loaded and passed as the
"WebIdentityToken" argument to the "AssumeRoleWithWebIdentity"
operation.
"s3"
Set Amazon S3-specific configuration data. Typically, these values
do not need to be set.
The "s3" settings are nested configuration values that require
special formatting in the AWS configuration file. If the values are
set by the AWS CLI or programmatically by an SDK, the formatting is
handled automatically. If you set them manually by editing the AWS
configuration file, the following is the required format. Notice
the indentation of each value.
[default]
region = us-east-1
s3 =
    addressing_style = path
    signature_version = s3v4
* "addressing_style": The S3 addressing style. When necessary, Boto
automatically switches the addressing style to an appropriate
value. The following values are supported.
"auto"
(Default) Attempts to use "virtual", but falls back to
"path" if necessary.
"path"
Bucket name is included in the URI path.
"virtual"
Bucket name is included in the hostname.
* "payload_signing_enabled": Specifies whether to include an
SHA-256 checksum with Amazon Signature Version 4 payloads. Valid
settings are "true" or "false".
For streaming uploads ("UploadPart" and "PutObject") that use
HTTPS and include a "content-md5" header, this setting is
disabled by default.
* "signature_version": The AWS signature version to use when
signing requests. When necessary, Boto automatically switches the
signature version to an appropriate value. The following values
are recognized.
"s3v4"
(Default) Signature Version 4
"s3"
(Deprecated) Signature Version 2
* "use_accelerate_endpoint": Specifies whether to use the Amazon S3
Accelerate endpoint. The bucket must be enabled to use S3
Accelerate. Valid settings are "true" or "false". Default:
"false"
Either "use_accelerate_endpoint" or "use_dualstack_endpoint" can
be enabled, but not both.
* "use_dualstack_endpoint": Specifies whether to direct all Amazon
S3 requests to the dual IPv4/IPv6 endpoint for the configured
Region. Valid settings are "true" or "false". Default: "false"
Either "use_accelerate_endpoint" or "use_dualstack_endpoint" can
be enabled, but not both.
"source_profile"
The profile name that contains credentials to use for the initial
"AssumeRole" call.
The "credential_source" and "source_profile" settings are mutually
exclusive.
"sts_regional_endpoints"
Sets AWS STS endpoint resolution logic. This configuration can also
be set using the environment variable "AWS_STS_REGIONAL_ENDPOINTS".
By default, this configuration option is set to "regional". Valid
values are the following:
* "regional"
Uses the STS endpoint that corresponds to the configured
Region. For example, if the client is configured to use "us-
west-2", all calls to STS will be made to the "sts.us-
west-2.amazonaws.com" regional endpoint instead of the global
"sts.amazonaws.com" endpoint.
* "legacy"
Uses the global STS endpoint, "sts.amazonaws.com", for the
following configured Regions:
* "ap-northeast-1"
* "ap-south-1"
* "ap-southeast-1"
* "ap-southeast-2"
* "aws-global"
* "ca-central-1"
* "eu-central-1"
* "eu-north-1"
* "eu-west-1"
* "eu-west-2"
* "eu-west-3"
* "sa-east-1"
* "us-east-1"
* "us-east-2"
* "us-west-1"
* "us-west-2"
All other Regions will use their respective regional endpoint.
"tcp_keepalive"
Toggles the TCP Keep-Alive socket option used when creating
connections. By default this value is "false"; TCP Keepalive will
not be used when creating connections. To enable TCP Keepalive with
the system default configurations, set this value to "true".
"max_attempts"
An integer representing the maximum number of attempts that will be
made for a single request, including the initial attempt. For
example, setting this value to 5 will result in a request being
retried up to 4 times. If not provided, the number of retries will
default to whatever is modeled, which is typically 5 total attempts
in the "legacy" retry mode, and 3 in the "standard" and "adaptive"
retry modes.
"retry_mode"
A string representing the type of retries Boto3 will perform.
Valid values are the following:
* "legacy" - The preexisting retry behavior. This is the
default value if no retry mode is provided.
* "standard" - A standardized set of retry rules across the AWS
SDKs. This includes a standard set of errors that are retried
and support for retry quotas, which limit the number of
unsuccessful retries an SDK can make. This mode will default
the maximum number of attempts to 3 unless a "max_attempts" is
explicitly provided.
* "adaptive" - An experimental retry mode that includes all the
functionality of "standard" mode with automatic client-side
throttling. This is a provisional mode whose behavior might
change.
"sigv4a_signing_region_set"
A comma-delimited list of regions to use when signing with SigV4a. If
this is not set, the SDK will check if the service has modeled a
default; if none is found, this will default to "*".
"request_checksum_calculation"
Determines when a checksum will be calculated for request payloads.
Valid values are:
* "when_supported" -- When set, a checksum will be calculated for
all request payloads of operations modeled with the
"httpChecksum" trait where "requestChecksumRequired" is "true" or
a "requestAlgorithmMember" is modeled.
* "when_required" -- When set, a checksum will only be calculated
for request payloads of operations modeled with the
"httpChecksum" trait where "requestChecksumRequired" is "true" or
where a "requestAlgorithmMember" is modeled and supplied.
"response_checksum_validation"
Determines when checksum validation will be performed on response
payloads. Valid values are:
* "when_supported" -- When set, checksum validation is performed on
all response payloads of operations modeled with the
"httpChecksum" trait where "responseAlgorithms" is modeled,
except when no modeled checksum algorithms are supported.
* "when_required" -- When set, checksum validation is not performed
on response payloads of operations unless the checksum algorithm
is supported and the "requestValidationModeMember" member is set
to "ENABLED".
"use_dualstack_endpoint"
When "true", dualstack endpoint resolution is enabled. Valid
values are "true" or "false". Default : "false".
Using Account ID-Based Endpoints
================================
Boto3 supports account ID-based endpoints, which improve performance
and scalability by using your AWS account ID to streamline request
routing for services that support this feature. When Boto3 resolves
credentials containing an account ID, it automatically constructs an
account ID-based endpoint instead of a regional endpoint.
Account ID-based endpoints follow this format:
https://{account-id}.myservice.{region}.amazonaws.com
* "{account-id}" is the AWS account ID sourced from your credentials.
* "{region}" is the AWS Region where the request is being made.
Supported Credential Providers
------------------------------
Boto3 can automatically construct account ID-based endpoints by
sourcing the AWS account ID from the following places:
* Credentials set using the "boto3.client()" method
* Credentials set when creating a "Session" object
* Environment variables
* Assume role provider
* Assume role with web identity provider
* AWS IAM Identity Center credential provider
* Shared credential file ("~/.aws/credentials")
* AWS config file ("~/.aws/config")
* Container credential provider
You can read more about these locations in the Credentials guide.
Configuring Account ID
----------------------
You can provide an account ID along with your AWS credentials using
one of the following:
Passing it as a parameter when creating clients:
import boto3
client = boto3.client(
'dynamodb',
aws_access_key_id=ACCESS_KEY,
aws_secret_access_key=SECRET_KEY,
aws_account_id=ACCOUNT_ID
)
Passing it as a parameter when creating a "Session" object:
import boto3
session = boto3.Session(
aws_access_key_id=ACCESS_KEY,
aws_secret_access_key=SECRET_KEY,
aws_account_id=ACCOUNT_ID
)
Setting an environment variable:
export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=
export AWS_ACCOUNT_ID=
Setting it in the shared credentials or config file:
[default]
aws_access_key_id=foo
aws_secret_access_key=bar
aws_account_id=baz
Configuring Endpoint Routing Behavior
-------------------------------------
The account ID endpoint mode is a setting that can be used to turn off
account ID-based endpoint routing if necessary.
Valid values are:
* "preferred" – The endpoint should include account ID if available.
* "disabled" – A resolved endpoint doesn't include account ID.
* "required" – The endpoint must include account ID. If the account ID
isn't available, the SDK throws an error.
Note:
The default behavior in Boto3 is "preferred".
You can configure the setting using one of the following:
Setting it in the "Config" object when creating clients:
import boto3
from botocore.config import Config
my_config = Config(
account_id_endpoint_mode = 'disabled'
)
client = boto3.client('dynamodb', config=my_config)
Setting an environment variable:
export AWS_ACCOUNT_ID_ENDPOINT_MODE=disabled
Setting it in the shared credentials or config file:
[default]
account_id_endpoint_mode=disabled
Getting metrics from Amazon CloudWatch
**************************************
This Python example shows you how to:
* Get a list of published CloudWatch metrics
* Publish data points to CloudWatch metrics
The scenario
============
Metrics are data about the performance of your systems. You can enable
detailed monitoring of some resources, such as your Amazon EC2
instances, or your own application metrics.
In this example, Python code is used to get and send CloudWatch
metrics data. The code uses the AWS SDK for Python to get metrics from
CloudWatch using these methods of the CloudWatch client class:
* paginate('list_metrics').
* put_metric_data.
For more information about CloudWatch metrics, see Using Amazon
CloudWatch Metrics in the *Amazon CloudWatch User Guide*.
All the example code for the Amazon Web Services (AWS) SDK for Python
is available here on GitHub.
Prerequisite tasks
==================
To set up and run this example, you must first configure your AWS
credentials, as described in Quickstart.
List metrics
============
List the metrics for log events uploaded to CloudWatch Logs.
The example below shows how to:
* List metrics for incoming log events using
  paginate('list_metrics').
For more information about paginators, see Paginators.
Example
-------
import boto3
# Create CloudWatch client
cloudwatch = boto3.client('cloudwatch')
# List metrics through the pagination interface
paginator = cloudwatch.get_paginator('list_metrics')
for response in paginator.paginate(Dimensions=[{'Name': 'LogGroupName'}],
                                   MetricName='IncomingLogEvents',
                                   Namespace='AWS/Logs'):
    print(response['Metrics'])
Publish custom metrics
======================
Publish metric data points to Amazon CloudWatch. Amazon CloudWatch
associates the data points with the specified metric. If the specified
metric does not exist, Amazon CloudWatch creates the metric. When
Amazon CloudWatch creates a metric, it can take up to fifteen minutes
for the metric to appear in calls to ListMetrics.
The example below shows how to:
* Publish custom metrics using put_metric_data.
Example
-------
import boto3
# Create CloudWatch client
cloudwatch = boto3.client('cloudwatch')
# Put custom metrics
cloudwatch.put_metric_data(
    MetricData=[
        {
            'MetricName': 'PAGES_VISITED',
            'Dimensions': [
                {
                    'Name': 'UNIQUE_PAGES',
                    'Value': 'URLS'
                },
            ],
            'Unit': 'None',
            'Value': 1.0
        },
    ],
    Namespace='SITE/TRAFFIC'
)
Using subscription filters in Amazon CloudWatch Logs
****************************************************
This Python example shows you how to create and delete filters for log
events in CloudWatch Logs.
The scenario
============
Subscriptions provide access to a real-time feed of log events from
CloudWatch Logs and deliver that feed to other services, such as an
Amazon Kinesis stream or AWS Lambda, for custom processing, analysis,
or loading to other systems. A subscription filter defines the pattern
to use for filtering which log events are delivered to your AWS
resource.
In this example, Python code is used to list, create, and delete a
subscription filter in CloudWatch Logs. The destination for the log
events is a Lambda function. The code uses the AWS SDK for Python to
manage subscription filters using these methods of the CloudWatchLogs
client class:
* get_paginator('describe_subscription_filters').
* put_subscription_filter.
* delete_subscription_filter.
For more information about CloudWatch Logs subscriptions, see Real-
time Processing of Log Data with Subscriptions in the Amazon
CloudWatch Logs User Guide.
All the example code for the Amazon Web Services (AWS) SDK for Python
is available here on GitHub.
Prerequisite tasks
==================
* Configure your AWS credentials, as described in Quickstart.
* Create a Lambda function as the destination for log events. You will
need to use the ARN of this function. For more information about
setting up a Lambda function, see Subscription Filters with AWS
Lambda in the *Amazon CloudWatch Logs User Guide*.
* Create an IAM role whose policy grants permission to invoke the
Lambda function you created and grants full access to CloudWatch
Logs or apply the following policy to the execution role you create
for the Lambda function. For more information about creating an IAM
role, see Creating a Role to Delegate Permissions to an AWS Service
in the *IAM User Guide*.
Use the following role policy when creating the IAM role.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:*:*:*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "lambda:InvokeFunction"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}
List existing subscription filters
==================================
List the subscription filters for the specified log group.
The example below shows how to:
* List subscription filters using
get_paginator('describe_subscription_filters').
For more information about paginators, see Paginators.
Example
-------
import boto3
# Create CloudWatchLogs client
cloudwatch_logs = boto3.client('logs')
# List subscription filters through the pagination interface
paginator = cloudwatch_logs.get_paginator('describe_subscription_filters')
for response in paginator.paginate(logGroupName='GROUP_NAME'):
    print(response['subscriptionFilters'])
Create a subscription filter
============================
Create or update a subscription filter and associate it with the
specified log group.
The example below shows how to:
* Create a subscription filter using put_subscription_filter.
Example
-------
import boto3
# Create CloudWatchLogs client
cloudwatch_logs = boto3.client('logs')
# Create a subscription filter
cloudwatch_logs.put_subscription_filter(
destinationArn='LAMBDA_FUNCTION_ARN',
filterName='FILTER_NAME',
filterPattern='ERROR',
logGroupName='LOG_GROUP',
)
Delete a subscription filter
============================
The example below shows how to:
* Delete a subscription filter using delete_subscription_filter.
Example
-------
import boto3
# Create CloudWatchLogs client
cloudwatch_logs = boto3.client('logs')
# Delete a subscription filter
cloudwatch_logs.delete_subscription_filter(
filterName='FILTER_NAME',
logGroupName='LOG_GROUP',
)
Amazon CloudWatch examples
**************************
You can use the following examples to access Amazon CloudWatch by
using Boto3. For more information about
CloudWatch, see the CloudWatch Developer Guide.
**Examples**
* Creating alarms in Amazon CloudWatch
* Using alarm actions in Amazon CloudWatch
* Getting metrics from Amazon CloudWatch
* Sending events to Amazon CloudWatch Events
* Using subscription filters in Amazon CloudWatch Logs
Creating custom email templates with Amazon SES
***********************************************
Amazon Simple Email Service (SES) enables you to send emails that are
personalized for each recipient by using templates. Templates include
a subject line and the text and HTML parts of the email body. The
subject and body sections can also contain unique values that are
personalized for each recipient.
For more information, see Sending Personalized Email Using the Amazon
SES API.
The following examples show how to:
* Create an email template using create_template().
* List all email templates using list_templates().
* Retrieve an email template using get_template().
* Update an email template using update_template().
* Remove an email template using delete_template().
* Send a templated email using send_templated_email().
Prerequisite tasks
==================
To set up and run this example, you must first complete these tasks:
* Configure your AWS credentials, as described in Quickstart.
Create an email template
========================
To create a template to send personalized email messages, use the
CreateTemplate operation. The template can be used by any account
authorized to send messages in the AWS Region to which the template is
added.
Note:
SES doesn't validate your HTML, so be sure that "HtmlPart" is valid
before sending an email.
Example
-------
import boto3
# Create SES client
ses = boto3.client('ses')
response = ses.create_template(
Template = {
'TemplateName' : 'TEMPLATE_NAME',
'SubjectPart' : 'SUBJECT_LINE',
'TextPart' : 'TEXT_CONTENT',
'HtmlPart' : 'HTML_CONTENT'
}
)
print(response)
Get an email template
=====================
To view the content for an existing email template including the
subject line, HTML body, and plain text, use the GetTemplate
operation. Only TemplateName is required.
Example
-------
import boto3
# Create SES client
ses = boto3.client('ses')
response = ses.get_template(
TemplateName = 'TEMPLATE_NAME'
)
print(response)
List all email templates
========================
To retrieve a list of all email templates that are associated with
your AWS account in the current AWS Region, use the ListTemplates
operation.
Example
-------
import boto3
# Create SES client
ses = boto3.client('ses')
response = ses.list_templates(
MaxItems=10
)
print(response)
Update an email template
========================
To change the content for a specific email template including the
subject line, HTML body, and plain text, use the UpdateTemplate
operation.
Example
-------
import boto3
# Create SES client
ses = boto3.client('ses')
response = ses.update_template(
Template={
'TemplateName': 'TEMPLATE_NAME',
'SubjectPart' : 'SUBJECT_LINE',
'TextPart' : 'TEXT_CONTENT',
'HtmlPart' : 'HTML_CONTENT'
}
)
print(response)
Send an email with a template
=============================
To use a template to send an email to recipients, use the
SendTemplatedEmail operation.
Example
-------
import boto3
# Create SES client
ses = boto3.client('ses')
response = ses.send_templated_email(
Source='EMAIL_ADDRESS',
Destination={
'ToAddresses': [
'EMAIL_ADDRESS',
],
'CcAddresses': [
'EMAIL_ADDRESS',
]
},
ReplyToAddresses=[
'EMAIL_ADDRESS',
],
Template='TEMPLATE_NAME',
TemplateData='{ \"REPLACEMENT_TAG_NAME\":\"REPLACEMENT_VALUE\" }'
)
print(response)
Amazon DynamoDB
***************
By following this guide, you will learn how to use the
"DynamoDB.ServiceResource" and "DynamoDB.Table" resources in order to
create tables, write items to tables, modify existing items, retrieve
items, and query/filter the items in the table.
Creating a new table
====================
In order to create a new table, use the
"DynamoDB.ServiceResource.create_table()" method:
import boto3
# Get the service resource.
dynamodb = boto3.resource('dynamodb')
# Create the DynamoDB table.
table = dynamodb.create_table(
TableName='users',
KeySchema=[
{
'AttributeName': 'username',
'KeyType': 'HASH'
},
{
'AttributeName': 'last_name',
'KeyType': 'RANGE'
}
],
AttributeDefinitions=[
{
'AttributeName': 'username',
'AttributeType': 'S'
},
{
'AttributeName': 'last_name',
'AttributeType': 'S'
},
],
ProvisionedThroughput={
'ReadCapacityUnits': 5,
'WriteCapacityUnits': 5
}
)
# Wait until the table exists.
table.wait_until_exists()
# Print out some data about the table.
print(table.item_count)
Expected output:
0
This creates a table named "users" with "username" as the hash
(partition) key and "last_name" as the range (sort) key. The method
returns a "DynamoDB.Table" resource, which you can use to call
additional methods on the created table.
Using an existing table
=======================
It is also possible to create a "DynamoDB.Table" resource from an
existing table:
import boto3
# Get the service resource.
dynamodb = boto3.resource('dynamodb')
# Instantiate a table resource object without actually
# creating a DynamoDB table. Note that the attributes of this table
# are lazy-loaded: a request is not made nor are the attribute
# values populated until the attributes
# on the table resource are accessed or its load() method is called.
table = dynamodb.Table('users')
# Print out some data about the table.
# This will cause a request to be made to DynamoDB and its attribute
# values will be set based on the response.
print(table.creation_date_time)
Expected output (Please note that the actual times will probably not
match up):
2015-06-26 12:42:45.149000-07:00
Creating a new item
===================
Once you have a "DynamoDB.Table" resource you can add new items to the
table using "DynamoDB.Table.put_item()":
table.put_item(
Item={
'username': 'janedoe',
'first_name': 'Jane',
'last_name': 'Doe',
'age': 25,
'account_type': 'standard_user',
}
)
For all of the valid types that can be used for an item, refer to
Valid DynamoDB types.
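As a quick illustration (not an exhaustive list), the resource
interface maps common Python types to DynamoDB types; in particular,
numbers should be passed as "int" or "decimal.Decimal" rather than
"float". The item below is hypothetical and assumes the "users" table
created earlier:
from decimal import Decimal

table.put_item(
    Item={
        'username': 'johnsmith',          # str -> S (string); hash key
        'last_name': 'Smith',             # range key, also a string
        'age': Decimal('31'),             # Decimal/int -> N (number); avoid float
        'verified': True,                 # bool -> BOOL
        'nickname': None,                 # None -> NULL
        'roles': {'admin', 'auditor'},    # set of str -> SS (string set)
        'scores': [Decimal('9.5'), 7],    # list -> L
        'address': {'city': 'Boston'},    # dict -> M (map)
        'avatar': b'\x89PNG...',          # bytes -> B (binary)
    }
)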
Getting an item
===============
You can then retrieve the object using "DynamoDB.Table.get_item()":
response = table.get_item(
Key={
'username': 'janedoe',
'last_name': 'Doe'
}
)
item = response['Item']
print(item)
Expected output:
{u'username': u'janedoe',
u'first_name': u'Jane',
u'last_name': u'Doe',
u'account_type': u'standard_user',
u'age': Decimal('25')}
Updating an item
================
You can then update attributes of the item in the table using
"DynamoDB.Table.update_item()":
table.update_item(
Key={
'username': 'janedoe',
'last_name': 'Doe'
},
UpdateExpression='SET age = :val1',
ExpressionAttributeValues={
':val1': 26
}
)
Then if you retrieve the item again, it will be updated appropriately:
response = table.get_item(
Key={
'username': 'janedoe',
'last_name': 'Doe'
}
)
item = response['Item']
print(item)
Expected output:
{u'username': u'janedoe',
u'first_name': u'Jane',
u'last_name': u'Doe',
u'account_type': u'standard_user',
u'age': Decimal('26')}
Deleting an item
================
You can also delete the item using "DynamoDB.Table.delete_item()":
table.delete_item(
Key={
'username': 'janedoe',
'last_name': 'Doe'
}
)
Batch writing
=============
If you are loading a lot of data at a time, you can make use of
"DynamoDB.Table.batch_writer()" so you can both speed up the process
and reduce the number of write requests made to the service.
This method returns a handle to a batch writer object that will
automatically handle buffering and sending items in batches. In
addition, the batch writer will also automatically handle any
unprocessed items and resend them as needed. All you need to do is
call "put_item" for any items you want to add, and "delete_item" for
any items you want to delete:
with table.batch_writer() as batch:
    batch.put_item(
        Item={
            'account_type': 'standard_user',
            'username': 'johndoe',
            'first_name': 'John',
            'last_name': 'Doe',
            'age': 25,
            'address': {
                'road': '1 Jefferson Street',
                'city': 'Los Angeles',
                'state': 'CA',
                'zipcode': 90001
            }
        }
    )
    batch.put_item(
        Item={
            'account_type': 'super_user',
            'username': 'janedoering',
            'first_name': 'Jane',
            'last_name': 'Doering',
            'age': 40,
            'address': {
                'road': '2 Washington Avenue',
                'city': 'Seattle',
                'state': 'WA',
                'zipcode': 98109
            }
        }
    )
    batch.put_item(
        Item={
            'account_type': 'standard_user',
            'username': 'bobsmith',
            'first_name': 'Bob',
            'last_name': 'Smith',
            'age': 18,
            'address': {
                'road': '3 Madison Lane',
                'city': 'Louisville',
                'state': 'KY',
                'zipcode': 40213
            }
        }
    )
    batch.put_item(
        Item={
            'account_type': 'super_user',
            'username': 'alicedoe',
            'first_name': 'Alice',
            'last_name': 'Doe',
            'age': 27,
            'address': {
                'road': '1 Jefferson Street',
                'city': 'Los Angeles',
                'state': 'CA',
                'zipcode': 90001
            }
        }
    )
The batch writer can even handle a very large number of writes to the
table.
with table.batch_writer() as batch:
    for i in range(50):
        batch.put_item(
            Item={
                'account_type': 'anonymous',
                'username': 'user' + str(i),
                'first_name': 'unknown',
                'last_name': 'unknown'
            }
        )
The batch writer can also de-duplicate requests. A single batch write
request cannot contain more than one operation on the same primary
key; otherwise the service rejects it with
"botocore.exceptions.ClientError: An error occurred
(ValidationException) when calling the BatchWriteItem operation:
Provided list of item keys contains duplicates".
To work around this limitation, specify
"overwrite_by_pkeys=['partition_key', 'sort_key']" when creating the
batch writer. If a request already in the buffer has the same
(composite) primary key values as a newly added one, the buffered
request is dropped, so the result is eventually consistent with a
stream of individual put/delete operations on the same item.
with table.batch_writer(overwrite_by_pkeys=['partition_key', 'sort_key']) as batch:
    batch.put_item(
        Item={
            'partition_key': 'p1',
            'sort_key': 's1',
            'other': '111',
        }
    )
    batch.put_item(
        Item={
            'partition_key': 'p1',
            'sort_key': 's1',
            'other': '222',
        }
    )
    batch.delete_item(
        Key={
            'partition_key': 'p1',
            'sort_key': 's2'
        }
    )
    batch.put_item(
        Item={
            'partition_key': 'p1',
            'sort_key': 's2',
            'other': '444',
        }
    )
After de-duplication, only the following requests are sent:
batch.put_item(
Item={
'partition_key': 'p1',
'sort_key': 's1',
'other': '222',
}
)
batch.put_item(
Item={
'partition_key': 'p1',
'sort_key': 's2',
'other': '444',
}
)
Querying and scanning
=====================
With the table full of items, you can then query or scan the items in
the table using the "DynamoDB.Table.query()" or
"DynamoDB.Table.scan()" methods respectively. To add conditions to
scanning and querying the table, you will need to import the
"boto3.dynamodb.conditions.Key" and "boto3.dynamodb.conditions.Attr"
classes. The "boto3.dynamodb.conditions.Key" should be used when the
condition is related to the key of the item. The
"boto3.dynamodb.conditions.Attr" should be used when the condition is
related to an attribute of the item:
from boto3.dynamodb.conditions import Key, Attr
This queries for all of the users whose "username" key equals
"johndoe":
response = table.query(
KeyConditionExpression=Key('username').eq('johndoe')
)
items = response['Items']
print(items)
Expected output:
[{u'username': u'johndoe',
u'first_name': u'John',
u'last_name': u'Doe',
u'account_type': u'standard_user',
u'age': Decimal('25'),
u'address': {u'city': u'Los Angeles',
u'state': u'CA',
u'zipcode': Decimal('90001'),
u'road': u'1 Jefferson Street'}}]
Similarly you can scan the table based on attributes of the items. For
example, this scans for all the users whose "age" is less than "27":
response = table.scan(
FilterExpression=Attr('age').lt(27)
)
items = response['Items']
print(items)
Expected output:
[{u'username': u'johndoe',
u'first_name': u'John',
u'last_name': u'Doe',
u'account_type': u'standard_user',
u'age': Decimal('25'),
u'address': {u'city': u'Los Angeles',
u'state': u'CA',
u'zipcode': Decimal('90001'),
u'road': u'1 Jefferson Street'}},
{u'username': u'bobsmith',
u'first_name': u'Bob',
u'last_name': u'Smith',
u'account_type': u'standard_user',
u'age': Decimal('18'),
u'address': {u'city': u'Louisville',
u'state': u'KY',
u'zipcode': Decimal('40213'),
u'road': u'3 Madison Lane'}}]
You are also able to chain conditions together using the logical
operators: "&" (and), "|" (or), and "~" (not). For example, this scans
for all users whose "first_name" starts with "J" and whose
"account_type" is "super_user":
response = table.scan(
FilterExpression=Attr('first_name').begins_with('J') & Attr('account_type').eq('super_user')
)
items = response['Items']
print(items)
Expected output:
[{u'username': u'janedoering',
u'first_name': u'Jane',
u'last_name': u'Doering',
u'account_type': u'super_user',
u'age': Decimal('40'),
u'address': {u'city': u'Seattle',
u'state': u'WA',
u'zipcode': Decimal('98109'),
u'road': u'2 Washington Avenue'}}]
You can even scan based on conditions of a nested attribute. For
example this scans for all users whose "state" in their "address" is
"CA":
response = table.scan(
FilterExpression=Attr('address.state').eq('CA')
)
items = response['Items']
print(items)
Expected output:
[{u'username': u'johndoe',
u'first_name': u'John',
u'last_name': u'Doe',
u'account_type': u'standard_user',
u'age': Decimal('25'),
u'address': {u'city': u'Los Angeles',
u'state': u'CA',
u'zipcode': Decimal('90001'),
u'road': u'1 Jefferson Street'}},
{u'username': u'alicedoe',
u'first_name': u'Alice',
u'last_name': u'Doe',
u'account_type': u'super_user',
u'age': Decimal('27'),
u'address': {u'city': u'Los Angeles',
u'state': u'CA',
u'zipcode': Decimal('90001'),
u'road': u'1 Jefferson Street'}}]
For more information on the various conditions you can use for queries
and scans, refer to DynamoDB conditions.
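For example, a query can also combine conditions on both parts of a
composite primary key. The sketch below assumes the "users" table
defined earlier ("username" hash key, "last_name" range key) and is
illustrative only:
from boto3.dynamodb.conditions import Key

# Query for a specific user whose last name starts with 'D'
response = table.query(
    KeyConditionExpression=Key('username').eq('janedoe') & Key('last_name').begins_with('D')
)
for item in response['Items']:
    print(item['username'], item['last_name'])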
Deleting a table
================
Finally, if you want to delete your table call
"DynamoDB.Table.delete()":
table.delete()
Access permissions
******************
This section demonstrates how to manage the access permissions for an
S3 bucket or object by using an access control list (ACL).
Get a bucket access control list
================================
The example retrieves the current access control list of an S3 bucket.
import boto3
# Retrieve a bucket's ACL
s3 = boto3.client('s3')
result = s3.get_bucket_acl(Bucket='amzn-s3-demo-bucket')
print(result)
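To change a bucket's access control list, you can apply one of the
canned ACLs with the "put_bucket_acl" method. The following is a
minimal sketch that resets the example bucket to the default "private"
ACL; note that buckets with ACLs disabled reject this call.
import boto3

# Apply a canned ACL to the bucket (here: private, the default)
s3 = boto3.client('s3')
s3.put_bucket_acl(Bucket='amzn-s3-demo-bucket', ACL='private')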
Creating alarms in Amazon CloudWatch
************************************
This Python example shows you how to:
* Get basic information about your CloudWatch alarms
* Create and delete a CloudWatch alarm
The scenario
============
An alarm watches a single metric over a time period you specify, and
performs one or more actions based on the value of the metric relative
to a given threshold over a number of time periods.
In this example, Python code is used to create alarms in CloudWatch.
The code uses the AWS SDK for Python to create alarms using these
methods of the CloudWatch client class:
* paginate(StateValue='INSUFFICIENT_DATA').
* put_metric_alarm.
* delete_alarms.
For more information about CloudWatch alarms, see Creating Amazon
CloudWatch Alarms in the *Amazon CloudWatch User Guide*.
All the example code for the Amazon Web Services (AWS) SDK for Python
is available here on GitHub.
Prerequisite tasks
==================
To set up and run this example, you must first configure your AWS
credentials, as described in Quickstart.
Describe alarms
===============
The example below shows how to:
* List metric alarms for insufficient data using
paginate(StateValue='INSUFFICIENT_DATA').
For more information about paginators, see Paginators.
Example
-------
import boto3
# Create CloudWatch client
cloudwatch = boto3.client('cloudwatch')
# List alarms of insufficient data through the pagination interface
paginator = cloudwatch.get_paginator('describe_alarms')
for response in paginator.paginate(StateValue='INSUFFICIENT_DATA'):
    print(response['MetricAlarms'])
Create an alarm for a CloudWatch Metric alarm
=============================================
Create or update an alarm and associate it with the specified metric
alarm. Optionally, this operation can associate one or more Amazon SNS
resources with the alarm.
When this operation creates an alarm, the alarm state is immediately
set to "INSUFFICIENT_DATA". The alarm is evaluated and its state is
set appropriately. Any actions associated with the state are then
executed.
When you update an existing alarm, its state is left unchanged, but
the update completely overwrites the previous configuration of the
alarm.
The example below shows how to:
* Create or update a metric alarm using put_metric_alarm.
Example
-------
import boto3
# Create CloudWatch client
cloudwatch = boto3.client('cloudwatch')
# Create alarm
cloudwatch.put_metric_alarm(
AlarmName='Web_Server_CPU_Utilization',
ComparisonOperator='GreaterThanThreshold',
EvaluationPeriods=1,
MetricName='CPUUtilization',
Namespace='AWS/EC2',
Period=60,
Statistic='Average',
Threshold=70.0,
ActionsEnabled=False,
AlarmDescription='Alarm when server CPU exceeds 70%',
Dimensions=[
{
'Name': 'InstanceId',
'Value': 'INSTANCE_ID'
},
],
Unit='Seconds'
)
Delete an alarm
===============
Delete the specified alarms. In the event of an error, no alarms are
deleted.
The example below shows how to:
* Delete a metric alarm using delete_alarms.
Example
-------
import boto3
# Create CloudWatch client
cloudwatch = boto3.client('cloudwatch')
# Delete alarm
cloudwatch.delete_alarms(
AlarmNames=['Web_Server_CPU_Utilization'],
)
Using dead-letter queues in Amazon SQS
**************************************
This Python example shows you how to use a queue to receive and hold
messages that other (source) queues can't process successfully.
The scenario
============
A dead letter queue is one that other (source) queues can target for
messages that can't be processed successfully. You can set aside and
isolate these messages in the dead letter queue to determine why their
processing did not succeed. You must individually configure each
source queue that sends messages to a dead letter queue. Multiple
queues can target a single dead letter queue.
In this example, Python code is used to route messages to a dead
letter queue. The code uses the SDK for Python to use dead letter
queues using this method of the SQS client class:
* set_queue_attributes.
For more information about Amazon SQS dead letter queues, see Using
Amazon SQS Dead Letter Queues in the *Amazon Simple Queue Service
Developer Guide*.
Prerequisite tasks
==================
To set up and run this example, you must first complete these tasks:
* Create an Amazon SQS queue to serve as a dead letter queue. For an
example of creating an Amazon SQS queue, see Create a queue.
Configure source queues
=======================
After you create a queue to act as a dead letter queue, you must
configure the other queues that route unprocessed messages to the dead
letter queue. To do this, specify a redrive policy that identifies the
queue to use as a dead letter queue and the maximum number of receives
by individual messages before they are routed to the dead letter
queue.
The example below shows how to:
* Configure a source queue using set_queue_attributes.
Example
-------
import json
import boto3
# Create SQS client
sqs = boto3.client('sqs')
queue_url = 'SOURCE_QUEUE_URL'
dead_letter_queue_arn = 'DEAD_LETTER_QUEUE_ARN'
redrive_policy = {
'deadLetterTargetArn': dead_letter_queue_arn,
'maxReceiveCount': '10'
}
# Configure queue to send messages to dead letter queue
sqs.set_queue_attributes(
QueueUrl=queue_url,
Attributes={
'RedrivePolicy': json.dumps(redrive_policy)
}
)
Amazon EC2 examples
*******************
Amazon Elastic Compute Cloud (Amazon EC2) is a web service that
provides resizable computing capacity (servers in Amazon's data
centers) that you use to build and host your software systems.
You can use the following examples to access Amazon EC2 using the
Amazon Web Services (AWS) SDK for Python. For more information about
Amazon EC2, see the Amazon EC2 Documentation.
**Examples**
* Managing Amazon EC2 instances
* Working with Amazon EC2 key pairs
* Describe Amazon EC2 Regions and Availability Zones
* Working with security groups in Amazon EC2
* Using Elastic IP addresses in Amazon EC2
Using Elastic IP addresses in Amazon EC2
****************************************
This Python example shows you how to:
* Get descriptions of your Elastic IP addresses
* Allocate an Elastic IP address
* Release an Elastic IP address
The scenario
============
An Elastic IP address is a static IP address designed for dynamic
cloud computing. An Elastic IP address is associated with your AWS
account. It is a public IP address, which is reachable from the
Internet. If your instance does not have a public IP address, you can
associate an Elastic IP address with your instance to enable
communication with the Internet.
In this example, Python code performs several Amazon EC2 operations
involving Elastic IP addresses. The code uses the AWS SDK for Python
to manage Elastic IP addresses using these methods of the EC2 client class:
* describe_addresses.
* allocate_address.
* release_address.
For more information about Elastic IP addresses in Amazon EC2, see
Elastic IP Addresses in the *Amazon EC2 User Guide for Linux
Instances* or Elastic IP Addresses in the *Amazon EC2 User Guide for
Windows Instances*.
All the example code for the Amazon Web Services (AWS) SDK for Python
is available here on GitHub.
Prerequisite tasks
==================
To set up and run this example, you must first configure your AWS
credentials, as described in Quickstart.
Describe Elastic IP addresses
=============================
An Elastic IP address is a static IPv4 address designed for dynamic
cloud computing. An Elastic IP address is associated with your AWS
account. With an Elastic IP address, you can mask the failure of an
instance or software by rapidly remapping the address to another
instance in your account.
The example below shows how to:
* Describe Elastic IP addresses using describe_addresses.
Example
-------
import boto3
ec2 = boto3.client('ec2')
filters = [
{'Name': 'domain', 'Values': ['vpc']}
]
response = ec2.describe_addresses(Filters=filters)
print(response)
Allocate and associate an Elastic IP address with an Amazon EC2 instance
========================================================================
An *Elastic IP address* is a static IPv4 address designed for dynamic
cloud computing. An Elastic IP address is associated with your AWS
account. With an Elastic IP address, you can mask the failure of an
instance or software by rapidly remapping the address to another
instance in your account.
The example below shows how to:
* Acquire an Elastic IP address using allocate_address.
Example
-------
import boto3
from botocore.exceptions import ClientError
ec2 = boto3.client('ec2')
try:
    allocation = ec2.allocate_address(Domain='vpc')
    response = ec2.associate_address(AllocationId=allocation['AllocationId'],
                                     InstanceId='INSTANCE_ID')
    print(response)
except ClientError as e:
    print(e)
Release an Elastic IP address
=============================
After you release an Elastic IP address, it is returned to the IP
address pool and might become unavailable to you. Be sure to update your
DNS records and any servers or devices that communicate with the
address. If you attempt to release an Elastic IP address that you
already released, you'll get an "AuthFailure" error if the address is
already allocated to another AWS account.
The example below shows how to:
* Release the specified Elastic IP address using release_address.
Example
-------
import boto3
from botocore.exceptions import ClientError
ec2 = boto3.client('ec2')
try:
    response = ec2.release_address(AllocationId='ALLOCATION_ID')
    print('Address released')
except ClientError as e:
    print(e)
Bucket CORS configuration
*************************
Cross Origin Resource Sharing (CORS) enables client web applications
in one domain to access resources in another domain. An S3 bucket can
be configured to enable cross-origin requests. The configuration
defines rules that specify the allowed origins, HTTP methods (GET,
PUT, etc.), and other elements.
Retrieve a bucket CORS configuration
====================================
Retrieve a bucket's CORS configuration by calling the AWS SDK for
Python "get_bucket_cors" method.
import logging
import boto3
from botocore.exceptions import ClientError
def get_bucket_cors(bucket_name):
    """Retrieve the CORS configuration rules of an Amazon S3 bucket

    :param bucket_name: string
    :return: List of the bucket's CORS configuration rules. If no CORS
        configuration exists, return empty list. If error, return None.
    """

    # Retrieve the CORS configuration
    s3 = boto3.client('s3')
    try:
        response = s3.get_bucket_cors(Bucket=bucket_name)
    except ClientError as e:
        if e.response['Error']['Code'] == 'NoSuchCORSConfiguration':
            return []
        else:
            # AllAccessDisabled error == bucket not found
            logging.error(e)
            return None
    return response['CORSRules']
Set a bucket CORS configuration
===============================
A bucket's CORS configuration can be set by calling the
"put_bucket_cors" method.
# Define the configuration rules
cors_configuration = {
'CORSRules': [{
'AllowedHeaders': ['Authorization'],
'AllowedMethods': ['GET', 'PUT'],
'AllowedOrigins': ['*'],
'ExposeHeaders': ['ETag', 'x-amz-request-id'],
'MaxAgeSeconds': 3000
}]
}
# Set the CORS configuration
s3 = boto3.client('s3')
s3.put_bucket_cors(Bucket='amzn-s3-demo-bucket',
CORSConfiguration=cors_configuration)
Amazon S3
*********
Boto 2.x contains a number of customizations to make working with
Amazon S3 buckets and keys easy. Boto3 exposes these same objects
through its resources interface in a unified and consistent way.
Creating the connection
=======================
Boto3 has both low-level clients and higher-level resources. For
Amazon S3, the higher-level resources are the most similar to Boto
2.x's "s3" module:
# Boto 2.x
import boto
s3_connection = boto.connect_s3()
# Boto3
import boto3
s3 = boto3.resource('s3')
Creating a bucket
=================
Creating a bucket in Boto 2 and Boto3 is very similar, except that in
Boto3 all action parameters must be passed via keyword arguments and a
bucket configuration must be specified manually:
# Boto 2.x
s3_connection.create_bucket('amzn-s3-demo-bucket')
s3_connection.create_bucket('amzn-s3-demo-bucket', location=Location.USWest)
# Boto3
s3.create_bucket(Bucket='amzn-s3-demo-bucket')
s3.create_bucket(Bucket='amzn-s3-demo-bucket', CreateBucketConfiguration={
'LocationConstraint': 'us-west-1'})
Storing data
============
Storing data from a file, stream, or string is easy:
# Boto 2.x
from boto.s3.key import Key
key = Key('hello.txt')
key.set_contents_from_file('/tmp/hello.txt')
# Boto3
s3.Object('amzn-s3-demo-bucket', 'hello.txt').put(Body=open('/tmp/hello.txt', 'rb'))
Accessing a bucket
==================
Getting a bucket is easy with Boto3's resources; however, these do not
automatically validate whether a bucket exists:
# Boto 2.x
bucket = s3_connection.get_bucket('amzn-s3-demo-bucket', validate=False)
exists = s3_connection.lookup('amzn-s3-demo-bucket')
# Boto3
import botocore
bucket = s3.Bucket('amzn-s3-demo-bucket')
exists = True
try:
    s3.meta.client.head_bucket(Bucket='amzn-s3-demo-bucket')
except botocore.exceptions.ClientError as e:
    # If a client error is thrown, then check that it was a 404 error.
    # If it was a 404 error, then the bucket does not exist.
    error_code = e.response['Error']['Code']
    if error_code == '404':
        exists = False
Deleting a bucket
=================
All of the keys in a bucket must be deleted before the bucket itself
can be deleted:
# Boto 2.x
for key in bucket:
    key.delete()
bucket.delete()
# Boto3
for key in bucket.objects.all():
    key.delete()
bucket.delete()
Iteration of buckets and keys
=============================
Bucket and key objects are no longer iterable, but now provide
collection attributes which can be iterated:
# Boto 2.x
for bucket in s3_connection:
    for key in bucket:
        print(key.name)
# Boto3
for bucket in s3.buckets.all():
    for key in bucket.objects.all():
        print(key.key)
Access controls
===============
Getting and setting canned access control values in Boto3 operates on
an "ACL" resource object:
# Boto 2.x
bucket.set_acl('public-read')
key.set_acl('public-read')
# Boto3
bucket.Acl().put(ACL='public-read')
obj.Acl().put(ACL='public-read')
It's also possible to retrieve the policy grant information:
# Boto 2.x
acp = bucket.get_acl()
for grant in acp.acl.grants:
    print(grant.display_name, grant.permission)
# Boto3
acl = bucket.Acl()
for grant in acl.grants:
    print(grant['Grantee']['DisplayName'], grant['Permission'])
Boto3 lacks the grant shortcut methods present in Boto 2.x, but it is
still fairly simple to add grantees:
# Boto 2.x
bucket.add_email_grant('READ', 'user@domain.tld')
# Boto3
bucket.Acl().put(GrantRead='emailAddress=user@domain.tld')
Key metadata
============
It's possible to set arbitrary metadata on keys:
# Boto 2.x
key.set_metadata('meta1', 'This is my metadata value')
print(key.get_metadata('meta1'))
# Boto3
key.put(Metadata={'meta1': 'This is my metadata value'})
print(key.metadata['meta1'])
Managing CORS configurations
============================
Boto3 allows you to manage the cross-origin resource sharing (CORS)
configuration for S3 buckets:
# Boto 2.x
cors = bucket.get_cors()
config = CORSConfiguration()
config.add_rule('GET', '*')
bucket.set_cors(config)
bucket.delete_cors()
# Boto3
cors = bucket.Cors()
config = {
'CORSRules': [
{
'AllowedMethods': ['GET'],
'AllowedOrigins': ['*']
}
]
}
cors.put(CORSConfiguration=config)
cors.delete()
Amazon S3 examples
******************
Amazon Simple Storage Service (Amazon S3) is an object storage service
that offers scalability, data availability, security, and performance.
This section demonstrates how to use the AWS SDK for Python to access
Amazon S3 services.
**Examples**
* Amazon S3 buckets
* Uploading files
* Downloading files
* File transfer configuration
* Presigned URLs
* Bucket policies
* Access permissions
* Using an Amazon S3 bucket as a static web host
* Bucket CORS configuration
* Multi-Region Access Points
* AWS PrivateLink for Amazon S3
Working with IAM policies
*************************
This Python example shows you how to create and get IAM policies and
attach and detach IAM policies from roles.
The scenario
============
You grant permissions to a user by creating a policy, which is a
document that lists the actions that a user can perform and the
resources those actions can affect. Any actions or resources that are
not explicitly allowed are denied by default. Policies can be created
and attached to users, groups of users, roles assumed by users, and
resources.
In this example, Python code is used to manage policies in IAM. The code
uses the Amazon Web Services (AWS) SDK for Python to create and get
policies, and to attach and detach role policies, using these
methods of the IAM client class:
* create_policy.
* get_policy.
* attach_role_policy.
* detach_role_policy.
All the example code for the Amazon Web Services (AWS) SDK for Python
is available here on GitHub.
For more information about IAM policies, see Overview of Access
Management: Permissions and Policies in the IAM User Guide.
Prerequisite tasks
==================
To set up and run this example, you must first configure your AWS
credentials, as described in Quickstart.
Create an IAM policy
====================
Create a new managed policy for your AWS account.
This operation creates a policy version with a version identifier of
"v1" and sets "v1" as the policy's default version. For more
information about policy versions, see Versioning for Managed Policies
in the *IAM User Guide*.
The example below shows how to:
* Create a new managed policy using create_policy.
All the example code for the Amazon Web Services (AWS) SDK for Python
is available here on GitHub.
Example
-------
import json
import boto3
# Create IAM client
iam = boto3.client('iam')
# Create a policy
my_managed_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "logs:CreateLogGroup",
            "Resource": "RESOURCE_ARN"
        },
        {
            "Effect": "Allow",
            "Action": [
                "dynamodb:DeleteItem",
                "dynamodb:GetItem",
                "dynamodb:PutItem",
                "dynamodb:Scan",
                "dynamodb:UpdateItem"
            ],
            "Resource": "RESOURCE_ARN"
        }
    ]
}
response = iam.create_policy(
PolicyName='myDynamoDBPolicy',
PolicyDocument=json.dumps(my_managed_policy)
)
print(response)
Get an IAM policy
=================
Get information about the specified managed policy, including the
policy's default version and the total number of IAM users, groups,
and roles to which the policy is attached. To get the list of the
specific users, groups, and roles that the policy is attached to, use
the "list_entities_for_policy" API. This API returns metadata about
the policy. To get the actual policy document for a specific version
of the policy, use the "get_policy_version" API.
This API gets information about managed policies. To get information
about an inline policy that is embedded with an IAM user, group, or
role, use the "get_user_policy", "get_group_policy", or
"get_role_policy" API.
The example below shows how to:
* Get information about a managed policy using get_policy.
Example
-------
import boto3
# Create IAM client
iam = boto3.client('iam')
# Get a policy
response = iam.get_policy(
PolicyArn='arn:aws:iam::aws:policy/AWSLambdaExecute'
)
print(response['Policy'])
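If you also need the policy document itself, the following is a
minimal sketch using "get_policy_version". It reuses the same
AWSLambdaExecute managed policy as the example above and looks up its
default version rather than assuming a particular version ID.
import boto3

# Create IAM client
iam = boto3.client('iam')

# Look up the policy's default version, then fetch that version's document
policy_arn = 'arn:aws:iam::aws:policy/AWSLambdaExecute'
policy = iam.get_policy(PolicyArn=policy_arn)
version_id = policy['Policy']['DefaultVersionId']

version = iam.get_policy_version(PolicyArn=policy_arn, VersionId=version_id)
print(version['PolicyVersion']['Document'])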
Attach a managed role policy
============================
When you attach a managed policy to a role, the managed policy becomes
part of the role's permission (access) policy. You cannot use a
managed policy as the role's trust policy. The role's trust policy is
created at the same time as the role, using "create_role". You can
update a role's trust policy using "update_assume_role_policy".
Use this API to attach a managed policy to a role. To embed an inline
policy in a role, use "put_role_policy".
The example below shows how to:
* Attach a managed policy to an IAM role using attach_role_policy.
Example
-------
import boto3
# Create IAM client
iam = boto3.client('iam')
# Attach a role policy
iam.attach_role_policy(
PolicyArn='arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess',
RoleName='AmazonDynamoDBFullAccess'
)
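For comparison, embedding an inline policy in a role uses
"put_role_policy" instead. In the sketch below, the role name, policy
name, and policy document are placeholders.
import json

import boto3

# Create IAM client
iam = boto3.client('iam')

# Embed an inline policy directly in the role
iam.put_role_policy(
    RoleName='ROLE_NAME',
    PolicyName='INLINE_POLICY_NAME',
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "dynamodb:GetItem",
            "Resource": "RESOURCE_ARN"
        }]
    })
)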
Detach a managed role policy
============================
Detach the specified managed policy from the specified role.
A role can also have inline policies embedded with it. To delete an
inline policy, use the "delete_role_policy" API. For information about
policies, see Managed Policies and Inline Policies in the *IAM User
Guide*.
The example below shows how to:
* Detach a managed role policy using detach_role_policy.
Example
-------
import boto3
# Create IAM client
iam = boto3.client('iam')
# Detach a role policy
iam.detach_role_policy(
PolicyArn='arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess',
RoleName='AmazonDynamoDBFullAccess'
)
Using an Amazon S3 bucket as a static web host
**********************************************
An S3 bucket can be configured to host a static website.
Retrieve a website configuration
================================
Retrieve a bucket's website configuration by calling the AWS SDK for
Python "get_bucket_website" method.
import boto3
# Retrieve the website configuration
s3 = boto3.client('s3')
result = s3.get_bucket_website(Bucket='amzn-s3-demo-website-bucket')
Set a website configuration
===========================
A bucket's website configuration can be set by calling the
"put_bucket_website" method.
# Define the website configuration
website_configuration = {
'ErrorDocument': {'Key': 'error.html'},
'IndexDocument': {'Suffix': 'index.html'},
}
# Set the website configuration
s3 = boto3.client('s3')
s3.put_bucket_website(Bucket='amzn-s3-demo-website-bucket',
WebsiteConfiguration=website_configuration)
Delete a website configuration
==============================
A bucket's website configuration can be deleted by calling the
"delete_bucket_website" method.
# Delete the website configuration
s3 = boto3.client('s3')
s3.delete_bucket_website(Bucket='amzn-s3-demo-website-bucket')
Managing visibility timeout in Amazon SQS
*****************************************
This Python example shows you how to specify the time interval during
which messages received by a queue are not visible.
The scenario
============
In this example, Python code is used to manage visibility timeout. The
code uses the SDK for Python to manage visibility timeout by using
this method of the SQS client class:
* set_queue_attributes.
For more information about Amazon SQS visibility timeout, see
Visibility Timeout in the *Amazon Simple Queue Service Developer
Guide*.
Prerequisite tasks
==================
To set up and run this example, you must first complete these tasks:
* Create an Amazon SQS queue. For an example of creating an Amazon SQS
queue, see Create a queue.
* Send a message to the queue. For an example of sending a message to
a queue, see Send a message to a queue.
Change the visibility timeout
=============================
The example below shows how to:
* Change the visibility timeout of a received message using
  change_message_visibility (a queue-level sketch using
  set_queue_attributes follows the example).
Example
-------
import boto3
# Create SQS client
sqs = boto3.client('sqs')
queue_url = 'SQS_QUEUE_URL'
# Receive message from SQS queue
response = sqs.receive_message(
QueueUrl=queue_url,
AttributeNames=[
'SentTimestamp'
],
MaxNumberOfMessages=1,
MessageAttributeNames=[
'All'
],
)
message = response['Messages'][0]
receipt_handle = message['ReceiptHandle']
# Change visibility timeout of message from queue
sqs.change_message_visibility(
QueueUrl=queue_url,
ReceiptHandle=receipt_handle,
VisibilityTimeout=20
)
print('Received and changed visibility timeout of message: %s' % message)
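The "set_queue_attributes" method mentioned in the scenario changes
the queue's default visibility timeout, which applies to messages
received from the queue in the future rather than to a single message.
A minimal sketch follows; the 60-second value is illustrative only.
import boto3

# Create SQS client
sqs = boto3.client('sqs')

# Change the queue's default visibility timeout (applies to future receives)
sqs.set_queue_attributes(
    QueueUrl='SQS_QUEUE_URL',
    Attributes={'VisibilityTimeout': '60'}
)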
Encrypt and decrypt a file
**************************
The example program uses AWS KMS keys to encrypt and decrypt a file.
A master key, also called a Customer Master Key or CMK, is created and
used to generate a data key. The data key is then used to encrypt a
disk file. The encrypted data key is stored within the encrypted file.
To decrypt the file, the data key is decrypted and then used to
decrypt the rest of the file. This manner of using master and data
keys is called envelope encryption.
To encrypt and decrypt data, the example uses the well-known Python
"cryptography" package. This package is not part of the Python
standard library and must be installed separately, for example, with
the "pip" command.
pip install cryptography
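The functions in the sections that follow refer to several names that
are assumed to be defined at module level. The following is a minimal
sketch of those imports and the "NUM_BYTES_FOR_LEN" constant; the
value 4 is an assumption, and any size large enough to hold the length
of the encrypted data key works.
import base64
import logging

import boto3
from botocore.exceptions import ClientError
from cryptography.fernet import Fernet

# Number of bytes used to store the length of the encrypted data key
# in the header of the .encrypted file (assumed value)
NUM_BYTES_FOR_LEN = 4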
Retrieve an existing master key
===============================
Master keys are created, managed, and stored within AWS KMS. A KMS
master key is also referred to as a customer master key or CMK. An AWS
storage cost is incurred for each CMK, therefore, one CMK is often
used to manage multiple data keys.
The example "retrieve_cmk" function searches for an existing CMK. A
key description is specified when a CMK is created, and this
description is used to identify and retrieve the desired key. If many
CMKs exist, they are processed in batches until either the desired key
is found or all keys are examined.
If the example function finds the desired CMK, it returns both the
CMK's ID and its ARN (Amazon Resource Name). Either of these
identifiers can be used to reference the CMK in subsequent calls to
AWS KMS methods.
def retrieve_cmk(desc):
    """Retrieve an existing KMS CMK based on its description

    :param desc: Description of CMK specified when the CMK was created
    :return Tuple(KeyId, KeyArn) where:
        KeyId: CMK ID
        KeyArn: Amazon Resource Name of CMK
    :return Tuple(None, None) if a CMK with the specified description was
        not found
    """

    # Retrieve a list of existing CMKs
    # If more than 100 keys exist, retrieve and process them in batches
    kms_client = boto3.client('kms')
    try:
        response = kms_client.list_keys()
    except ClientError as e:
        logging.error(e)
        return None, None

    done = False
    while not done:
        for cmk in response['Keys']:
            # Get info about the key, including its description
            try:
                key_info = kms_client.describe_key(KeyId=cmk['KeyArn'])
            except ClientError as e:
                logging.error(e)
                return None, None

            # Is this the key we're looking for?
            if key_info['KeyMetadata']['Description'] == desc:
                return cmk['KeyId'], cmk['KeyArn']

        # Are there more keys to retrieve?
        if not response['Truncated']:
            # No, the CMK was not found
            logging.debug('A CMK with the specified description was not found')
            done = True
        else:
            # Yes, retrieve another batch
            try:
                response = kms_client.list_keys(Marker=response['NextMarker'])
            except ClientError as e:
                logging.error(e)
                return None, None

    # All existing CMKs were checked and the desired key was not found
    return None, None
Create a customer master key
============================
If the example does not find an existing CMK, it creates a new one and
returns its ID and ARN.
def create_cmk(desc='Customer Master Key'):
    """Create a KMS Customer Master Key

    The created CMK is a Customer-managed key stored in AWS KMS.

    :param desc: key description
    :return Tuple(KeyId, KeyArn) where:
        KeyId: AWS globally-unique string ID
        KeyArn: Amazon Resource Name of the CMK
    :return Tuple(None, None) if error
    """

    # Create CMK
    kms_client = boto3.client('kms')
    try:
        response = kms_client.create_key(Description=desc)
    except ClientError as e:
        logging.error(e)
        return None, None

    # Return the key ID and ARN
    return response['KeyMetadata']['KeyId'], response['KeyMetadata']['Arn']
Create a data key
=================
To encrypt a file, the example "create_data_key" function creates a
data key. The data key is customer managed and does not incur an AWS
storage cost. The example creates a data key for each file it
encrypts, but it's possible to use a single data key to encrypt
multiple files.
The example function returns the data key in both its plaintext and
encrypted forms. The plaintext form is used to encrypt the data. The
encrypted form will be stored with the encrypted file. The data key is
associated with a CMK which is capable of decrypting the encrypted
data key when necessary.
def create_data_key(cmk_id, key_spec='AES_256'):
    """Generate a data key to use when encrypting and decrypting data

    :param cmk_id: KMS CMK ID or ARN under which to generate and encrypt the
        data key.
    :param key_spec: Length of the data encryption key. Supported values:
        'AES_128': Generate a 128-bit symmetric key
        'AES_256': Generate a 256-bit symmetric key
    :return Tuple(EncryptedDataKey, PlaintextDataKey) where:
        EncryptedDataKey: Encrypted CiphertextBlob data key as binary string
        PlaintextDataKey: Plaintext base64-encoded data key as binary string
    :return Tuple(None, None) if error
    """

    # Create data key
    kms_client = boto3.client('kms')
    try:
        response = kms_client.generate_data_key(KeyId=cmk_id, KeySpec=key_spec)
    except ClientError as e:
        logging.error(e)
        return None, None

    # Return the encrypted and plaintext data key
    return response['CiphertextBlob'], base64.b64encode(response['Plaintext'])
Encrypt a file
==============
The "encrypt_file" function creates a data key and uses it to encrypt
the contents of a disk file.
The encryption operation is performed by a "Fernet" object created by
the Python "cryptography" package.
The encrypted form of the data key is saved within the encrypted file
and will be used in the future to decrypt the file. The encrypted file
can be decrypted by any program with the credentials to decrypt the
encrypted data key.
def encrypt_file(filename, cmk_id):
    """Encrypt a file using an AWS KMS CMK

    A data key is generated and associated with the CMK.
    The encrypted data key is saved with the encrypted file. This enables the
    file to be decrypted at any time in the future and by any program that
    has the credentials to decrypt the data key.
    The encrypted file is saved to <filename>.encrypted
    Limitation: The contents of filename must fit in memory.

    :param filename: File to encrypt
    :param cmk_id: AWS KMS CMK ID or ARN
    :return: True if file was encrypted. Otherwise, False.
    """

    # Read the entire file into memory
    try:
        with open(filename, 'rb') as file:
            file_contents = file.read()
    except IOError as e:
        logging.error(e)
        return False

    # Generate a data key associated with the CMK
    # The data key is used to encrypt the file. Each file can use its own
    # data key or data keys can be shared among files.
    # Specify either the CMK ID or ARN
    data_key_encrypted, data_key_plaintext = create_data_key(cmk_id)
    if data_key_encrypted is None:
        return False
    logging.info('Created new AWS KMS data key')

    # Encrypt the file
    f = Fernet(data_key_plaintext)
    file_contents_encrypted = f.encrypt(file_contents)

    # Write the encrypted data key and encrypted file contents together
    try:
        with open(filename + '.encrypted', 'wb') as file_encrypted:
            file_encrypted.write(len(data_key_encrypted).to_bytes(NUM_BYTES_FOR_LEN,
                                                                  byteorder='big'))
            file_encrypted.write(data_key_encrypted)
            file_encrypted.write(file_contents_encrypted)
    except IOError as e:
        logging.error(e)
        return False

    # For the highest security, the data_key_plaintext value should be wiped
    # from memory. Unfortunately, this is not possible in Python. However,
    # storing the value in a local variable makes it available for garbage
    # collection.
    return True
Decrypt a data key
==================
To decrypt an encrypted file, the encrypted data key used to perform
the encryption must first be decrypted. This operation is performed by
the example "decrypt_data_key" function which returns the plaintext
form of the key.
def decrypt_data_key(data_key_encrypted):
    """Decrypt an encrypted data key

    :param data_key_encrypted: Encrypted ciphertext data key.
    :return Plaintext base64-encoded binary data key as binary string
    :return None if error
    """

    # Decrypt the data key
    kms_client = boto3.client('kms')
    try:
        response = kms_client.decrypt(CiphertextBlob=data_key_encrypted)
    except ClientError as e:
        logging.error(e)
        return None

    # Return plaintext base64-encoded binary data key
    return base64.b64encode(response['Plaintext'])
Decrypt a file
==============
The example "decrypt_file" function first extracts the encrypted data
key from the encrypted file. It then decrypts the key to get its
plaintext form and uses that to decrypt the file contents.
The decryption operation is performed by a "Fernet" object created by
the Python "cryptography" package.
def decrypt_file(filename):
    """Decrypt a file encrypted by encrypt_file()

    The encrypted file is read from <filename>.encrypted
    The decrypted file is written to <filename>.decrypted

    :param filename: File to decrypt
    :return: True if file was decrypted. Otherwise, False.
    """

    # Read the encrypted file into memory
    try:
        with open(filename + '.encrypted', 'rb') as file:
            file_contents = file.read()
    except IOError as e:
        logging.error(e)
        return False

    # The first NUM_BYTES_FOR_LEN bytes contain the integer length of the
    # encrypted data key.
    # Add NUM_BYTES_FOR_LEN to get index of end of encrypted data key/start
    # of encrypted data.
    data_key_encrypted_len = int.from_bytes(file_contents[:NUM_BYTES_FOR_LEN],
                                            byteorder='big') \
                             + NUM_BYTES_FOR_LEN
    data_key_encrypted = file_contents[NUM_BYTES_FOR_LEN:data_key_encrypted_len]

    # Decrypt the data key before using it
    data_key_plaintext = decrypt_data_key(data_key_encrypted)
    if data_key_plaintext is None:
        return False

    # Decrypt the rest of the file
    f = Fernet(data_key_plaintext)
    file_contents_decrypted = f.decrypt(file_contents[data_key_encrypted_len:])

    # Write the decrypted file contents
    try:
        with open(filename + '.decrypted', 'wb') as file_decrypted:
            file_decrypted.write(file_contents_decrypted)
    except IOError as e:
        logging.error(e)
        return False

    # The same security issue described at the end of encrypt_file() exists
    # here, too, i.e., the wish to wipe the data_key_plaintext value from
    # memory.
    return True
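Putting the pieces together, a typical (hypothetical) flow retrieves
or creates a CMK, encrypts a file, and later decrypts it. The key
description and file name below are placeholders.
def main():
    logging.basicConfig(level=logging.INFO)

    # Reuse an existing CMK if one with this description exists,
    # otherwise create a new one
    cmk_id, cmk_arn = retrieve_cmk('My sample CMK')
    if cmk_id is None:
        cmk_id, cmk_arn = create_cmk('My sample CMK')
        if cmk_id is None:
            return

    # Encrypt a file, producing example.txt.encrypted
    if encrypt_file('example.txt', cmk_arn):
        # Decrypt it again, producing example.txt.decrypted
        decrypt_file('example.txt')


if __name__ == '__main__':
    main()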
Security
********
Cloud security at Amazon Web Services (AWS) is the highest priority.
As an AWS customer, you benefit from a data center and network
architecture that is built to meet the requirements of the most
security-sensitive organizations. Security is a shared responsibility
between AWS and you. The Shared Responsibility Model describes this as
Security of the Cloud and Security in the Cloud.
**Security of the Cloud** – AWS is responsible for protecting the
infrastructure that runs all of the services offered in the AWS Cloud
and providing you with services that you can use securely. Our
security responsibility is the highest priority at AWS, and the
effectiveness of our security is regularly tested and verified by
third-party auditors as part of the AWS Compliance Programs.
**Security in the Cloud** – Your responsibility is determined by the
AWS service you are using, and other factors including the sensitivity
of your data, your organization’s requirements, and applicable laws
and regulations.
Boto3 follows the shared responsibility model through the specific AWS
services it supports. For AWS service security information, see the
AWS service security documentation page and AWS services that are in
scope of AWS compliance efforts by compliance program.
Data protection
===============
The AWS shared responsibility model applies to data protection in AWS
SDK for Python (Boto3). As described in this model, AWS is responsible
for protecting the global infrastructure that runs all of the AWS
Cloud. You are responsible for maintaining control over your content
that is hosted on this infrastructure. This content includes the
security configuration and management tasks for the AWS services that
you use. For more information about data privacy, see the Data Privacy
FAQ.
For data protection purposes, we recommend that you protect AWS
account credentials and set up individual user accounts with AWS
Identity and Access Management (IAM), so that each user is given only
the permissions necessary to fulfill their job duties. We also
recommend that you secure your data in the following ways:
* Use multi-factor authentication (MFA) with each account.
* Use SSL/TLS to communicate with AWS resources. To require a minimum
  TLS version of 1.2, see Enforcing TLS 1.2.
* Set up API and user activity logging with AWS CloudTrail.
* Use AWS encryption solutions, along with all default security
controls within AWS services.
* Use advanced managed security services such as Amazon Macie, which
assists in discovering and securing personal data that is stored in
Amazon S3.
We strongly recommend that you never put sensitive identifying
information, such as your customers' account numbers, into free-form
fields such as a **Name** field. This includes when you work with
Boto3 or other AWS services using the console, API, AWS CLI, or AWS
SDKs. Any data that you enter into Boto3 or other services might get
picked up for inclusion in diagnostic logs. When you provide a URL to
an external server, don't include credentials information in the URL
to validate your request to that server.
Identity and access management
==============================
AWS Identity and Access Management (IAM) is an AWS service that helps
an administrator securely control access to AWS resources. IAM
administrators control who can be *authenticated* (signed in) and
*authorized* (have permissions) to use AWS resources. IAM is an AWS
service that you can use at no additional charge. For details about
working with IAM, see AWS Identity and Access Management. We also
strongly recommend reviewing the Security best practices in IAM.
To use Boto3 to access AWS, you need an AWS account and AWS
credentials. For more information on credentials see AWS security
credentials in the AWS General Reference.
Compliance validation
=====================
The security and compliance of AWS services is assessed by third-party
auditors as part of multiple AWS compliance programs. These include
SOC, PCI, FedRAMP, HIPAA, and others. AWS provides a frequently
updated list of AWS services in scope of specific compliance programs
at AWS Services in Scope by Compliance Program.
Third-party audit reports are available for you to download using AWS
Artifact. For more information, see Downloading Reports in AWS
Artifact.
For more information about AWS compliance programs, see AWS Compliance
Programs.
Your compliance responsibility when using Boto3 to access an AWS
service is determined by the sensitivity of your data, your
organization’s compliance objectives, and applicable laws and
regulations. If your use of an AWS service is subject to compliance
with standards such as HIPAA, PCI, or FedRAMP, AWS provides resources
to help:
* Security and Compliance Quick Start Guides – Deployment guides that
discuss architectural considerations and provide steps for deploying
security-focused and compliance-focused baseline environments on
AWS.
* Architecting for HIPAA Security and Compliance Whitepaper – A
whitepaper that describes how companies can use AWS to create HIPAA-
compliant applications.
* AWS Compliance Resources – A collection of workbooks and guides that
might apply to your industry and location.
* AWS Config – A service that assesses how well your resource
configurations comply with internal practices, industry guidelines,
and regulations.
* AWS Security Hub – A comprehensive view of your security state
within AWS that helps you check your compliance with security
industry standards and best practices.
Resilience
==========
The AWS global infrastructure is built around AWS Regions and
Availability Zones.
AWS Regions provide multiple physically separated and isolated
Availability Zones, which are connected with low-latency, high-
throughput, and highly redundant networking.
With Availability Zones, you can design and operate applications and
databases that automatically fail over between Availability Zones
without interruption. Availability Zones are more highly available,
fault tolerant, and scalable than traditional single or multiple data
center infrastructures.
For more information about AWS Regions and Availability Zones, see AWS
Global Infrastructure.
Infrastructure security
=======================
For information about AWS security processes, see the AWS: Overview of
Security Processes whitepaper.
Enforcing TLS 1.2
=================
To ensure the AWS SDK for Python uses no TLS version earlier than TLS
1.2, you might need to recompile OpenSSL to enforce this minimum and
then recompile Python to use the recompiled OpenSSL.
Determining supported protocols
-------------------------------
First, create a self-signed certificate to use for the test server and
the SDK using OpenSSL:
openssl req -subj '/CN=localhost' -x509 -newkey rsa:4096 -nodes -keyout key.pem -out cert.pem -days 365
Then spin up a test server using OpenSSL:
openssl s_server -key key.pem -cert cert.pem -www
In a new terminal window, create a virtual environment and install the
SDK:
python3 -m venv test-env
source test-env/bin/activate
pip install botocore
Create a new Python script called "check.py" that will use the SDK’s
underlying HTTP library:
import urllib3
URL = 'https://localhost:4433/'
http = urllib3.PoolManager(
ca_certs='cert.pem',
cert_reqs='CERT_REQUIRED',
)
r = http.request('GET', URL)
print(r.data.decode('utf-8'))
Run the script:
python check.py
This will give details about the connection made. Search for "Protocol
:" in the output. If the output is "TLSv1.2" or later, the SDK will
default to TLS v1.2 and later. If it's earlier, you need to recompile
OpenSSL and then recompile Python.
However, even if your installation of Python defaults to TLS v1.2 or
later, it's still possible for Python to renegotiate to a version
earlier than TLS v1.2 if the server doesn't support TLS v1.2+. To
check that Python will not automatically renegotiate to these earlier
versions, restart the test server with the following:
openssl s_server -key key.pem -cert cert.pem -no_tls1_3 -no_tls1_2 -www
Note:
If you are using an older version of OpenSSL, you might not have the
"-no_tls_3" flag available. In this case, just remove the flag
because the version of OpenSSL you are using doesn't support TLS
v1.3.
Rerun the Python script:
python check.py
If your installation of Python correctly does not renegotiate for
versions earlier than TLS 1.2, you should receive an SSL error:
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='localhost', port=4433): Max retries exceeded with url: / (Caused by SSLError(SSLError(1, '[SSL: UNSUPPORTED_PROTOCOL] unsupported protocol (_ssl.c:1108)')))
If you are able to make a connection, you need to recompile OpenSSL
and Python to disable negotiation of protocols earlier than TLS v1.2.
Compile OpenSSL and Python
--------------------------
To ensure the SDK or CLI does not negotiate for anything earlier than
TLS 1.2, you need to recompile OpenSSL and Python. First copy the
following content to create a script and run it:
#!/usr/bin/env bash
set -e
OPENSSL_VERSION="1.1.1m"
OPENSSL_PREFIX="/opt/openssl-with-min-tls1_2"
PYTHON_VERSION="3.9.10"
PYTHON_PREFIX="/opt/python-with-min-tls1_2"
curl -O "https://www.openssl.org/source/openssl-$OPENSSL_VERSION.tar.gz"
tar -xzf "openssl-$OPENSSL_VERSION.tar.gz"
cd openssl-$OPENSSL_VERSION
./config --prefix=$OPENSSL_PREFIX no-ssl3 no-tls1 no-tls1_1 no-shared
make > /dev/null
sudo make install_sw > /dev/null
cd /tmp
curl -O "https://www.python.org/ftp/python/$PYTHON_VERSION/Python-$PYTHON_VERSION.tgz"
tar -xzf "Python-$PYTHON_VERSION.tgz"
cd Python-$PYTHON_VERSION
./configure --prefix=$PYTHON_PREFIX --with-openssl=$OPENSSL_PREFIX --disable-shared > /dev/null
make > /dev/null
sudo make install > /dev/null
This will compile a version of Python that has a statically linked
OpenSSL that will not automatically negotiate anything earlier than
TLS 1.2. This will also install OpenSSL in the directory
"/opt/openssl-with-min-tls1_2" and install Python in the directory
"/opt/python-with-min-tls1_2".
After you run this script, you should be able to use this newly
installed version of Python:
/opt/python-with-min-tls1_2/bin/python3 --version
This should print out:
Python 3.9.10
To confirm this new version of Python does not negotiate a version
earlier than TLS 1.2, rerun the steps from Determining Supported
Protocols using the newly installed Python version (that is,
"/opt/python-with-min-tls1_2/bin/python3").
Enforcing TLS 1.3
=================
Note:
Some AWS services do not yet support TLS 1.3; configuring this as
your minimum version may affect SDK interoperability. We recommend
testing this change with each service prior to production
deployment.
The process of ensuring the AWS SDK for Python uses no TLS version
earlier than TLS 1.3 is the same as the instructions in the Enforcing
TLS 1.2 section with some minor modifications, primarily adding the
"no-tls1_2" flag to the openssl build configuration.
The following are the modified build instructions:
#!/usr/bin/env bash
set -e
OPENSSL_VERSION="1.1.1m"
OPENSSL_PREFIX="/opt/openssl-with-min-tls1_3"
PYTHON_VERSION="3.9.10"
PYTHON_PREFIX="/opt/python-with-min-tls1_3"
curl -O "https://www.openssl.org/source/openssl-$OPENSSL_VERSION.tar.gz"
tar -xzf "openssl-$OPENSSL_VERSION.tar.gz"
cd openssl-$OPENSSL_VERSION
./config --prefix=$OPENSSL_PREFIX no-ssl3 no-tls1 no-tls1_1 no-tls1_2 no-shared
make > /dev/null
sudo make install_sw > /dev/null
cd /tmp
curl -O "https://www.python.org/ftp/python/$PYTHON_VERSION/Python-$PYTHON_VERSION.tgz"
tar -xzf "Python-$PYTHON_VERSION.tgz"
cd Python-$PYTHON_VERSION
./configure --prefix=$PYTHON_PREFIX --with-openssl=$OPENSSL_PREFIX --disable-shared > /dev/null
make > /dev/null
sudo make install > /dev/null
Amazon S3 buckets
*****************
An Amazon S3 bucket is a storage location to hold files. S3 files are
referred to as objects.
This section describes how to use the AWS SDK for Python to perform
common operations on S3 buckets.
Create an Amazon S3 bucket
==========================
The name of an Amazon S3 bucket must be unique across all regions of
the AWS platform. The bucket can be located in a specific region to
minimize latency or to address regulatory requirements.
import logging
import boto3
from botocore.exceptions import ClientError
def create_bucket(bucket_name, region=None):
"""Create an S3 bucket in a specified region
If a region is not specified, the bucket is created in the S3 default
region (us-east-1).
:param bucket_name: Bucket to create
:param region: String region to create bucket in, e.g., 'us-west-2'
:return: True if bucket created, else False
"""
# Create bucket
try:
if region is None:
s3_client = boto3.client('s3')
s3_client.create_bucket(Bucket=bucket_name)
else:
s3_client = boto3.client('s3', region_name=region)
location = {'LocationConstraint': region}
s3_client.create_bucket(Bucket=bucket_name,
CreateBucketConfiguration=location)
except ClientError as e:
logging.error(e)
return False
return True
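For example, the helper above might be called as follows (the bucket name and
region here are placeholders):
# Hypothetical usage of the create_bucket helper defined above
if create_bucket('amzn-s3-demo-bucket', region='us-west-2'):
    print('Bucket created')
else:
    print('Bucket creation failed')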
List existing buckets
=====================
List all the existing buckets for the AWS account.
# Retrieve the list of existing buckets
s3 = boto3.client('s3')
response = s3.list_buckets()
# Output the bucket names
print('Existing buckets:')
for bucket in response['Buckets']:
print(f' {bucket["Name"]}')
Collections
***********
Overview
========
A collection provides an iterable interface to a group of resources.
Collections behave similarly to Django QuerySets and expose a similar
API. A collection seamlessly handles pagination for you, making it
possible to easily iterate over all items from all pages of data.
Example of a collection:
# SQS list all queues
sqs = boto3.resource('sqs')
for queue in sqs.queues.all():
print(queue.url)
When collections make requests
------------------------------
Collections can be created and manipulated without any request being
made to the underlying service. A collection makes a remote service
request under the following conditions:
* **Iteration**:
for bucket in s3.buckets.all():
print(bucket.name)
* **Conversion to list()**:
buckets = list(s3.buckets.all())
* **Batch actions (see below)**:
s3.Bucket('amzn-s3-demo-bucket').objects.delete()
Filtering
=========
Some collections support extra arguments to filter the returned data
set, which are passed into the underlying service operation. Use the
"filter()" method to filter the results:
# S3 list all keys with the prefix 'photos/'
s3 = boto3.resource('s3')
for bucket in s3.buckets.all():
for obj in bucket.objects.filter(Prefix='photos/'):
print('{0}:{1}'.format(bucket.name, obj.key))
Warning:
Behind the scenes, the above example will call "ListBuckets",
"ListObjects", and "HeadObject" many times. If you have a large
number of S3 objects then this could incur a significant cost.
Chainability
============
Collection methods are chainable. They return copies of the collection
rather than modifying the collection, including a deep copy of any
associated operation parameters. For example, this allows you to build
up multiple collections from a base which they all have in common:
# EC2 find instances
ec2 = boto3.resource('ec2')
base = ec2.instances.filter(InstanceIds=['id1', 'id2', 'id3'])
filters = [{
'Name': 'tenancy',
'Values': ['dedicated']
}]
filtered1 = base.filter(Filters=filters)
# Note, this does NOT modify the filters in ``filtered1``!
filters.append({'Name': 'instance-type', 'Values': ['t1.micro']})
filtered2 = base.filter(Filters=filters)
print('All instances:')
for instance in base:
print(instance.id)
print('Dedicated instances:')
for instance in filtered1:
print(instance.id)
print('Dedicated micro instances:')
for instance in filtered2:
print(instance.id)
Limiting results
================
It is possible to limit the number of items returned from a collection
by using the "limit()" method:
# S3 iterate over first ten buckets
for bucket in s3.buckets.limit(10):
print(bucket.name)
Up to 10 items total will be returned. If you do not
have 10 buckets, then all of your buckets will be returned.
Controlling page size
=====================
Collections automatically handle paging through results, but you may
want to control the number of items returned from a single service
operation call. You can do so using the "page_size()" method:
# S3 iterate over all objects 100 at a time
for obj in bucket.objects.page_size(100):
print(obj.key)
By default, S3 will return 1000 objects at a time, so the above code
would let you process the items in smaller batches, which could be
beneficial for slow or unreliable internet connections.
Batch actions
=============
Some collections support batch actions, which are actions that operate
on an entire page of results at a time. They will automatically handle
pagination:
# S3 delete everything in `amzn-s3-demo-bucket`
s3 = boto3.resource('s3')
s3.Bucket('amzn-s3-demo-bucket').objects.delete()
Danger:
The above example will **completely erase all data** in the
"amzn-s3-demo-bucket" bucket! Please be careful with batch actions.
Working with IAM server certificates
************************************
This Python example shows you how to carry out basic tasks in managing
server certificates for HTTPS connections.
The scenario
============
To enable HTTPS connections to your website or application on AWS, you
need an SSL/TLS server certificate. To use a certificate that you
obtained from an external provider with your website or application on
AWS, you must upload the certificate to IAM or import it into AWS
Certificate Manager.
In this example, Python code is used to handle server certificates in
IAM. The code uses the Amazon Web Services (AWS) SDK for Python to
manage server certificates using these methods of the IAM client
class:
* get_paginator('list_server_certificates').
* get_server_certificate.
* update_server_certificate.
* delete_server_certificate.
All the example code for the Amazon Web Services (AWS) SDK for Python
is available here on GitHub.
For more information about server certificates, see Working with
Server Certificates in the *IAM User Guide*.
Prerequisite tasks
==================
To set up and run this example, you must first configure your AWS
credentials, as described in Quickstart.
List your server certificates
=============================
List the server certificates stored in IAM. If none exist, the action
returns an empty list.
The example below shows how to:
* List server certificates using
get_paginator('list_server_certificates').
For more information about paginators, see Paginators.
Example
-------
import boto3
# Create IAM client
iam = boto3.client('iam')
# List server certificates through the pagination interface
paginator = iam.get_paginator('list_server_certificates')
for response in paginator.paginate():
print(response['ServerCertificateMetadataList'])
Get a server certificate
========================
Get information about the specified server certificate stored in IAM.
The example below shows how to:
* Get a server certificate using get_server_certificate.
Example
-------
import boto3
# Create IAM client
iam = boto3.client('iam')
# Get the server certificate
response = iam.get_server_certificate(ServerCertificateName='CERTIFICATE_NAME')
print(response['ServerCertificate'])
Update a server certificate
===========================
Update the name and/or the path of the specified server certificate
stored in IAM.
The example below shows how to:
* Update a server certificate using update_server_certificate.
Example
-------
import boto3
# Create IAM client
iam = boto3.client('iam')
# Update the name of the server certificate
iam.update_server_certificate(
ServerCertificateName='CERTIFICATE_NAME',
NewServerCertificateName='NEW_CERTIFICATE_NAME'
)
Delete a server certificate
===========================
Delete the specified server certificate.
The example below shows how to:
* Delete a server certificate using delete_server_certificate.
Example
-------
import boto3
# Create IAM client
iam = boto3.client('iam')
# Delete the server certificate
iam.delete_server_certificate(
ServerCertificateName='CERTIFICATE_NAME'
)
Sending and receiving messages in Amazon SQS
********************************************
This Python example shows you how to send, receive, and delete
messages in a queue.
The scenario
============
In this example, Python code is used to send and receive messages. The
code uses the AWS SDK for Python to send and receive messages by using
these methods of the SQS client class:
* send_message.
* receive_message.
* delete_message.
For more information about Amazon SQS messages, see Sending a Message
to an Amazon SQS Queue and Receiving and Deleting a Message from an
Amazon SQS Queue in the *Amazon Simple Queue Service Developer Guide*.
Prerequisite tasks
==================
To set up and run this example, you must first complete these tasks:
* Create an Amazon SQS queue. For an example of creating an Amazon SQS
queue, see Create a queue.
Send a message to a queue
=========================
The example below shows how to:
* Send a message to a queue using send_message.
Example
-------
import boto3
# Create SQS client
sqs = boto3.client('sqs')
queue_url = 'SQS_QUEUE_URL'
# Send message to SQS queue
response = sqs.send_message(
QueueUrl=queue_url,
DelaySeconds=10,
MessageAttributes={
'Title': {
'DataType': 'String',
'StringValue': 'The Whistler'
},
'Author': {
'DataType': 'String',
'StringValue': 'John Grisham'
},
'WeeksOn': {
'DataType': 'Number',
'StringValue': '6'
}
},
MessageBody=(
'Information about current NY Times fiction bestseller for '
'week of 12/11/2016.'
)
)
print(response['MessageId'])
Receive and delete messages from a queue
========================================
The example below shows how to:
* Receive a message from a queue using receive_message.
* Delete a message from a queue using delete_message.
Example
-------
import boto3
# Create SQS client
sqs = boto3.client('sqs')
queue_url = 'SQS_QUEUE_URL'
# Receive message from SQS queue
response = sqs.receive_message(
QueueUrl=queue_url,
AttributeNames=[
'SentTimestamp'
],
MaxNumberOfMessages=1,
MessageAttributeNames=[
'All'
],
VisibilityTimeout=0,
WaitTimeSeconds=0
)
message = response['Messages'][0]
receipt_handle = message['ReceiptHandle']
# Delete received message from queue
sqs.delete_message(
QueueUrl=queue_url,
ReceiptHandle=receipt_handle
)
print('Received and deleted message: %s' % message)
Migrating from Boto 2.x
***********************
Current Boto users can begin using Boto3 right away. The two modules
can live side-by-side in the same project, which means that a
piecemeal approach can be used. New features can be written in Boto3,
or existing code can be migrated over as needed, piece by piece.
High-level concepts
===================
Boto 2.x modules are typically split into two categories, those which
include a high-level object-oriented interface and those which include
only a low-level interface which matches the underlying Amazon Web
Services API. Some modules are completely high-level (like Amazon S3
or EC2), some include high-level code on top of a low-level connection
(like Amazon DynamoDB), and others are 100% low-level (like Amazon
Elastic Transcoder).
In Boto3 this general low-level and high-level concept hasn't changed
much, but there are two important points to understand.
Data driven
-----------
First, in Boto3 classes are created at runtime from JSON data files
that describe AWS APIs and organizational structures built atop of
them. These data files are loaded at runtime and can be modified and
updated without the need of installing an entirely new SDK release.
A side effect of having all the services generated from JSON files is
that there is now consistency between all AWS service modules. One
important change is that *all* API call parameters must now be passed
as **keyword arguments**, and these keyword arguments take the form
defined by the upstream service. Though there are exceptions, this
typically means "UpperCamelCasing" parameter names. You will see this
in the service-specific migration guides linked to below.
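As a brief illustration (a minimal sketch, not taken from a specific
migration guide), a Boto3 call passes every parameter as a keyword argument
using the casing defined by the service API:
import boto3

ec2 = boto3.client('ec2')

# All parameters are keyword arguments and follow the EC2 API's casing,
# for example "MaxResults" rather than max_results.
response = ec2.describe_instances(MaxResults=5)
for reservation in response['Reservations']:
    for instance in reservation['Instances']:
        print(instance['InstanceId'])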
Resource objects
----------------
Second, while every service now uses the runtime-generated low-level
client, some services additionally have high-level generated objects
that we refer to as "Resources". The low-level clients are comparable
to Boto 2.x layer 1 connection objects in that they provide a
one-to-one mapping of API operations and return low-level responses.
The higher
level is comparable to the high-level customizations from Boto 2.x: an
S3 "Key", an EC2 "Instance", and a DynamoDB "Table" are all considered
resources in Boto3. Just like a Boto 2.x "S3Connection"'s
"list_buckets" will return "Bucket" objects, the Boto3 resource
interface provides actions and collections that return resources. Some
services may also have hand-written customizations built on top of the
runtime-generated high-level resources (such as utilities for working
with S3 multipart uploads).
import boto, boto3
# Low-level connections
conn = boto.connect_elastictranscoder()
client = boto3.client('elastictranscoder')
# High-level connections & resource objects
from boto.s3.bucket import Bucket
s3_conn = boto.connect_s3()
boto2_bucket = Bucket('amzn-s3-demo-bucket')
s3 = boto3.resource('s3')
boto3_bucket = s3.Bucket('amzn-s3-demo-bucket')
Installation and configuration
==============================
The Quickstart guide provides instructions for installing Boto3. You
can also follow the instructions there to set up new credential files,
or you can continue to use your existing Boto 2.x credentials. Please
note that Boto3, the AWS CLI, and several other SDKs all use the
shared credentials file (usually at "~/.aws/credentials").
Once configured, you may begin using Boto3:
import boto3
for bucket in boto3.resource('s3').buckets.all():
print(bucket.name)
See the Code Examples and Boto3 Documentation for more information.
The rest of this document will describe specific common usage
scenarios of Boto 2 code and how to accomplish the same tasks with
Boto3.
Services
========
* Amazon S3
* Creating the connection
* Creating a bucket
* Storing data
* Accessing a bucket
* Deleting a bucket
* Iteration of buckets and keys
* Access controls
* Key metadata
* Managing CORS configurations
* Amazon EC2
* Creating the connection
* Launching new instances
* Stopping and terminating instances
* Checking what instances are running
* Checking health status of instances
* Working with EBS snapshots
* Creating a VPC, subnet, and gateway
* Attaching and detaching an elastic IP and gateway
AWS PrivateLink for Amazon S3
*****************************
This section demonstrates how to configure an S3 client to use an
interface VPC endpoint.
Configuring the client endpoint URL
===================================
When configuring an S3 client to use an interface VPC endpoint, it's
important to note that only the resource type specified in the
endpoint can be addressed using that client. The exception is Amazon
S3 buckets, which can be addressed on the bucket endpoint by using an
access point alias as the bucket name.
The following example configures an S3 client to access S3 buckets via
an interface VPC endpoint. This client cannot be used to address S3
access points unless you use an access point alias.
import boto3
s3_client = boto3.client(
service_name='s3',
endpoint_url='https://bucket.vpce-abc123-abcdefgh.s3.us-east-1.vpce.amazonaws.com'
)
The following example configures an S3 client to access S3 access
points via an interface VPC endpoint. This client cannot be used to
address S3 buckets.
import boto3
s3_client = boto3.client(
service_name='s3',
endpoint_url='https://accesspoint.vpce-abc123-abcdefgh.s3.us-east-1.vpce.amazonaws.com'
)
The following example configures an S3 Control client to use an
interface VPC endpoint.
import boto3
control_client = boto3.client(
service_name='s3control',
endpoint_url='https://control.vpce-abc123-abcdefgh.s3.us-east-1.vpce.amazonaws.com'
)
The following example accesses an object in an Amazon S3 bucket using
an access point alias.
import boto3
s3_client = boto3.client(
service_name='s3',
endpoint_url='https://bucket.vpce-abc123-abcdefgh.s3.us-east-1.vpce.amazonaws.com'
)
s3_client.get_object(Bucket='some-bucket-alias-s3alias', Key='file.txt')
Low-level clients
*****************
Clients provide a low-level interface to AWS whose methods map close
to 1:1 with service APIs. All service operations are supported by
clients. Clients are generated from a JSON service definition file.
Creating clients
================
Clients are created in a similar fashion to resources:
import boto3
# Create a low-level client with the service name
sqs = boto3.client('sqs')
It is also possible to access the low-level client from an existing
resource:
# Create the resource
sqs_resource = boto3.resource('sqs')
# Get the client from the resource
sqs = sqs_resource.meta.client
Service operations
==================
Service operations map to client methods of the same name and provide
access to the same operation parameters via keyword arguments:
# Make a call using the low-level client
response = sqs.send_message(QueueUrl='...', MessageBody='...')
As can be seen above, the method arguments map directly to the
associated SQS API.
Note:
The method names have been snake-cased for better looking Python
code. Parameters **must** be sent as keyword arguments. They will not
work as positional arguments.
Handling responses
==================
Responses are returned as python dictionaries. It is up to you to
traverse or otherwise process the response for the data you need,
keeping in mind that responses may not always include all of the
expected data. In the example below, "response.get('QueueUrls', [])"
is used to ensure that a list is always returned, even when the
response has no key "'QueueUrls'":
# List all your queues
response = sqs.list_queues()
for url in response.get('QueueUrls', []):
print(url)
The "response" in the example above looks something like this:
{
"QueueUrls": [
"http://url1",
"http://url2",
"http://url3"
]
}
Waiters
=======
Waiters use a client's service operations to poll the status of an AWS
resource and suspend execution until the AWS resource reaches the
state that the waiter is polling for or a failure occurs while
polling. Using clients, you can learn the name of each waiter that a
client has access to:
import boto3
s3 = boto3.client('s3')
sqs = boto3.client('sqs')
# List all of the possible waiters for both clients
print("s3 waiters:")
s3.waiter_names
print("sqs waiters:")
sqs.waiter_names
Note if a client does not have any waiters, it will return an empty
list when accessing its "waiter_names" attribute:
s3 waiters:
[u'bucket_exists', u'bucket_not_exists', u'object_exists', u'object_not_exists']
sqs waiters:
[]
Using a client's "get_waiter()" method, you can obtain a specific
waiter from its list of possible waiters:
# Retrieve waiter instance that will wait till a specified
# S3 bucket exists
s3_bucket_exists_waiter = s3.get_waiter('bucket_exists')
Then to actually start waiting, you must call the waiter's "wait()"
method with the method's appropriate parameters passed in:
# Begin waiting for the S3 bucket, amzn-s3-demo-bucket, to exist
s3_bucket_exists_waiter.wait(Bucket='amzn-s3-demo-bucket')
Multithreading or multiprocessing with clients
==============================================
Unlike Resources and Sessions, clients **are** generally *thread-
safe*. There are some caveats, defined below, to be aware of though.
Caveats
-------
**Multi-Processing:** While clients are *thread-safe*, they cannot be
shared across processes due to their networking implementation. Doing
so may lead to incorrect response ordering when calling services.
**Shared Metadata:** Clients expose metadata to the end user through a
few attributes (namely "meta", "exceptions" and "waiter_names"). These
are safe to read but any mutations should not be considered thread-
safe.
**Custom Botocore Events:** Botocore (the library Boto3 is built
on) allows advanced users to provide their own custom event hooks
which may interact with boto3’s client. The majority of users will not
need to use these interfaces, but those that do should no longer
consider their clients thread-safe without careful review.
Note:
"boto3.client('')" is an alias for creating a client
with a shared default session. Invoking "boto3.client()" inside of a
concurrent context may result in response ordering issues or
interpreter failures from underlying SSL modules.
General Example
---------------
import boto3.session
from concurrent.futures import ThreadPoolExecutor
def do_s3_task(client, task_definition):
# Put your thread-safe code here
def my_workflow():
# Create a session and use it to make our client
session = boto3.session.Session()
s3_client = session.client('s3')
# Define some work to be done, this can be anything
my_tasks = [ ... ]
# Dispatch work tasks with our s3_client
with ThreadPoolExecutor(max_workers=8) as executor:
futures = [executor.submit(do_s3_task, s3_client, task) for task in my_tasks]
Error handling
**************
Overview
========
Boto3 provides many features to assist in navigating the errors and
exceptions that you might encounter when interacting with AWS
services.
Specifically, this guide provides details on the following:
* How to find what exceptions could be thrown by both Boto3 and AWS
services
* How to catch and handle exceptions thrown by both Boto3 and AWS
services
* How to parse error responses from AWS services
Why catch exceptions from AWS and Boto
--------------------------------------
* *Service limits and quotas* - Your call rate to an AWS service might
be too frequent, or you might have reached a specific AWS service
quota. In either case, without proper error handling you wouldn’t
know or wouldn’t handle them.
* *Parameter validation and checking* - API requirements can change,
especially across API versions. Catching these errors helps to
identify if there’s an issue with the parameters you provide to any
given API call.
* *Proper logging and messaging* - Catching errors and exceptions
means you can log them. This can be instrumental in troubleshooting
any code you write when interacting with AWS services.
Determining what exceptions to catch
====================================
Exceptions that you might encounter when using Boto3 will come from
one of two sources: botocore or the AWS services your client is
interacting with.
Botocore exceptions
-------------------
These exceptions are statically defined within the botocore package, a
dependency of Boto3. The exceptions are related to issues with client-
side behaviors, configurations, or validations. You can generate a
list of the statically defined botocore exceptions using the following
code:
import botocore.exceptions
for key, value in sorted(botocore.exceptions.__dict__.items()):
if isinstance(value, type):
print(key)
Tip:
Running the code above prints the static exception class names,
similar to the following:
AliasConflictParameterError
ApiVersionNotFoundError
BaseEndpointResolverError
BotoCoreError
ChecksumError
ClientError
ConfigNotFound
ConfigParseError
ConnectTimeoutError
ConnectionClosedError
ConnectionError
CredentialRetrievalError
DataNotFoundError
EndpointConnectionError
EventStreamError
HTTPClientError
ImminentRemovalWarning
IncompleteReadError
InfiniteLoopConfigError
InvalidConfigError
InvalidDNSNameError
InvalidExpressionError
InvalidMaxRetryAttemptsError
InvalidRetryConfigurationError
InvalidS3AddressingStyleError
InvalidS3UsEast1RegionalEndpointConfigError
InvalidSTSRegionalEndpointsConfigError
MD5UnavailableError
MetadataRetrievalError
MissingParametersError
MissingServiceIdError
NoCredentialsError
NoRegionError
OperationNotPageableError
PaginationError
ParamValidationError
PartialCredentialsError
ProfileNotFound
ProxyConnectionError
RangeError
ReadTimeoutError
RefreshWithMFAUnsupportedError
SSLError
ServiceNotInRegionError
StubAssertionError
StubResponseError
UnStubbedResponseError
UndefinedModelAttributeError
UnknownClientMethodError
UnknownCredentialError
UnknownEndpointError
UnknownKeyError
UnknownParameterError
UnknownServiceError
UnknownServiceStyle
UnknownSignatureVersionError
UnseekableStreamError
UnsupportedS3AccesspointConfigurationError
UnsupportedS3ArnError
UnsupportedSignatureVersionError
UnsupportedTLSVersionWarning
ValidationError
WaiterConfigError
WaiterError
Note:
You can view available descriptions of the botocore static
exceptions here.
AWS service exceptions
----------------------
AWS service exceptions are caught with the underlying botocore
exception, "ClientError". After you catch this exception, you can
parse through the response for specifics around that error, including
the service-specific exception. Exceptions and errors from AWS
services vary widely. You can quickly get a list of an AWS service’s
exceptions using Boto3.
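For example, one quick way to see them (a minimal sketch; the listing also
includes the generic "ClientError" class and the "from_code" helper) is to
inspect the client's "exceptions" attribute:
import boto3

client = boto3.client('kinesis')

# Modeled, service-specific exception classes are exposed as attributes
# of client.exceptions.
print([name for name in dir(client.exceptions) if not name.startswith('_')])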
For a complete list of error responses from the services you’re using,
consult the individual service’s AWS documentation, specifically the
error response section of the AWS service’s API reference. These
references also provide context around the exceptions and errors.
Catching exceptions when using a low-level client
=================================================
Catching botocore exceptions
----------------------------
Botocore exceptions are statically defined in the botocore package.
Any Boto3 clients you create will use these same statically defined
exception classes. The most common botocore exception you’ll encounter
is "ClientError". This is a general exception when an error response
is provided by an AWS service to your Boto3 client’s request.
Additional client-side issues with SSL negotiation, client
misconfiguration, or AWS service validation errors will also throw
botocore exceptions. Here’s a generic example of how you might catch
botocore exceptions.
import botocore
import boto3
client = boto3.client('aws_service_name')
try:
client.some_api_call(SomeParam='some_param')
except botocore.exceptions.ClientError as error:
# Put your error handling logic here
raise error
except botocore.exceptions.ParamValidationError as error:
raise ValueError('The parameters you provided are incorrect: {}'.format(error))
Parsing error responses and catching exceptions from AWS services
-----------------------------------------------------------------
Unlike botocore exceptions, AWS service exceptions aren't statically
defined in Boto3. This is due to errors and exceptions from AWS
services varying widely and being subject to change. To properly catch
an exception from an AWS service, you must parse the error response
from the service. The error response provided to your client from the
AWS service follows a common structure and is minimally processed and
not obfuscated by Boto3.
Using Boto3, the error response from an AWS service will look similar
to a success response, except that an "Error" nested dictionary will
appear alongside the "ResponseMetadata" nested dictionary. Here is an
example of what an error response might look like:
{
'Error': {
'Code': 'SomeServiceException',
'Message': 'Details/context around the exception or error'
},
'ResponseMetadata': {
'RequestId': '1234567890ABCDEF',
'HostId': 'host ID data will appear here as a hash',
'HTTPStatusCode': 400,
'HTTPHeaders': {'header metadata key/values will appear here'},
'RetryAttempts': 0
}
}
Boto3 classifies all AWS service errors and exceptions as
"ClientError" exceptions. When attempting to catch AWS service
exceptions, one way is to catch "ClientError" and then parse the error
response for the AWS service-specific exception.
Using Amazon Kinesis as an example service, you can use Boto3 to catch
the exception "LimitExceededException" and insert your own logging
message when your code experiences request throttling from the AWS
service.
import botocore
import boto3
import logging
# Set up our logger
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger()
client = boto3.client('kinesis')
try:
logger.info('Calling DescribeStream API on myDataStream')
client.describe_stream(StreamName='myDataStream')
except botocore.exceptions.ClientError as error:
if error.response['Error']['Code'] == 'LimitExceededException':
logger.warn('API call limit exceeded; backing off and retrying...')
else:
raise error
Note:
The Boto3 "standard" retry mode will catch throttling errors and
exceptions, and will back off and retry them for you.
Additionally, you can also access some of the dynamic service-side
exceptions from the client’s exception property. Using the previous
example, you would need to modify only the "except" clause.
except client.exceptions.LimitExceededException as error:
logger.warn('API call limit exceeded; backing off and retrying...')
Note:
Catching exceptions through "ClientError" and parsing for error
codes is still the best way to catch **all** service-side exceptions
and errors.
Catching exceptions when using a resource client
================================================
When using "Resource" classes to interact with certain AWS services,
catching exceptions and errors is a similar experience to using a low-
level client.
Parsing for error responses uses the same exact methodology outlined
in the low-level client section. Catching exceptions through the
client’s "exceptions" property is slightly different, as you’ll need
to access the client’s "meta" property to get to the exceptions.
client.meta.client.exceptions.SomeServiceException
Using Amazon S3 as an example resource service, you can use the
client’s exception property to catch the "BucketAlreadyExists"
exception. And you can still parse the error response to get the
bucket name that's passed in the original request.
import botocore
import boto3
client = boto3.resource('s3')
try:
client.create_bucket(Bucket='amzn-s3-demo-bucket')
except client.meta.client.exceptions.BucketAlreadyExists as err:
print("Bucket {} already exists!".format(err.response['Error']['BucketName']))
raise err
Discerning useful information from error responses
==================================================
As stated previously in this guide, for details and context around
specific AWS service exceptions, see the individual service’s AWS
documentation, specifically the error response section of the AWS
service’s API reference.
Botocore exceptions will have detailed error messaging when those
exceptions are thrown. These error messages provide details and
context around the specific exception thrown. Descriptions of these
exceptions can be viewed here.
Outside of specific error or exception details and messaging, you
might want to extract additional metadata from error responses:
* *Exception class* - You can use this data to build logic around, or
in response to, these errors and exceptions.
* *Error message* - This data can help you reason about the cause of
the error and provide more context around the issue. These messages are
subject to change and should not be relied upon in code.
* *Request ID and HTTP status code* - AWS service exceptions might
still be vague or lacking in details. If this occurs, contacting
customer support and providing the AWS service name, error, error
message, and request ID could allow a support engineer to further
look into your issue.
Using a low-level Amazon SQS client, here’s an example of catching a
generic or vague exception from the AWS service, and parsing out
useful metadata from the error response.
import botocore
import boto3
client = boto3.client('sqs')
queue_url = 'SQS_QUEUE_URL'
try:
client.send_message(QueueUrl=queue_url, MessageBody=('some_message'))
except botocore.exceptions.ClientError as err:
if err.response['Error']['Code'] == 'InternalError': # Generic error
# We grab the message, request ID, and HTTP code to give to customer support
print('Error Message: {}'.format(err.response['Error']['Message']))
print('Request ID: {}'.format(err.response['ResponseMetadata']['RequestId']))
print('Http code: {}'.format(err.response['ResponseMetadata']['HTTPStatusCode']))
else:
raise err
AWS Secrets Manager
*******************
This Python example shows you how to retrieve the decrypted secret
value from an AWS Secrets Manager secret. The secret could be created
using either the Secrets Manager console or the CLI/SDK.
The code uses the AWS SDK for Python to retrieve a decrypted secret
value.
For more information about using AWS Secrets Manager, see
Tutorial: Storing and Retrieving a Secret in the *AWS Secrets Manager
Developer Guide*.
All the example code for the Amazon Web Services (AWS) SDK for Python
is available here on GitHub.
Prerequisite tasks
==================
To set up and run this example, you must first set up the following:
* Configure your AWS credentials, as described in Quickstart.
* Create a secret with the AWS Secrets Manager, as described in the
AWS Secrets Manager Developer Guide
Retrieve the secret value
=========================
The following example shows how to:
* Retrieve a secret value using get_secret_value.
Example
-------
import boto3
from botocore.exceptions import ClientError
def get_secret():
secret_name = "MySecretName"
region_name = "us-west-2"
session = boto3.session.Session()
client = session.client(
service_name='secretsmanager',
region_name=region_name,
)
try:
get_secret_value_response = client.get_secret_value(
SecretId=secret_name
)
except ClientError as e:
if e.response['Error']['Code'] == 'ResourceNotFoundException':
print("The requested secret " + secret_name + " was not found")
elif e.response['Error']['Code'] == 'InvalidRequestException':
print("The request was invalid due to:", e)
elif e.response['Error']['Code'] == 'InvalidParameterException':
print("The request had invalid params:", e)
elif e.response['Error']['Code'] == 'DecryptionFailure':
print("The requested secret can't be decrypted using the provided KMS key:", e)
elif e.response['Error']['Code'] == 'InternalServiceError':
print("An error occurred on service side:", e)
else:
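# This "else" is the else clause of the try/except statement above, so
# the code below runs only if the call to get_secret_value succeeded.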
# Secrets Manager decrypts the secret value using the associated KMS CMK
# Depending on whether the secret was a string or binary, only one of these fields will be populated
if 'SecretString' in get_secret_value_response:
text_secret_data = get_secret_value_response['SecretString']
else:
binary_secret_data = get_secret_value_response['SecretBinary']
# Your code goes here.
Managing IAM access keys
************************
This Python example shows you how to manage the access keys of your
users.
The scenario
============
Users need their own access keys to make programmatic calls to AWS
from the Amazon Web Services (AWS) SDK for Python. To fill this need,
you can create, modify, view, or rotate access keys (access key IDs
and secret access keys) for IAM users. By default, when you create an
access key, its status is Active, which means the user can use the
access key for API calls.
In this example, Python code is used to manage access keys in IAM. The
code uses the AWS SDK for Python to manage IAM access keys using these
methods of the IAM client class:
* create_access_key.
* paginate(UserName='IAM_USER_NAME').
* get_access_key_last_used.
* update_access_key.
* delete_access_key.
For more information about IAM access keys, see Managing Access Keys
in the *IAM User Guide*.
All the example code for the Amazon Web Services (AWS) SDK for Python
is available here on GitHub.
Prerequisite tasks
==================
To set up and run this example, you must first configure your AWS
credentials, as described in Quickstart.
Create access keys for a user
=============================
Create a new AWS secret access key and corresponding AWS access key ID
for the specified user. The default status for new keys is "Active".
The example below shows how to:
* Create a new AWS access key using create_access_key.
Example
-------
import boto3
# Create IAM client
iam = boto3.client('iam')
# Create an access key
response = iam.create_access_key(
UserName='IAM_USER_NAME'
)
print(response['AccessKey'])
List a user's access keys
=========================
List information about the access key IDs associated with the
specified IAM user. If there are none, the action returns an empty
list.
If the UserName field is not specified, the UserName is determined
implicitly based on the AWS access key ID used to sign the request.
Because this action works for access keys under the AWS account, you
can use this action to manage root credentials even if the AWS account
has no associated users.
The example below shows how to:
* List a user's access keys using paginate(UserName='IAM_USER_NAME').
For more information about paginators, see Paginators.
Example
-------
import boto3
# Create IAM client
iam = boto3.client('iam')
# List access keys through the pagination interface.
paginator = iam.get_paginator('list_access_keys')
for response in paginator.paginate(UserName='IAM_USER_NAME'):
print(response)
Get the access key last used
============================
Get information about when the specified access key was last used. The
information includes the date and time of last use, along with the AWS
service and region that were specified in the last request made with
that key.
The example below shows how to:
* Get the access key last used using get_access_key_last_used.
Example
-------
import boto3
# Create IAM client
iam = boto3.client('iam')
# Get last use of access key
response = iam.get_access_key_last_used(
AccessKeyId='ACCESS_KEY_ID'
)
print(response['AccessKeyLastUsed'])
Update access key status
========================
Change the status of the specified access key from Active to Inactive,
or vice versa. This action can be used to disable a user's key as part
of a key rotation workflow.
The example below shows how to:
* Change the status of an access key to "Active" using
update_access_key.
Example
-------
import boto3
# Create IAM client
iam = boto3.client('iam')
# Update access key to be active
iam.update_access_key(
AccessKeyId='ACCESS_KEY_ID',
Status='Active',
UserName='IAM_USER_NAME'
)
Delete an access key
====================
Delete the access key pair associated with the specified IAM user.
If you do not specify a user name, IAM determines the user name
implicitly based on the AWS access key ID signing the request. Because
this action works for access keys under the AWS account, you can use
this action to manage root credentials even if the AWS account has no
associated users.
The example below shows how to:
* Delete an access key using delete_access_key.
Example
-------
import boto3
# Create IAM client
iam = boto3.client('iam')
# Delete access key
iam.delete_access_key(
AccessKeyId='ACCESS_KEY_ID',
UserName='IAM_USER_NAME'
)
Bucket policies
***************
An S3 bucket can have an optional policy that grants access
permissions to other AWS accounts or AWS Identity and Access
Management (IAM) users. Bucket policies are defined using the same
JSON format as a resource-based IAM policy.
Retrieve a bucket policy
========================
Retrieve a bucket's policy by calling the AWS SDK for Python
"get_bucket_policy" method. The method accepts a parameter that
specifies the bucket name.
import boto3
# Retrieve the policy of the specified bucket
s3 = boto3.client('s3')
result = s3.get_bucket_policy(Bucket='amzn-s3-demo-bucket')
print(result['Policy'])
Set a bucket policy
===================
A bucket's policy can be set by calling the "put_bucket_policy"
method.
The policy is defined in the same JSON format as an IAM policy. The
policy defined in the example below enables any user to retrieve any
object stored in the bucket identified by the "bucket_name" variable.
import json
# Create a bucket policy
bucket_name = 'amzn-s3-demo-bucket'
bucket_policy = {
'Version': '2012-10-17',
'Statement': [{
'Sid': 'AddPerm',
'Effect': 'Allow',
'Principal': '*',
'Action': ['s3:GetObject'],
'Resource': f'arn:aws:s3:::{bucket_name}/*'
}]
}
# Convert the policy from JSON dict to string
bucket_policy = json.dumps(bucket_policy)
# Set the new policy
s3 = boto3.client('s3')
s3.put_bucket_policy(Bucket=bucket_name, Policy=bucket_policy)
Delete a bucket policy
======================
A bucket's policy can be deleted by calling the "delete_bucket_policy"
method.
# Delete a bucket's policy
s3 = boto3.client('s3')
s3.delete_bucket_policy(Bucket='BUCKET_NAME')
Enabling long polling in Amazon SQS
***********************************
This Python example shows you how to enable long polling in Amazon SQS
in one of these ways:
* For a newly created queue
* For an existing queue
* Upon receipt of a message
The scenario
============
Long polling reduces the number of empty responses by allowing Amazon
SQS to wait a specified time for a message to become available in the
queue before sending a response. Also, long polling eliminates false
empty responses by querying all of the servers instead of a sampling
of servers. To enable long polling, you must specify a non-zero wait
time for received messages. You can do this by setting the
"ReceiveMessageWaitTimeSeconds" parameter of a queue or by setting the
"WaitTimeSeconds" parameter on a message when it is received.
In these examples, the AWS SDK for Python is used to enable long
polling using the following Amazon SQS methods.
* create_queue
* set_queue_attributes.
* receive_message.
For more information, see Amazon SQS Long Polling in the *Amazon
Simple Queue Service Developer Guide*.
Enable long polling when creating a queue
=========================================
The example below shows how to:
* Create a queue and enable long polling using create_queue.
Example
-------
import boto3
# Create SQS client
sqs = boto3.client('sqs')
# Create a SQS queue with long polling enabled
response = sqs.create_queue(
QueueName='SQS_QUEUE_NAME',
Attributes={'ReceiveMessageWaitTimeSeconds': '20'}
)
print(response['QueueUrl'])
Enable long polling on an existing queue
========================================
The example below shows how to:
* Enable long polling on an existing queue using set_queue_attributes.
Example
-------
import boto3
# Create SQS client
sqs = boto3.client('sqs')
queue_url = 'SQS_QUEUE_URL'
# Enable long polling on an existing SQS queue
sqs.set_queue_attributes(
QueueUrl=queue_url,
Attributes={'ReceiveMessageWaitTimeSeconds': '20'}
)
Enable long polling on message receipt
======================================
The example below shows how to:
* Enable long polling for a message on an SQS queue using
receive_message.
Example
-------
import boto3
# Create SQS client
sqs = boto3.client('sqs')
queue_url = 'SQS_QUEUE_URL'
# Long poll for message on provided SQS queue
response = sqs.receive_message(
QueueUrl=queue_url,
AttributeNames=[
'SentTimestamp'
],
MaxNumberOfMessages=1,
MessageAttributeNames=[
'All'
],
WaitTimeSeconds=20
)
print(response)
Managing IAM users
******************
This Python example shows you how to create a user, list users, update
a user name and delete a user.
The scenario
============
In this example, Python code is used to create and manage users in IAM.
The code uses the Amazon Web Services (AWS) SDK for Python to manage
users using these methods of the IAM client class:
* create_user
* get_paginator('list_users').
* update_user.
* delete_user.
All the example code for the Amazon Web Services (AWS) SDK for Python
is available here on GitHub.
For more information about IAM users, see IAM Users in the *IAM User
Guide*.
Prerequisite tasks
==================
To set up and run this example, you must first configure your AWS
credentials, as described in Quickstart.
Create a user
=============
Create a new IAM user for your AWS account.
For information about limitations on the number of IAM users you can
create, see Limitations on IAM Entities in the *IAM User Guide*.
The example below shows how to:
* Create a new IAM user using create_user.
Example
-------
import boto3
# Create IAM client
iam = boto3.client('iam')
# Create user
response = iam.create_user(
UserName='IAM_USER_NAME'
)
print(response)
List users in your account
==========================
List the IAM users.
The example below shows how to:
* List the IAM users using get_paginator('list_users').
For more information about paginators, see Paginators.
Example
-------
import boto3
# Create IAM client
iam = boto3.client('iam')
# List users with the pagination interface
paginator = iam.get_paginator('list_users')
for response in paginator.paginate():
print(response)
Update a user's name
====================
Update the name and/or the path of the specified IAM user.
To change a user's name or path, you must use the AWS CLI, Tools for
Windows PowerShell, or AWS API. There is no option in the console to
rename a user. For information about the permissions that you need in
order to rename a user, see Delegating Permissions to Administer IAM
Users, Groups, and Credentials in the *IAM User Guide*.
The example below shows how to:
* Update an IAM user name using update_user.
Example
-------
import boto3
# Create IAM client
iam = boto3.client('iam')
# Update a user name
iam.update_user(
UserName='IAM_USER_NAME',
NewUserName='NEW_IAM_USER_NAME'
)
Delete a user
=============
Delete the specified IAM user. The user must not belong to any groups
or have any access keys, signing certificates, or attached policies.
The example below shows how to:
* Delete an IAM user name using delete_user.
Example
-------
import boto3
# Create IAM client
iam = boto3.client('iam')
# Delete a user
iam.delete_user(
UserName='IAM_USER_NAME'
)
Sending events to Amazon CloudWatch Events
******************************************
This Python example shows you how to:
* Create and update a rule used to trigger an event
* Define one or more targets to respond to an event
* Send events that are matched to targets for handling
The scenario
============
CloudWatch Events delivers a near real-time stream of system events
that describe changes in Amazon Web Services (AWS) resources to any of
various targets. Using simple rules, you can match events and route
them to one or more target functions or streams.
In this example, Python code is used to send events to CloudWatch
Events. The code uses the AWS SDK for Python to manage instances using
these methods of the CloudWatchEvents client class:
* put_rule.
* put_targets.
* put_events.
For more information about CloudWatch Events, see Adding Events with
PutEvents in the *Amazon CloudWatch Events User Guide*.
All the example code for the Amazon Web Services (AWS) SDK for Python
is available here on GitHub.
Prerequisite tasks
==================
* Configure your AWS credentials, as described in Quickstart.
* Create a Lambda function using the **hello-world** blueprint to
serve as the target for events. To learn how, see Step 1: Create an
AWS Lambda function in the *Amazon CloudWatch Events User Guide*.
* Create an IAM role whose policy grants permission to CloudWatch
Events and that includes "events.amazonaws.com" as a trusted entity.
For more information about creating an IAM role, see Creating a Role
to Delegate Permissions to an AWS Service in the *IAM User Guide*.
Use the following role policy when creating the IAM role.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "CloudWatchEventsFullAccess",
"Effect": "Allow",
"Action": "events:*",
"Resource": "*"
},
{
"Sid": "IAMPassRoleForCloudWatchEvents",
"Effect": "Allow",
"Action": "iam:PassRole",
"Resource": "arn:aws:iam::*:role/AWS_Events_Invoke_Targets"
}
]
}
Use the following trust relationship when creating the IAM role.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "events.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
Create a scheduled rule
=======================
Create or update the specified rule. Rules are enabled by default, or
according to the value of the "State" parameter. You can disable a rule
using DisableRule (a brief sketch of this follows the example below).
The example below shows how to:
* Create a CloudWatch Events rule using put_rule.
Example
-------
import boto3
# Create CloudWatchEvents client
cloudwatch_events = boto3.client('events')
# Put an event rule
response = cloudwatch_events.put_rule(
Name='DEMO_EVENT',
RoleArn='IAM_ROLE_ARN',
ScheduleExpression='rate(5 minutes)',
State='ENABLED'
)
print(response['RuleArn'])
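As mentioned above, a rule can later be disabled (or re-enabled) without
deleting it. A brief sketch, reusing the rule name from the example:
import boto3

# Create CloudWatchEvents client
cloudwatch_events = boto3.client('events')

# Disable the rule; use enable_rule to turn it back on
cloudwatch_events.disable_rule(Name='DEMO_EVENT')
# cloudwatch_events.enable_rule(Name='DEMO_EVENT')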
Add an AWS Lambda function target
=================================
Add the specified targets to the specified rule, or update the targets
if they are already associated with the rule.
The example below shows how to:
* Add a target to a rule using put_targets.
Example
-------
import boto3
# Create CloudWatchEvents client
cloudwatch_events = boto3.client('events')
# Put target for rule
response = cloudwatch_events.put_targets(
Rule='DEMO_EVENT',
Targets=[
{
'Arn': 'LAMBDA_FUNCTION_ARN',
'Id': 'myCloudWatchEventsTarget',
}
]
)
print(response)
Send events
===========
Send custom events to Amazon CloudWatch Events so that they can be
matched to rules.
The example below shows how to:
* Send a custom event to CloudWatch Events using put_events.
Example
-------
import json
import boto3
# Create CloudWatchEvents client
cloudwatch_events = boto3.client('events')
# Put an event
response = cloudwatch_events.put_events(
Entries=[
{
'Detail': json.dumps({'key1': 'value1', 'key2': 'value2'}),
'DetailType': 'appRequestSubmitted',
'Resources': [
'RESOURCE_ARN',
],
'Source': 'com.company.myapp'
}
]
)
print(response['Entries'])
Uploading files
***************
The AWS SDK for Python provides a pair of methods to upload a file to
an S3 bucket.
The "upload_file" method accepts a file name, a bucket name, and an
object name. The method handles large files by splitting them into
smaller chunks and uploading each chunk in parallel.
import logging
import boto3
from botocore.exceptions import ClientError
import os
def upload_file(file_name, bucket, object_name=None):
"""Upload a file to an S3 bucket
:param file_name: File to upload
:param bucket: Bucket to upload to
:param object_name: S3 object name. If not specified then file_name is used
:return: True if file was uploaded, else False
"""
# If S3 object_name was not specified, use file_name
if object_name is None:
object_name = os.path.basename(file_name)
# Upload the file
s3_client = boto3.client('s3')
try:
response = s3_client.upload_file(file_name, bucket, object_name)
except ClientError as e:
logging.error(e)
return False
return True
The "upload_fileobj" method accepts a readable file-like object. The
file object must be opened in binary mode, not text mode.
s3 = boto3.client('s3')
with open("FILE_NAME", "rb") as f:
s3.upload_fileobj(f, "amzn-s3-demo-bucket", "OBJECT_NAME")
The "upload_file" and "upload_fileobj" methods are provided by the S3
"Client", "Bucket", and "Object" classes. The method functionality
provided by each class is identical. No benefits are gained by calling
one class's method over another's. Use whichever class is most
convenient.
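For illustration (a minimal sketch using the same placeholder names), the
same upload expressed through each of the three classes looks like this:
import boto3

s3 = boto3.resource('s3')

# Client, Bucket, and Object all expose the same upload behavior
s3.meta.client.upload_file('FILE_NAME', 'amzn-s3-demo-bucket', 'OBJECT_NAME')
s3.Bucket('amzn-s3-demo-bucket').upload_file('FILE_NAME', 'OBJECT_NAME')
s3.Object('amzn-s3-demo-bucket', 'OBJECT_NAME').upload_file('FILE_NAME')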
The ExtraArgs parameter
=======================
Both "upload_file" and "upload_fileobj" accept an optional "ExtraArgs"
parameter that can be used for various purposes. The list of valid
"ExtraArgs" settings is specified in the "ALLOWED_UPLOAD_ARGS"
attribute of the "S3Transfer" object at
"boto3.s3.transfer.S3Transfer.ALLOWED_UPLOAD_ARGS".
The following "ExtraArgs" setting specifies metadata to attach to the
S3 object.
s3.upload_file(
'FILE_NAME', 'amzn-s3-demo-bucket', 'OBJECT_NAME',
ExtraArgs={'Metadata': {'mykey': 'myvalue'}}
)
The following "ExtraArgs" setting assigns the canned ACL (access
control list) value 'public-read' to the S3 object.
s3.upload_file(
'FILE_NAME', 'amzn-s3-demo-bucket', 'OBJECT_NAME',
ExtraArgs={'ACL': 'public-read'}
)
The "ExtraArgs" parameter can also be used to set custom or multiple
ACLs.
s3.upload_file(
'FILE_NAME', 'amzn-s3-demo-bucket', 'OBJECT_NAME',
ExtraArgs={
'GrantRead': 'uri="http://acs.amazonaws.com/groups/global/AllUsers"',
'GrantFullControl': 'id="01234567890abcdefg"',
}
)
The Callback parameter
======================
Both "upload_file" and "upload_fileobj" accept an optional "Callback"
parameter. The parameter references a class that the Python SDK
invokes intermittently during the transfer operation.
Invoking a Python class executes the class's "__call__" method. For
each invocation, the class is passed the number of bytes transferred
up to that point. This information can be used to implement a progress
monitor.
The following "Callback" setting instructs the Python SDK to create an
instance of the "ProgressPercentage" class. During the upload, the
instance's "__call__" method will be invoked intermittently.
s3.upload_file(
'FILE_NAME', 'amzn-s3-demo-bucket', 'OBJECT_NAME',
Callback=ProgressPercentage('FILE_NAME')
)
An example implementation of the "ProgressPercentage" class is shown
below.
import os
import sys
import threading
class ProgressPercentage(object):
def __init__(self, filename):
self._filename = filename
self._size = float(os.path.getsize(filename))
self._seen_so_far = 0
self._lock = threading.Lock()
def __call__(self, bytes_amount):
# To simplify, assume this is hooked up to a single filename
with self._lock:
self._seen_so_far += bytes_amount
percentage = (self._seen_so_far / self._size) * 100
sys.stdout.write(
"\r%s %s / %s (%.2f%%)" % (
self._filename, self._seen_so_far, self._size,
percentage))
sys.stdout.flush()
Retries
*******
Overview
========
Your AWS client might see calls to AWS services fail due to unexpected
issues on the client side. Or calls might fail due to rate limiting
from the AWS service you're attempting to call. In either case, these
kinds of failures often don’t require special handling and the call
should be made again, often after a brief waiting period. Boto3
provides many features to assist in retrying client calls to AWS
services when these kinds of errors or exceptions are experienced.
This guide provides you with details on the following:
* How to find the available retry modes and the differences between
each mode
* How to configure your client to use each retry mode and other retry
configurations
* How to validate if your client performs a retry attempt
Available retry modes
=====================
Legacy retry mode
-----------------
Legacy mode is the default mode used by any Boto3 client you create.
As its name implies, "legacy mode" uses an older (v1) retry handler
that has limited functionality.
**Legacy mode’s functionality includes:**
* A default value of 5 for maximum attempts (including the initial
request). See Available configuration options for more information
on overwriting this value.
* Retry attempts for a limited number of errors/exceptions:
# General socket/connection errors
ConnectionError
ConnectionClosedError
ReadTimeoutError
EndpointConnectionError
# Service-side throttling/limit errors and exceptions
Throttling
ThrottlingException
ThrottledException
RequestThrottledException
ProvisionedThroughputExceededException
* Retry attempts on several HTTP status codes, including 429, 500,
502, 503, 504, and 509.
* Any retry attempt will include an exponential backoff by a base
factor of 2.
Note:
For more information about additional service-specific retry
policies, see the botocore references on GitHub.
Standard retry mode
-------------------
Standard mode is a retry mode that was introduced with the updated
retry handler (v2). This mode is a standardization of retry logic and
behavior that is consistent with other AWS SDKs. In addition to this
standardization, this mode also extends the functionality of retries
over that found in legacy mode.
**Standard mode’s functionality includes:**
* A default value of 3 for maximum attempts (including the initial
request). See Available configuration options for more information
on overwriting this value.
* Retry attempts for an expanded list of errors/exceptions:
# Transient errors/exceptions
RequestTimeout
RequestTimeoutException
PriorRequestNotComplete
ConnectionError
HTTPClientError
# Service-side throttling/limit errors and exceptions
Throttling
ThrottlingException
ThrottledException
RequestThrottledException
TooManyRequestsException
ProvisionedThroughputExceededException
TransactionInProgressException
RequestLimitExceeded
BandwidthLimitExceeded
LimitExceededException
RequestThrottled
SlowDown
EC2ThrottledException
* Retry attempts on nondescriptive, transient error codes.
Specifically, these HTTP status codes: 500, 502, 503, 504.
* Any retry attempt will include an exponential backoff by a base
factor of 2 for a maximum backoff time of 20 seconds.
Adaptive retry mode
-------------------
Adaptive retry mode is an experimental retry mode that includes all
the features of standard mode. In addition to the standard mode
features, adaptive mode also introduces client-side rate limiting
through the use of a token bucket and rate-limit variables that are
dynamically updated with each retry attempt. This mode offers
flexibility in client-side retries that adapts to the error/exception
state response from an AWS service.
With each new retry attempt, adaptive mode modifies the rate-limit
variables based on the error, exception, or HTTP status code presented
in the response from the AWS service. These rate-limit variables are
then used to calculate a new call rate for the client. Each
exception/error or non-success HTTP response (provided in the list
above) from an AWS service updates the rate-limit variables as retries
occur until success is reached, the token bucket is exhausted, or the
configured maximum attempts value is reached.
Note:
Adaptive mode is an experimental mode and is subject to change, both
in features and behavior.
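If you want to try adaptive mode, you enable it the same way as any other mode. The following is a minimal sketch that opts a single client into adaptive mode; the attempt count shown is illustrative only:
import boto3
from botocore.config import Config

# Illustrative values: opt this client into the experimental adaptive mode.
adaptive_config = Config(
    retries={
        'total_max_attempts': 5,
        'mode': 'adaptive'
    }
)

ec2 = boto3.client('ec2', config=adaptive_config)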
Configuring a retry mode
========================
Boto3 includes a variety of both retry configurations as well as
configuration methods to consider when creating your client object.
Available configuration options
-------------------------------
In Boto3, users can customize retry configurations:
* "retry_mode" - This tells Boto3 which retry mode to use. As
described previously, there are three retry modes available: legacy
(default), standard, and adaptive.
* "max_attempts" - This provides Boto3's retry handler with a value of
maximum attempts. **Important**: The behavior differs depending on
how it's configured:
* When set in your AWS config file or using the "AWS_MAX_ATTEMPTS"
environment variable: "max_attempts" includes the initial request
(total requests)
* When set in a "Config" object: "max_attempts" excludes the initial
request (retries only)
**Examples:**
* AWS config file with "max_attempts = 3": 1 initial request + 2
retries = 3 total attempts
* Environment variable "AWS_MAX_ATTEMPTS=3": 1 initial request + 2
retries = 3 total attempts
* Config object with "max_attempts: 3": 1 initial request + 3
retries = 4 total attempts
* "total_max_attempts" - Available only in "Config" objects, this
always represents total requests including the initial call. This
parameter was introduced to provide consistent behavior with the
"max_attempts" setting used in AWS config files and environment
variables. Note that "total_max_attempts" is not supported as an
environment variable or in AWS config files.
For consistency, consider using "total_max_attempts" in "Config"
objects instead of "max_attempts".
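As a quick illustration of the counting difference described above, the same intent can be expressed through the standard "AWS_RETRY_MODE" and "AWS_MAX_ATTEMPTS" environment variables or through a "Config" object; the values below are examples only:
import os

import boto3
from botocore.config import Config

# Set before any clients are created. AWS_MAX_ATTEMPTS counts the initial
# request, so this allows 1 initial request + 2 retries.
os.environ['AWS_RETRY_MODE'] = 'standard'
os.environ['AWS_MAX_ATTEMPTS'] = '3'
env_configured_client = boto3.client('s3')

# The equivalent Config-object setting: total_max_attempts also counts the
# initial request, unlike max_attempts in a Config object.
config_configured_client = boto3.client('s3', config=Config(
    retries={'total_max_attempts': 3, 'mode': 'standard'}))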
Defining a retry configuration in your AWS configuration file
-------------------------------------------------------------
The first way to define your retry configuration is to update your
global AWS configuration file. The default location for your AWS
config file is "~/.aws/config". Here’s an example of an AWS config
file with the retry configuration options used:
[myConfigProfile]
region = us-east-1
max_attempts = 10
retry_mode = standard
Any Boto3 script or code that uses your AWS config file inherits these
configurations when using your profile, unless otherwise explicitly
overwritten by a "Config" object when instantiating your client object
at runtime. If no configuration options are set, the default retry
mode value is "legacy", and the default "max_attempts" value is 5
(total attempts including initial request).
Defining a retry configuration in a Config object for your Boto3 client
-----------------------------------------------------------------------
The second way to define your retry configuration is to use a botocore
"Config" object, which gives you more flexibility: you specify the
retry configuration at runtime and pass it to your client. This method
is useful if you don't want to configure retry behavior globally in
your AWS config file.
Additionally, if your AWS configuration file is configured with retry
behavior, but you want to override those global settings, you can use
the "Config" object to override an individual client object at
runtime.
As shown in the following example, the "Config" object takes a
"retries" dictionary where you can supply configuration options such
as "total_max_attempts" and "mode", and the values for each.
config = Config(
    retries = {
        'total_max_attempts': 10,
        'mode': 'standard'
    }
)
Note:
The AWS configuration file uses "retry_mode" and the "Config" object
uses "mode". Although named differently, they both refer to the same
retry configuration whose options are legacy (default), standard,
and adaptive.
The following is an example of instantiating a "Config" object and
passing it into an Amazon EC2 client to use at runtime.
import boto3
from botocore.config import Config
config = Config(
    retries = {
        'total_max_attempts': 10,
        'mode': 'standard'
    }
)

ec2 = boto3.client('ec2', config=config)
Note:
As mentioned previously, if no configuration options are set, the
default mode is "legacy" and the default "total_max_attempts" is 5
(total attempts including initial request).
Validating retry attempts
=========================
To ensure that your retry configuration is correct and working
properly, there are a number of ways you can validate that your
client's retries are occurring.
Checking retry attempts in your client logs
-------------------------------------------
If you enable Boto3’s logging, you can validate and check your
client’s retry attempts in your client’s logs. Notice, however, that
you need to enable "DEBUG" mode in your logger to see any retry
attempts. The client log entries for retry attempts will appear
differently, depending on which retry mode you’ve configured.
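For example, one way to turn on that "DEBUG"-level output is with "boto3.set_stream_logger()". Note that botocore logs full request details at this level, so avoid it with sensitive payloads:
import logging

import boto3

# Log all botocore messages, including retry decisions, to stdout at DEBUG level.
boto3.set_stream_logger('botocore', logging.DEBUG)

s3 = boto3.client('s3')
s3.list_buckets()  # Any retry attempts for this call appear in the DEBUG output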
**If legacy mode is enabled:**
Retry messages are generated by "botocore.retryhandler". You’ll see
one of three messages:
* *No retry needed*
* *Retry needed, action of: <action_name>*
* *Reached the maximum number of retry attempts: <attempt_number>*
**If standard or adaptive mode is enabled:**
Retry messages are generated by "botocore.retries.standard". You’ll
see one of three messages:
* *Not retrying request*
* *Retry needed, retrying request after delay of: <delay_value>*
* *Retry needed but retry quota reached, not retrying request*
Checking retry attempts in an AWS service response
--------------------------------------------------
You can check the number of retry attempts your client has made by
parsing the response botocore provides when making a call to an AWS
service API. Responses are handled by an underlying botocore module,
and formatted into a dictionary that's part of the JSON response
object. You can access the number of retry attempts your client made
by reading the "RetryAttempts" key in the "ResponseMetadata"
dictionary:
'ResponseMetadata': {
    'RequestId': '1234567890ABCDEF',
    'HostId': 'host ID data will appear here as a hash',
    'HTTPStatusCode': 400,
    'HTTPHeaders': {'header metadata key/values will appear here'},
    'RetryAttempts': 4
}
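For example, the retry count for a call can be read directly from that metadata:
import boto3

ec2 = boto3.client('ec2')
response = ec2.describe_regions()

# 0 if the call succeeded on the first attempt.
print(response['ResponseMetadata']['RetryAttempts'])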
A Sample Tutorial
*****************
This tutorial will show you how to use Boto3 with an AWS service. In
this sample tutorial, you will learn how to use Boto3 with Amazon
Simple Queue Service (SQS).
SQS
===
SQS allows you to queue and then process messages. This tutorial
covers how to create a new queue, get and use an existing queue, push
new messages onto the queue, and process messages from the queue by
using Resources and Collections.
Creating a queue
================
Queues are created with a name. You may also optionally set queue
attributes, such as the number of seconds to wait before an item may
be processed. The examples below will use the queue name "test".
Before creating a queue, you must first get the SQS service resource:
# Get the service resource
sqs = boto3.resource('sqs')
# Create the queue. This returns an SQS.Queue instance
queue = sqs.create_queue(QueueName='test', Attributes={'DelaySeconds': '5'})
# You can now access identifiers and attributes
print(queue.url)
print(queue.attributes.get('DelaySeconds'))
Reference: "SQS.ServiceResource.create_queue()"
Warning:
The code above may throw an exception if you already have a queue
named "test".
Using an existing queue
=======================
It is possible to look up a queue by its name. If the queue does not
exist, then an exception will be thrown:
# Get the service resource
sqs = boto3.resource('sqs')
# Get the queue. This returns an SQS.Queue instance
queue = sqs.get_queue_by_name(QueueName='test')
# You can now access identifiers and attributes
print(queue.url)
print(queue.attributes.get('DelaySeconds'))
It is also possible to list all of your existing queues:
# Print out each queue name, which is part of its ARN
for queue in sqs.queues.all():
    print(queue.url)
Note:
To get the name from a queue, you must use its ARN, which is
available in the queue's "attributes" attribute. Using
"queue.attributes['QueueArn'].split(':')[-1]" will return its name.
Reference: "SQS.ServiceResource.get_queue_by_name()",
"SQS.ServiceResource.queues"
Sending messages
================
Sending a message adds it to the end of the queue:
# Get the service resource
sqs = boto3.resource('sqs')
# Get the queue
queue = sqs.get_queue_by_name(QueueName='test')
# Create a new message
response = queue.send_message(MessageBody='world')
# The response is NOT a resource, but gives you a message ID and MD5
print(response.get('MessageId'))
print(response.get('MD5OfMessageBody'))
You can also create messages with custom attributes:
queue.send_message(MessageBody='boto3', MessageAttributes={
    'Author': {
        'StringValue': 'Daniel',
        'DataType': 'String'
    }
})
Messages can also be sent in batches. For example, sending the two
messages described above in a single request would look like the
following:
response = queue.send_messages(Entries=[
    {
        'Id': '1',
        'MessageBody': 'world'
    },
    {
        'Id': '2',
        'MessageBody': 'boto3',
        'MessageAttributes': {
            'Author': {
                'StringValue': 'Daniel',
                'DataType': 'String'
            }
        }
    }
])
# Print out any failures
print(response.get('Failed'))
In this case, the response contains lists of "Successful" and "Failed"
messages, so you can retry failures if needed.
Reference: "SQS.Queue.send_message()", "SQS.Queue.send_messages()"
Processing messages
===================
Messages are processed in batches:
# Get the service resource
sqs = boto3.resource('sqs')
# Get the queue
queue = sqs.get_queue_by_name(QueueName='test')
# Process messages by printing out body and optional author name
for message in queue.receive_messages(MessageAttributeNames=['Author']):
    # Get the custom author message attribute if it was set
    author_text = ''
    if message.message_attributes is not None:
        author_name = message.message_attributes.get('Author').get('StringValue')
        if author_name:
            author_text = ' ({0})'.format(author_name)

    # Print out the body and author (if set)
    print('Hello, {0}!{1}'.format(message.body, author_text))

    # Let the queue know that the message is processed
    message.delete()
Given *only* the messages that were sent in a batch with
"SQS.Queue.send_messages()" in the previous section, the above code
will print out:
Hello, world!
Hello, boto3! (Daniel)
Reference: "SQS.Queue.receive_messages()", "SQS.Message.delete()"
Cloud9
******
You can use AWS Cloud9 with Boto3 to write, run, and debug your Python
code using just a browser. AWS Cloud9 provides an integrated
development environment (IDE) that includes tools such as a code
editor, debugger, and terminal. Because the AWS Cloud9 IDE is cloud
based, you can work on your Python projects from your office, home, or
anywhere using an internet-connected machine. For general information
about AWS Cloud9, see the AWS Cloud9 User Guide.
Prerequisites
=============
You must already have an AWS account. If you don't have one, do this
to create it:
1. Go to https://aws.amazon.com.
2. Choose **Sign In to the Console**.
3. Choose **Create a new AWS account**.
4. Follow the on-screen instructions to finish creating the account.
Step 1: Set up your AWS account
===============================
Start to use AWS Cloud9 by signing in to the AWS Cloud9 console as an
AWS Identity and Access Management (IAM) entity (for example, an IAM
user) in your AWS account who has access permissions for AWS Cloud9.
To set up an IAM entity in your AWS account to access AWS Cloud9, and
to sign in to the AWS Cloud9 console, see Team Setup in the *AWS
Cloud9 User Guide*.
Step 2: Create an environment
=============================
After you sign in to the AWS Cloud9 console, use the console to create
an AWS Cloud9 development environment. (A *development environment* is
a place where you store your project's files and where you run the
tools to develop your apps.) After you create the environment, AWS
Cloud9 automatically opens the IDE for that environment.
To create an AWS Cloud9 development environment, see Creating an
Environment in the *AWS Cloud9 User Guide*.
Step 3: Set up credentials
==========================
To call AWS services from Python code in your environment, you must
provide a set of AWS authentication credentials along with each call
that your code makes. If you created an AWS Cloud9 EC2 development
environment in the previous step, then AWS Cloud9 automatically set up
these credentials in your environment, and you can skip ahead to the
next step.
If, however, you created an AWS Cloud9 SSH development environment,
you must manually set up these credentials in your environment. To set
up these credentials, see Call AWS Services from an Environment in the
*AWS Cloud9 User Guide*.
Step 4: Install Boto3
=====================
After AWS Cloud9 opens the IDE for your development environment, use
the IDE to set up Boto3. To do this, use the terminal in the IDE to
run this command:
sudo pip install boto3
If the terminal isn't already open in the IDE, open it. To do this, on
the menu bar in the IDE, choose **Window, New Terminal**.
You can also install a specific version:
sudo pip install boto3==1.0.0
Note:
The latest development version can always be found on GitHub.
Step 5: Download example code
=============================
Using the terminal that you opened in the previous step, download the
example code for Boto3 into your AWS Cloud9 development environment.
To do this, use the terminal in the IDE to run this command:
git clone https://github.com/awsdocs/aws-doc-sdk-examples.git
This command downloads a copy of many of the code examples used across
the official AWS SDK documentation into your environment's root
directory.
To find the code examples for Boto3, use the **Environment** window to
open the "your-environment-name/aws-doc-sdk-
examples/python/example_code" directory, where "your-environment-name"
is the name of your development environment.
To learn how to work with these and other code examples, see Code
Examples.
Step 6: Run and debug code
==========================
To run your Python code in your AWS Cloud9 development environment,
see Run Your Code in the *AWS Cloud9 User Guide*.
To debug your Python code, see Debug Your Code in the *AWS Cloud9 User
Guide*.
Next steps
==========
Explore these resources to learn more about AWS Cloud9:
* Experiment with the Python Sample in the *AWS Cloud9 User Guide*.
* Learn how to use the AWS Cloud9 IDE by completing the IDE Tutorial
in the *AWS Cloud9 User Guide*.
AWS Key Management Service (AWS KMS) examples
*********************************************
Encrypting valuable data is a common security practice. The encryption
process typically uses one or more keys, sometimes referred to as data
keys and master keys. A data key is used to encrypt the data. A master
key manages one or more data keys. To prevent the data from being
decrypted by unauthorized users, both keys must be protected, often by
being encrypted themselves. The AWS Key Management Service (AWS KMS)
can assist in this key management.
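As a brief sketch of that idea, AWS KMS can generate a data key under one of your KMS keys and later decrypt the stored, encrypted copy of that data key; the key ID below is a placeholder:
import boto3

kms = boto3.client('kms')

# 'KEY_ID' is a placeholder for your KMS key ID, ARN, or alias.
data_key = kms.generate_data_key(KeyId='KEY_ID', KeySpec='AES_256')
plaintext_key = data_key['Plaintext']       # use to encrypt data locally, then discard
encrypted_key = data_key['CiphertextBlob']  # store alongside the encrypted data

# Later, recover the plaintext data key from its encrypted copy.
restored_key = kms.decrypt(CiphertextBlob=encrypted_key)['Plaintext']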
**Examples**
* Encrypt and decrypt a file
Amazon SES examples
*******************
Amazon Simple Email Service (SES) is an email platform that provides
an easy, cost-effective way for you to send and receive email using
your own email addresses and domains. For more information about
Amazon SES, see the Amazon SES documentation.
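For instance, the address verification example listed below comes down to a single call; the address shown is a placeholder:
import boto3

ses = boto3.client('ses')

# Placeholder address; Amazon SES sends a verification email to it.
ses.verify_email_identity(EmailAddress='sender@example.com')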
**Examples**
* Verifying email addresses
* Working with email templates
* Managing email filters
* Using email rules
Describe Amazon EC2 Regions and Availability Zones
**************************************************
Amazon EC2 is hosted in multiple locations worldwide. These locations
are composed of regions and Availability Zones. Each region is a
separate geographic area. Each region has multiple, isolated locations
known as Availability Zones. Amazon EC2 provides the ability to place
instances and data in multiple locations.
The scenario
============
In this example, Python code is used to get details about regions and
Availability Zones. The code uses the AWS SDK for Python to get the
data by using these methods of the EC2 client class:
* describe_regions.
* describe_availability_zones.
For more information about regions and Availability Zones, see Regions
and Availability Zones in the *Amazon EC2 User Guide for Linux
Instances* or Regions and Availability Zones in the *Amazon EC2 User
Guide for Windows Instances*.
All the example code for the Amazon Web Services (AWS) SDK for Python
is available here on GitHub.
Prerequisite tasks
==================
To set up and run this example, you must first configure your AWS
credentials, as described in Quickstart.
Describe Regions and Availability Zones
=======================================
* Describe one or more Regions that are currently available to you.
* Describe one or more of the Availability Zones that are available to
you. The results include zones only for the region you're currently
using. If there is an event impacting an Availability Zone, you can
use this request to view the state and any provided message for that
Availability Zone.
The example below shows how to:
* Describe Regions using describe_regions.
* Describe Availability Zones using describe_availability_zones.
Example
-------
import boto3
ec2 = boto3.client('ec2')
# Retrieves all regions/endpoints that work with EC2
response = ec2.describe_regions()
print('Regions:', response['Regions'])
# Retrieves availability zones only for region of the ec2 object
response = ec2.describe_availability_zones()
print('Availability Zones:', response['AvailabilityZones'])
Multi-Region Access Points
**************************
Amazon S3 Multi-Region Access Points (MRAPs) provide global endpoints
that applications can use to fulfill requests from Amazon S3 buckets
located in multiple AWS Regions. You can use them to build multi-
Region applications with the same architecture used by single-Region
applications, then run those applications anywhere in the world.
To learn more about using Multi-Region Access Points in your Boto3
application, see Using Multi-Region Access Points with supported API
operations, which offers a thorough explanation as well as examples
for the CLI and for the SDK for Python.
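As a rough sketch of that usage, supported S3 operations accept the Multi-Region Access Point ARN wherever a bucket name is expected (this requires the CRT extra, "boto3[crt]"; the account ID, alias, and key below are placeholders):
import boto3

s3 = boto3.client('s3')

# Placeholder ARN of the form arn:aws:s3::<account-id>:accesspoint/<mrap-alias>
mrap_arn = 'arn:aws:s3::111122223333:accesspoint/example-alias.mrap'

# The ARN is passed in place of a bucket name.
response = s3.get_object(Bucket=mrap_arn, Key='example-object.txt')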
Migrating to Python 3
*********************
Python 2.7 was deprecated by the Python Software Foundation on January
1, 2020 following a multi-year process of phasing it out. Because of
this, AWS has deprecated support for Python 2.7, which means that
releases of Boto3 issued after the deprecation date will no longer
work on Python 2.7.
This affects both modules that comprise the AWS SDK for Python:
Botocore (the underlying low-level module) and Boto3 (which implements
the API functionality and higher-level features).
Timeline
========
Going forward, all projects using Boto3 need to transition to a
supported version of Python 3. Boto3 and Botocore ended support for
Python 2.7 on July 15, 2021.
Updating your project to use Python 3
=====================================
Before you begin to update your project and environment, make sure
you’ve installed or updated to a supported version of Python as
described in upgrade to Python 3. You can get Python from the PSF web
site or using your local package manager.
After you have installed Python 3, you can upgrade the SDK. To do so,
you need to update the Boto3 Python package. You can do this globally
or within your virtual environment if you use one for your project.
To update the AWS SDK for Python
--------------------------------
1. Uninstall the currently installed copies of Boto3 and Botocore:
$ python -m pip uninstall boto3 botocore
2. Install the new version of Boto3. This will also install Botocore,
which it requires:
$ python3 -m pip install boto3
3. (Optional) Verify that the SDK is using the correct version of
Python:
$ python3 -c "import boto3, sys; print(f'{sys.version} \nboto3: {boto3.__version__}')"
3.x.y (default, Jan 7 2021, 17:11:21)
[GCC 7.3.1 20180712 (Red Hat 7.3.1-11)]
boto3: 1.16.15
If you're unable to upgrade to Python 3
=======================================
It may be that you're unable to upgrade to Python 3, for example if
you have a large project that's heavily dependent on syntax or
features that no longer work as desired in Python 3. It's also
possible that you need to postpone the Python transition while you
finish updates to your code.
Under these circumstances, you should plan on pinning your project's
install of Boto3 to the last release that supports the Python version
you use, then not updating Boto3 further. You can then keep using an
existing installation of Boto3 on Python 2, even after its deprecation
date, with the understanding that deprecated versions of Boto3 will
not receive further feature or security updates.
pip-based installations
-----------------------
If you installed Boto3 using **pip** 10.0 or later, you'll
automatically stop receiving Boto3 updates after the last Python 2
compatible version of the SDK is installed. If you're using an older
version of **pip**, you need to pin your Boto3 install to no later
than version 1.17.
Other installation methods
--------------------------
If you installed Boto3 and Botocore from source or by any other
method, be sure to download and install a version released prior to
the Python 2.7 deprecation date.
Amazon EC2
**********
Boto 2.x contains a number of customizations to make working with
Amazon EC2 instances, storage and networks easy. Boto3 exposes these
same objects through its resources interface in a unified and
consistent way.
Creating the connection
=======================
Boto3 has both low-level clients and higher-level resources. For
Amazon EC2, the higher-level resources are the most similar to Boto
2.x's "ec2" and "vpc" modules:
# Boto 2.x
import boto
ec2_connection = boto.connect_ec2()
vpc_connection = boto.connect_vpc()
# Boto3
import boto3
ec2 = boto3.resource('ec2')
Launching new instances
=======================
Launching new instances requires an image ID and the number of
instances to launch. It can also take several optional parameters,
such as the instance type and security group:
# Boto 2.x
ec2_connection.run_instances('<ami-image-id>')
# Boto3
ec2.create_instances(ImageId='<ami-image-id>', MinCount=1, MaxCount=5)
Stopping and terminating instances
==================================
Stopping and terminating multiple instances given a list of instance
IDs uses Boto3 collection filtering:
ids = ['instance-id-1', 'instance-id-2', ...]
# Boto 2.x
ec2_connection.stop_instances(instance_ids=ids)
ec2_connection.terminate_instances(instance_ids=ids)
# Boto3
ec2.instances.filter(InstanceIds=ids).stop()
ec2.instances.filter(InstanceIds=ids).terminate()
Checking what instances are running
===================================
Boto3 collections come in handy when listing all your running
instances as well. Every collection exposes a "filter" method that
allows you to pass additional parameters to the underlying service API
operation. The EC2 instances collection takes a parameter called
"Filters" which is a list of names and values, for example:
# Boto 2.x
reservations = ec2_connection.get_all_reservations(
    filters={'instance-state-name': 'running'})
for reservation in reservations:
    for instance in reservation.instances:
        print(instance.instance_id, instance.instance_type)
# Boto3
# Use the filter() method of the instances collection to retrieve
# all running EC2 instances.
instances = ec2.instances.filter(
    Filters=[{'Name': 'instance-state-name', 'Values': ['running']}])
for instance in instances:
    print(instance.id, instance.instance_type)
Checking health status of instances
===================================
It is possible to get scheduled maintenance information for your
running instances. At the time of this writing Boto3 does not have a
status resource, so you must drop down to the low-level client via
"ec2.meta.client":
# Boto 2.x
for status in ec2_connection.get_all_instance_statuses():
    print(status)

# Boto3
for status in ec2.meta.client.describe_instance_status()['InstanceStatuses']:
    print(status)
Working with EBS snapshots
==========================
Snapshots provide a way to create a copy of an EBS volume, as well as
make new volumes from the snapshot which can be attached to an
instance:
# Boto 2.x
snapshot = ec2_connection.create_snapshot('volume-id', 'Description')
volume = snapshot.create_volume('us-west-2')
ec2_connection.attach_volume(volume.id, 'instance-id', '/dev/sdy')
ec2_connection.delete_snapshot(snapshot.id)
# Boto3
snapshot = ec2.create_snapshot(VolumeId='volume-id', Description='description')
volume = ec2.create_volume(SnapshotId=snapshot.id, AvailabilityZone='us-west-2a')
ec2.Instance('instance-id').attach_volume(VolumeId=volume.id, Device='/dev/sdy')
snapshot.delete()
Creating a VPC, subnet, and gateway
===================================
Creating VPC resources in Boto3 is very similar to Boto 2.x:
# Boto 2.x
vpc = vpc_connection.create_vpc('10.0.0.0/24')
subnet = vpc_connection.create_subnet(vpc.id, '10.0.0.0/25')
gateway = vpc_connection.create_internet_gateway()
# Boto3
vpc = ec2.create_vpc(CidrBlock='10.0.0.0/24')
subnet = vpc.create_subnet(CidrBlock='10.0.0.0/25')
gateway = ec2.create_internet_gateway()
Attaching and detaching an elastic IP and gateway
=================================================
Elastic IPs and gateways provide a way for instances inside of a VPC
to communicate with the outside world:
# Boto 2.x
ec2_connection.attach_internet_gateway(gateway.id, vpc.id)
ec2_connection.detach_internet_gateway(gateway.id, vpc.id)
from boto.ec2.address import Address
address = Address()
address.allocation_id = 'eipalloc-35cf685d'
address.associate('i-71b2f60b')
address.disassociate()
# Boto3
gateway.attach_to_vpc(VpcId=vpc.id)
gateway.detach_from_vpc(VpcId=vpc.id)
address = ec2.VpcAddress('eipalloc-35cf685d')
address.associate('i-71b2f60b')
address.association.delete()
Quickstart
**********
This guide details the steps needed to install or update the AWS SDK
for Python.
The SDK is composed of two key Python packages: Botocore (the library
providing the low-level functionality shared between the Python SDK
and the AWS CLI) and Boto3 (the package implementing the Python SDK
itself).
Note:
Documentation and developers tend to refer to the AWS SDK for Python
as "Boto3," and this documentation often does so as well.
Installation
============
To use Boto3, you first need to install it and its dependencies.
Install or update Python
------------------------
Before installing Boto3, install Python 3.9 or later; support for
Python 3.8 and earlier is deprecated. After the deprecation date
listed for each Python version, new releases of Boto3 will not include
support for that version of Python. For details, including the
deprecation schedule and how to update your project to use Python 3.9,
see Migrating to Python 3.
For information about how to get the latest version of Python, see the
official Python documentation.
Set up a virtual environment
----------------------------
Once you have a supported version of Python installed, you should set
up your workspace by creating a virtual environment and activating it:
$ python -m venv .venv
...
$ source .venv/bin/activate
This provides an isolated space for your installation that will avoid
unexpected interactions with packages installed at the system level.
Skipping this step may result in unexpected dependency conflicts or
failures with other tools installed on your system.
Install Boto3
-------------
Install the latest Boto3 release via **pip**:
pip install boto3
If your project requires a specific version of Boto3, or has
compatibility concerns with certain versions, you may provide
constraints when installing:
# Install Boto3 version 1.0 specifically
pip install boto3==1.0.0
# Make sure Boto3 is no older than version 1.15.0
pip install 'boto3>=1.15.0'

# Avoid versions of Boto3 newer than version 1.15.3
pip install 'boto3<=1.15.3'
Note:
The latest development version of Boto3 is on GitHub.
Using the AWS Common Runtime (CRT)
----------------------------------
In addition to the default install of Boto3, you can choose to include
the new AWS Common Runtime (CRT). The AWS CRT is a collection of
modular packages that serve as a new foundation for AWS SDKs. Each
library provides better performance and minimal footprint for the
functional area it implements. Using the CRT, SDKs can share the same
base code when possible, improving consistency and throughput
optimizations across AWS SDKs.
When the AWS CRT is included, Boto3 uses it to incorporate features
not otherwise available in the AWS SDK for Python.
You'll find it used in features like:
* Amazon S3 Multi-Region Access Points
* Amazon S3 Object Integrity
* Amazon EventBridge Global Endpoints
However, Boto3 doesn't use the AWS CRT by default; you can opt in to
using it by specifying the "crt" extra feature when installing Boto3:
pip install boto3[crt]
To revert to the non-CRT version of Boto3, use this command:
pip uninstall awscrt
If you need to re-enable CRT, reinstall "boto3[crt]" to ensure you
get a compatible version of "awscrt":
pip install boto3[crt]
Configuration
=============
Before using Boto3, you need to set up authentication credentials for
your AWS account using either the IAM Console or the AWS CLI. You can
either choose an existing user or create a new one.
For instructions about how to create a user using the IAM Console, see
Creating IAM users. Once the user has been created, see Managing
access keys to learn how to create and retrieve the keys used to
authenticate the user.
If you have the AWS CLI installed, then you can use the **aws
configure** command to configure your credentials file:
aws configure
Alternatively, you can create the credentials file yourself. By
default, its location is "~/.aws/credentials". At a minimum, the
credentials file should specify the access key and secret access key.
In this example, the key and secret key for the account are specified
in the "default" profile:
[default]
aws_access_key_id = YOUR_ACCESS_KEY
aws_secret_access_key = YOUR_SECRET_KEY
You may also want to add a default region to the AWS configuration
file, which is located by default at "~/.aws/config":
[default]
region=us-east-1
Alternatively, you can pass a "region_name" when creating clients and
resources.
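For example (the region shown is arbitrary):
import boto3

# The region passed here applies only to this client.
ec2 = boto3.client('ec2', region_name='us-west-2')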
You have now configured credentials for the default profile as well as
a default region to use when creating connections. See Configuration
for in-depth configuration sources and options.
Using Boto3
===========
To use Boto3, you must first import it and indicate which service or
services you're going to use:
import boto3
# Let's use Amazon S3
s3 = boto3.resource('s3')
Now that you have an "s3" resource, you can send requests to the
service. The following code uses the "buckets" collection to print out
all bucket names:
# Print out bucket names
for bucket in s3.buckets.all():
    print(bucket.name)
You can also upload and download binary data. For example, the
following uploads a new file to S3, assuming that the bucket
"amzn-s3-demo-bucket" already exists:
# Upload a new file
with open('test.jpg', 'rb') as data:
    s3.Bucket('amzn-s3-demo-bucket').put_object(Key='test.jpg', Body=data)
Resources and Collections are covered in more detail in the following
sections.
What's new
**********
Boto3 is a ground-up rewrite of Boto. It uses a data-driven approach
to generate classes at runtime from JSON description files that are
shared between SDKs in various languages. This includes descriptions
for a high level, object oriented interface similar to those available
in previous versions of Boto.
Because Boto3 is generated from these shared JSON files, we get fast
updates to the latest services and features and a consistent API
across services. Community contributions to JSON description files in
other SDKs also benefit Boto3, just as contributions to Boto3 benefit
the other SDKs.
Major features
==============
Boto3 consists of the following major features:
* **Resources**: a high level, object oriented interface
* **Collections**: a tool to iterate and manipulate groups of
resources
* **Clients**: low level service connections
* **Paginators**: automatic paging of responses
* **Waiters**: a way to block until a certain state has been reached
Along with these major features, Boto3 also provides *sessions* and
per-session *credentials* & *configuration*, as well as basic
components like *authentication*, *parameter* & *response* handling,
an *event system* for customizations and logic to *retry* failed
requests.
Botocore
--------
Boto3 is built atop of a library called Botocore, which is shared by
the AWS CLI. Botocore provides the low level clients, session, and
credential & configuration data. Boto3 builds on top of Botocore by
providing its own session, resources and collections.
Working with security groups in Amazon EC2
******************************************
This Python example shows you how to:
* Get information about your security groups
* Create a security group to access an Amazon EC2 instance
* Delete an existing security group
The scenario
============
An Amazon EC2 security group acts as a virtual firewall that controls
the traffic for one or more instances. You add rules to each security
group to allow traffic to or from its associated instances. You can
modify the rules for a security group at any time; the new rules are
automatically applied to all instances that are associated with the
security group.
In this example, Python code is used to perform several Amazon EC2
operations involving security groups. The code uses the AWS SDK for
Python to manage security groups using these methods of the EC2 client
class:
* describe_security_groups.
* authorize_security_group_ingress.
* create_security_group.
* delete_security_group.
For more information about Amazon EC2 security groups, see Amazon EC2
Security Groups for Linux Instances in the *Amazon EC2 User Guide for
Linux Instances* or Amazon EC2 Security Groups for Windows Instances
in the *Amazon EC2 User Guide for Windows Instances*.
All the example code for the Amazon Web Services (AWS) SDK for Python
is available here on GitHub.
Prerequisite tasks
==================
To set up and run this example, you must first configure your AWS
credentials, as described in Quickstart.
Describe security groups
========================
Describe one or more of your security groups.
A security group is for use with instances either in the EC2-Classic
platform or in a specific VPC. For more information, see Amazon EC2
Security Groups in the *Amazon Elastic Compute Cloud User Guide* and
Security Groups for Your VPC in the *Amazon Virtual Private Cloud User
Guide*.
Warning:
We are retiring EC2-Classic on August 15, 2022. We recommend that
you migrate from EC2-Classic to a VPC. For more information, see
*Migrate from EC2-Classic to a VPC* in the Amazon EC2 User Guide for
Linux Instances or the Amazon EC2 User Guide for Windows Instances. Also
see the blog post EC2-Classic Networking is Retiring – Here's How to
Prepare.
The example below shows how to:
* Describe a Security Group using describe_security_groups.
Example
-------
import boto3
from botocore.exceptions import ClientError
ec2 = boto3.client('ec2')
try:
    response = ec2.describe_security_groups(GroupIds=['SECURITY_GROUP_ID'])
    print(response)
except ClientError as e:
    print(e)
Create a security group and rules
=================================
* Create a security group.
* Add one or more ingress rules to a security group.
Rule changes are propagated to instances within the security group
as quickly as possible. However, a small delay might occur.
The example below shows how to:
* Create a Security Group using create_security_group.
* Add an ingress rule to a security group using
authorize_security_group_ingress.
Example
-------
import boto3
from botocore.exceptions import ClientError
ec2 = boto3.client('ec2')
response = ec2.describe_vpcs()
vpc_id = response.get('Vpcs', [{}])[0].get('VpcId', '')
try:
    response = ec2.create_security_group(GroupName='SECURITY_GROUP_NAME',
                                         Description='DESCRIPTION',
                                         VpcId=vpc_id)
    security_group_id = response['GroupId']
    print('Security Group Created %s in vpc %s.' % (security_group_id, vpc_id))

    data = ec2.authorize_security_group_ingress(
        GroupId=security_group_id,
        IpPermissions=[
            {'IpProtocol': 'tcp',
             'FromPort': 80,
             'ToPort': 80,
             'IpRanges': [{'CidrIp': '0.0.0.0/0'}]},
            {'IpProtocol': 'tcp',
             'FromPort': 22,
             'ToPort': 22,
             'IpRanges': [{'CidrIp': '0.0.0.0/0'}]}
        ])
    print('Ingress Successfully Set %s' % data)
except ClientError as e:
    print(e)
Delete a security group
=======================
If you attempt to delete a security group that is associated with an
instance, or is referenced by another security group, the operation
fails with "InvalidGroup.InUse" in EC2-Classic or
"DependencyViolation" in EC2-VPC.
Warning:
We are retiring EC2-Classic on August 15, 2022. We recommend that
you migrate from EC2-Classic to a VPC. For more information, see
*Migrate from EC2-Classic to a VPC* in the Amazon EC2 User Guide for
Linux Instances or the Amazon EC2 User Guide for Windows Instances. Also
see the blog post EC2-Classic Networking is Retiring – Here's How to
Prepare.
The example below shows how to:
* Delete a security group using delete_security_group.
Example
-------
import boto3
from botocore.exceptions import ClientError
# Create EC2 client
ec2 = boto3.client('ec2')
# Delete security group
try:
    response = ec2.delete_security_group(GroupId='SECURITY_GROUP_ID')
    print('Security Group Deleted')
except ClientError as e:
    print(e)
Resources
*********
Overview
========
Note:
The AWS Python SDK team does not intend to add new features to the
resources interface in boto3. Existing interfaces will continue to
operate during boto3's lifecycle. Customers can find access to newer
service features through the client interface.
Resources represent an object-oriented interface to Amazon Web
Services (AWS). They provide a higher-level abstraction than the raw,
low-level calls made by service clients. To use resources, you invoke
the "resource()" method of a "Session" and pass in a service name:
# Get resources from the default session
sqs = boto3.resource('sqs')
s3 = boto3.resource('s3')
Every resource instance has a number of attributes and methods. These
can conceptually be split up into identifiers, attributes, actions,
references, sub-resources, and collections. Each of these is described
in further detail below and in the following section.
Resources themselves can also be conceptually split into service
resources (like "sqs", "s3", "ec2", etc) and individual resources
(like "sqs.Queue" or "s3.Bucket"). Service resources *do not* have
identifiers or attributes. The two share the same components
otherwise.
Identifiers and attributes
==========================
An identifier is a unique value that is used to call actions on the
resource. Resources **must** have at least one identifier, except for
the top-level service resources (e.g. "sqs" or "s3"). An identifier is
set at instance creation-time, and failing to provide all necessary
identifiers during instantiation will result in an exception. Examples
of identifiers:
# SQS Queue (url is an identifier)
queue = sqs.Queue(url='http://...')
print(queue.url)
# S3 Object (bucket_name and key are identifiers)
obj = s3.Object(bucket_name='amzn-s3-demo-bucket', key='test.py')
print(obj.bucket_name)
print(obj.key)
# Raises exception, missing identifier: key!
obj = s3.Object(bucket_name='amzn-s3-demo-bucket')
Identifiers may also be passed as positional arguments:
# SQS Queue
queue = sqs.Queue('http://...')
# S3 Object
obj = s3.Object('boto3', 'test.py')
# Raises exception, missing key!
obj = s3.Object('boto3')
Identifiers also play a role in resource instance equality. For two
instances of a resource to be considered equal, their identifiers must
be equal:
>>> bucket1 = s3.Bucket('amzn-s3-demo-bucket1')
>>> bucket2 = s3.Bucket('amzn-s3-demo-bucket1')
>>> bucket3 = s3.Bucket('amzn-s3-demo-bucket3')
>>> bucket1 == bucket2
True
>>> bucket1 == bucket3
False
Note:
Only identifiers are taken into account for instance equality.
Region, account ID and other data members are not considered. When
using temporary credentials or multiple regions in your code please
keep this in mind.
Resources may also have attributes, which are *lazy-loaded* properties
on the instance. They may be set at creation time from the response of
an action on another resource, or they may be set when accessed or via
an explicit call to the "load" or "reload" action. Examples of
attributes:
# SQS Message
message.body
# S3 Object
obj.last_modified
obj.e_tag
Warning:
Attributes may incur a load action when first accessed. If latency
is a concern, then manually calling "load" will allow you to control
exactly when the load action (and thus latency) is invoked. The
documentation for each resource explicitly lists its attributes.
Additionally, attributes may be reloaded after an action
has been performed on the resource. For example, if the
"last_modified" attribute of an S3 object is loaded and then a "put"
action is called, then the next time you access "last_modified" it
will reload the object's metadata.
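A small sketch of triggering the load explicitly (the bucket and key are placeholders):
import boto3

s3 = boto3.resource('s3')
obj = s3.Object('amzn-s3-demo-bucket', 'test.py')

# Perform the load action now instead of on first attribute access.
obj.load()
print(obj.last_modified)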
Actions
=======
An action is a method which makes a call to the service. Actions may
return a low-level response, a new resource instance or a list of new
resource instances. Actions automatically set the resource identifiers
as parameters, but allow you to pass additional parameters via keyword
arguments. Examples of actions:
# SQS Queue
messages = queue.receive_messages()
# SQS Message
for message in messages:
    message.delete()
# S3 Object
obj = s3.Object(bucket_name='amzn-s3-demo-bucket', key='test.py')
response = obj.get()
data = response['Body'].read()
Examples of sending additional parameters:
# SQS Service
queue = sqs.get_queue_by_name(QueueName='test')
# SQS Queue
queue.send_message(MessageBody='hello')
Note:
Parameters **must** be passed as keyword arguments. They will not
work as positional arguments.
References
==========
A reference is an attribute which may be "None" or a related resource
instance. The resource instance does not share identifiers with its
reference resource, that is, it is not a strict parent to child
relationship. In relational terms, these can be considered many-to-one
or one-to-one. Examples of references:
# EC2 Instance
instance.subnet
instance.vpc
In the above example, an EC2 instance may have exactly one associated
subnet, and may have exactly one associated VPC. The subnet does not
require the instance ID to exist, hence it is not a parent to child
relationship.
Sub-resources
=============
A sub-resource is similar to a reference, but is a related class
rather than an instance. Sub-resources, when instantiated, share
identifiers with their parent. It is a strict parent-child
relationship. In relational terms, these can be considered one-to-
many. Examples of sub-resources:
# SQS
queue = sqs.Queue(url='...')
message = queue.Message(receipt_handle='...')
print(queue.url == message.queue_url)
print(message.receipt_handle)
# S3
obj = bucket.Object(key='new_file.txt')
print(obj.bucket_name)
print(obj.key)
Because an SQS message cannot exist without a queue, and an S3 object
cannot exist without a bucket, these are parent to child
relationships.
Waiters
=======
A waiter is similar to an action. A waiter will poll the status of a
resource and suspend execution until the resource reaches the state
that is being polled for or a failure occurs while polling. Waiters
automatically set the resource identifiers as parameters, but allow
you to pass additional parameters via keyword arguments. Examples of
waiters include:
# S3: Wait for a bucket to exist.
bucket.wait_until_exists()
# EC2: Wait for an instance to reach the running state.
instance.wait_until_running()
Multithreading or multiprocessing with resources
================================================
Resource instances are **not** thread safe and should not be shared
across threads or processes. These special classes contain additional
meta data that cannot be shared. It's recommended to create a new
Resource for each thread or process:
import boto3
import boto3.session
import threading
class MyTask(threading.Thread):
    def run(self):
        # Here we create a new session per thread
        session = boto3.session.Session()

        # Next, we create a resource client using our thread's session object
        s3 = session.resource('s3')

        # Put your thread-safe code here
In the example above, each thread would have its own Boto3 session and
its own instance of the S3 resource. This is a good idea because
resources contain shared data when loaded and calling actions,
accessing properties, or manually loading or reloading the resource
can modify this data.
Session reference
*****************
class boto3.session.Session(aws_access_key_id=None, aws_secret_access_key=None, aws_session_token=None, region_name=None, botocore_session=None, profile_name=None, aws_account_id=None)
A session stores configuration state and allows you to create
service clients and resources.
Parameters:
* **aws_access_key_id** (*string*) -- AWS access key ID
* **aws_secret_access_key** (*string*) -- AWS secret access key
* **aws_session_token** (*string*) -- AWS temporary session
token
* **region_name** (*string*) -- Default region when creating new
connections
* **botocore_session** (*botocore.session.Session*) -- Use this
Botocore session instead of creating a new default one.
* **profile_name** (*string*) -- The name of a profile to use.
If not given, then the default profile is used.
* **aws_account_id** (*string*) -- AWS account ID
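A minimal usage sketch (the profile name is a placeholder for a profile in your shared configuration files):
import boto3

session = boto3.session.Session(profile_name='PROFILE_NAME', region_name='us-east-1')
s3 = session.client('s3')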
property available_profiles
The profiles available to the session credentials
client(service_name, region_name=None, api_version=None, use_ssl=True, verify=None, endpoint_url=None, aws_access_key_id=None, aws_secret_access_key=None, aws_session_token=None, config=None, aws_account_id=None)
Create a low-level service client by name.
Parameters:
* **service_name** (*string*) -- The name of a service, e.g.
's3' or 'ec2'. You can get a list of available services via
"get_available_services()".
* **region_name** (*string*) -- The name of the region
associated with the client. A client is associated with a
single region.
* **api_version** (*string*) -- The API version to use. By
default, botocore will use the latest API version when
creating a client. You only need to specify this parameter
if you want to use a previous API version of the client.
* **use_ssl** (*boolean*) -- Whether or not to use SSL. By
default, SSL is used. Note that not all services support
non-ssl connections.
* **verify** (*boolean/string*) --
Whether or not to verify SSL certificates. By default SSL
certificates are verified. You can provide the following
values:
* False - do not validate SSL certificates. SSL will still
be used (unless use_ssl is False), but SSL certificates
will not be verified.
* path/to/cert/bundle.pem - A filename of the CA cert
bundle to use. You can specify this argument if you
want to use a different CA cert bundle than the one used
by botocore.
* **endpoint_url** (*string*) -- The complete URL to use for
the constructed client. Normally, botocore will
automatically construct the appropriate URL to use when
communicating with a service. You can specify a complete
URL (including the "http/https" scheme) to override this
behavior. If this value is provided, then "use_ssl" is
ignored.
* **aws_access_key_id** (*string*) -- The access key to use
when creating the client. This is entirely optional, and
if not provided, the credentials configured for the session
will automatically be used. You only need to provide this
argument if you want to override the credentials used for
this specific client.
* **aws_secret_access_key** (*string*) -- The secret key to
use when creating the client. Same semantics as
aws_access_key_id above.
* **aws_session_token** (*string*) -- The session token to
use when creating the client. Same semantics as
aws_access_key_id above.
* **config** (*botocore.client.Config*) -- Advanced client
configuration options. If region_name is specified in the
client config, its value will take precedence over
environment variables and configuration values, but not
over a region_name value passed explicitly to the method.
See botocore config documentation for more details.
* **aws_account_id** (*string*) -- The account id to use when
creating the client. Same semantics as aws_access_key_id
above.
Returns:
Service client instance
property events
The event emitter for a session
get_available_partitions()
Lists the available partitions
Return type:
list
Returns:
Returns a list of partition names (e.g., ["aws", "aws-cn"])
get_available_regions(service_name, partition_name='aws', allow_non_regional=False)
Lists the region and endpoint names of a particular partition.
The list of regions returned by this method are regions that are
explicitly known by the client to exist and is not
comprehensive. A region not returned in this list may still be
available for the provided service.
Parameters:
* **service_name** (*string*) -- Name of a service to list
endpoint for (e.g., s3).
* **partition_name** (*string*) -- Name of the partition to
limit endpoints to. (e.g., aws for the public AWS
endpoints, aws-cn for AWS China endpoints, aws-us-gov for
AWS GovCloud (US) Endpoints, etc.)
* **allow_non_regional** (*bool*) -- Set to True to include
endpoints that are not regional endpoints (e.g.,
s3-external-1, fips-us-gov-west-1, etc).
Returns:
Returns a list of endpoint names (e.g., ["us-east-1"]).
get_available_resources()
Get a list of available services that can be loaded as resource
clients via "Session.resource()".
Return type:
list
Returns:
List of service names
get_available_services()
Get a list of available services that can be loaded as low-level
clients via "Session.client()".
Return type:
list
Returns:
List of service names
get_credentials()
Return the "botocore.credentials.Credentials" object associated
with this session. If the credentials have not yet been loaded,
this will attempt to load them. If they have already been
loaded, this will return the cached credentials.
get_partition_for_region(region_name)
Lists the partition name of a particular region.
Parameters:
**region_name** (*string*) -- Name of the region to list
partition for (e.g., us-east-1).
Return type:
string
Returns:
Returns the respective partition name (e.g., aws).
property profile_name
The **read-only** profile name.
property region_name
The **read-only** region name.
resource(service_name, region_name=None, api_version=None, use_ssl=True, verify=None, endpoint_url=None, aws_access_key_id=None, aws_secret_access_key=None, aws_session_token=None, config=None)
Create a resource service client by name.
Parameters:
* **service_name** (*string*) -- The name of a service, e.g.
's3' or 'ec2'. You can get a list of available services via
"get_available_resources()".
* **region_name** (*string*) -- The name of the region
associated with the client. A client is associated with a
single region.
* **api_version** (*string*) -- The API version to use. By
default, botocore will use the latest API version when
creating a client. You only need to specify this parameter
if you want to use a previous API version of the client.
* **use_ssl** (*boolean*) -- Whether or not to use SSL. By
default, SSL is used. Note that not all services support
non-ssl connections.
* **verify** (*boolean/string*) --
Whether or not to verify SSL certificates. By default SSL
certificates are verified. You can provide the following
values:
* False - do not validate SSL certificates. SSL will still
be used (unless use_ssl is False), but SSL certificates
will not be verified.
* path/to/cert/bundle.pem - A filename of the CA cert
bundle to use. You can specify this argument if you
want to use a different CA cert bundle than the one used
by botocore.
* **endpoint_url** (*string*) -- The complete URL to use for
the constructed client. Normally, botocore will
automatically construct the appropriate URL to use when
communicating with a service. You can specify a complete
URL (including the "http/https" scheme) to override this
behavior. If this value is provided, then "use_ssl" is
ignored.
* **aws_access_key_id** (*string*) -- The access key to use
when creating the client. This is entirely optional, and
if not provided, the credentials configured for the session
will automatically be used. You only need to provide this
argument if you want to override the credentials used for
this specific client.
* **aws_secret_access_key** (*string*) -- The secret key to
use when creating the client. Same semantics as
aws_access_key_id above.
* **aws_session_token** (*string*) -- The session token to
use when creating the client. Same semantics as
aws_access_key_id above.
* **config** (*botocore.client.Config*) --
Advanced client configuration options. If region_name is
specified in the client config, its value will take
precedence over environment variables and configuration
values, but not over a region_name value passed explicitly
to the method. If user_agent_extra is specified in the
client config, it overrides the default user_agent_extra
provided by the resource API. See botocore config
documentation for more details.
Returns:
Subclass of "ServiceResource"
Boto3 reference
***************
boto3.client(*args, **kwargs)
Create a low-level service client by name using the default
session.
See "boto3.session.Session.client()".
boto3.resource(*args, **kwargs)
Create a resource service client by name using the default session.
See "boto3.session.Session.resource()".
boto3.set_stream_logger(name='boto3', level=10, format_string=None)
Add a stream handler for the given name and level to the logging
module. By default, this logs all boto3 messages to "stdout".
>>> import boto3
>>> boto3.set_stream_logger('boto3.resources', logging.INFO)
For debugging purposes a good choice is to set the stream logger to
"''" which is equivalent to saying "log everything".
Warning:
Be aware that when logging anything from "'botocore'" the full
wire trace will appear in your logs. If your payloads contain
sensitive data this should not be used in production.
Parameters:
* **name** (*string*) -- Log name
* **level** (*int*) -- Logging level, e.g. "logging.INFO"
* **format_string** (*str*) -- Log message format
boto3.setup_default_session(**kwargs)
Set up a default session, passing through any parameters to the
session constructor. There is no need to call this unless you wish
to pass custom parameters, because a default session will be
created for you.
Collections reference
*********************
class boto3.resources.collection.CollectionFactory
A factory to create new "CollectionManager" and
"ResourceCollection" subclasses from a "Collection" model. These
subclasses include methods to perform batch operations.
load_from_definition(resource_name, collection_model, service_context, event_emitter)
Loads a collection from a model, creating a new
"CollectionManager" subclass with the correct properties and
methods, named based on the service and resource name, e.g.
ec2.InstanceCollectionManager. It also creates a new
"ResourceCollection" subclass which is used by the new manager
class.
Parameters:
* **resource_name** (*string*) -- Name of the resource to
look up. For services, this should match the
"service_name".
* **collection_model** ("Collection") -- The collection model
* **service_context** ("ServiceContext") -- Context about the
AWS service
* **event_emitter** ("HierarchicalEmitter") -- An event
emitter
Return type:
Subclass of "CollectionManager"
Returns:
The collection class.
class boto3.resources.collection.CollectionManager(collection_model, parent, factory, service_context)
A collection manager provides access to resource collection
instances, which can be iterated and filtered. The manager exposes
some convenience functions that are also found on resource
collections, such as "all()" and "filter()".
Get all items:
>>> for bucket in s3.buckets.all():
... print(bucket.name)
Get only some items via filtering:
>>> for queue in sqs.queues.filter(QueueNamePrefix='AWS'):
... print(queue.url)
Get whole pages of items:
>>> for page in s3.Bucket('boto3').objects.pages():
... for obj in page:
... print(obj.key)
A collection manager is not iterable. You **must** call one of the
methods that return a "ResourceCollection" before trying to
iterate, slice, or convert to a list.
See the Collections guide for a high-level overview of collections,
including when remote service requests are performed.
Parameters:
* **model** -- Collection model
* **parent** ("ServiceResource") -- The collection's parent
resource
* **factory** ("ResourceFactory") -- The resource factory to
create new resources
* **service_context** ("ServiceContext") -- Context about the
AWS service
all()
Get all items from the collection, optionally with a custom page
size and item count limit.
This method returns an iterable generator which yields
individual resource instances. Example use:
# Iterate through items
>>> for queue in sqs.queues.all():
... print(queue.url)
'https://url1'
'https://url2'
# Convert to list
>>> queues = list(sqs.queues.all())
>>> len(queues)
2
filter(**kwargs)
Get items from the collection, passing keyword arguments along
as parameters to the underlying service operation, which are
typically used to filter the results.
This method returns an iterable generator which yields
individual resource instances. Example use:
# Iterate through items
>>> for queue in sqs.queues.filter(Param='foo'):
... print(queue.url)
'https://url1'
'https://url2'
# Convert to list
>>> queues = list(sqs.queues.filter(Param='foo'))
>>> len(queues)
2
Return type:
"ResourceCollection"
iterator(**kwargs)
Get a resource collection iterator from this manager.
Return type:
"ResourceCollection"
Returns:
An iterable representing the collection of resources
limit(count)
Return at most this many resources.
>>> for bucket in s3.buckets.limit(5):
... print(bucket.name)
'bucket1'
'bucket2'
'bucket3'
'bucket4'
'bucket5'
Parameters:
**count** (*int*) -- Return no more than this many items
Return type:
"ResourceCollection"
page_size(count)
Fetch at most this many resources per service request.
>>> for obj in s3.Bucket('boto3').objects.page_size(100):
... print(obj.key)
Parameters:
**count** (*int*) -- Fetch this many items per request
Return type:
"ResourceCollection"
pages()
A generator which yields pages of resource instances after doing
the appropriate service operation calls and handling any
pagination on your behalf. Non-paginated calls will return a
single page of items.
Page size, item limit, and filter parameters are applied if they
have previously been set.
>>> bucket = s3.Bucket('boto3')
>>> for page in bucket.objects.pages():
... for obj in page:
... print(obj.key)
'key1'
'key2'
Return type:
list("ServiceResource")
Returns:
List of resource instances
class boto3.resources.collection.ResourceCollection(model, parent, handler, **kwargs)
Represents a collection of resources, which can be iterated
through, optionally with filtering. Collections automatically
handle pagination for you.
See Collections for a high-level overview of collections, including
when remote service requests are performed.
Parameters:
* **model** ("Collection") -- Collection model
* **parent** ("ServiceResource") -- The collection's parent
resource
* **handler** ("ResourceHandler") -- The resource response
handler used to create resource instances
all()
Get all items from the collection, optionally with a custom page
size and item count limit.
This method returns an iterable generator which yields
individual resource instances. Example use:
# Iterate through items
>>> for queue in sqs.queues.all():
... print(queue.url)
'https://url1'
'https://url2'
# Convert to list
>>> queues = list(sqs.queues.all())
>>> len(queues)
2
filter(**kwargs)
Get items from the collection, passing keyword arguments along
as parameters to the underlying service operation, which are
typically used to filter the results.
This method returns an iterable generator which yields
individual resource instances. Example use:
# Iterate through items
>>> for queue in sqs.queues.filter(Param='foo'):
... print(queue.url)
'https://url1'
'https://url2'
# Convert to list
>>> queues = list(sqs.queues.filter(Param='foo'))
>>> len(queues)
2
Return type:
"ResourceCollection"
limit(count)
Return at most this many resources.
>>> for bucket in s3.buckets.limit(5):
... print(bucket.name)
'bucket1'
'bucket2'
'bucket3'
'bucket4'
'bucket5'
Parameters:
**count** (*int*) -- Return no more than this many items
Return type:
"ResourceCollection"
page_size(count)
Fetch at most this many resources per service request.
>>> for obj in s3.Bucket('boto3').objects.page_size(100):
... print(obj.key)
Parameters:
**count** (*int*) -- Fetch this many items per request
Return type:
"ResourceCollection"
pages()
A generator which yields pages of resource instances after doing
the appropriate service operation calls and handling any
pagination on your behalf. Non-paginated calls will return a
single page of items.
Page size, item limit, and filter parameters are applied if they
have previously been set.
>>> bucket = s3.Bucket('boto3')
>>> for page in bucket.objects.pages():
... for obj in page:
... print(obj.key)
'key1'
'key2'
Return type:
list("ServiceResource")
Returns:
List of resource instances
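Because each of these methods returns a new "ResourceCollection", they
can be chained, and no service requests are made until iteration
begins. A minimal sketch (the bucket name and prefix are
placeholders):
import boto3
s3 = boto3.resource('s3')
bucket = s3.Bucket('amzn-s3-demo-bucket')
# Filter, then cap page size and total item count before iterating
for obj in bucket.objects.filter(Prefix='logs/').page_size(100).limit(250):
    print(obj.key)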
Resources reference
*******************
Resource model
==============
The models defined in this file represent the resource JSON
description format and provide a layer of abstraction from the raw
JSON. The advantages of this are:
* Pythonic interface (e.g. "action.request.operation")
* Consumers need not change for minor JSON changes (e.g. renamed
field)
These models are used both by the resource factory to generate
resource classes as well as by the documentation generator.
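A minimal, hypothetical sketch of loading such a model directly (real
definitions ship with botocore's resource JSON data; the keys shown
here are heavily simplified for illustration only):
from boto3.resources.model import ResourceModel
# Hypothetical, simplified resource definition
definition = {
    'identifiers': [{'name': 'Name'}],
    'shape': 'Bucket',
}
model = ResourceModel('Bucket', definition, {'Bucket': definition})
print(model.name)              # 'Bucket'
print(model.shape)             # 'Bucket' -- the backing service shape
print(len(model.identifiers))  # 1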
class boto3.resources.model.Action(name, definition, resource_defs)
A service operation action.
Parameters:
* **name** (*string*) -- The name of the action
* **definition** (*dict*) -- The JSON definition
* **resource_defs** (*dict*) -- All resources defined in the
service
name
("string") The name of the action
path
("string") The JMESPath search path or "None"
request
("Request") This action's request or "None"
resource
("ResponseResource") This action's resource or "None"
class boto3.resources.model.Collection(name, definition, resource_defs)
A group of resources. See "Action".
Parameters:
* **name** (*string*) -- The name of the collection
* **definition** (*dict*) -- The JSON definition
* **resource_defs** (*dict*) -- All resources defined in the
service
property batch_actions
Get a list of batch actions supported by the resource type
contained in this action. This is a shortcut for accessing the
same information through the resource model.
Return type:
list("Action")
name
("string") The name of the action
path
("string") The JMESPath search path or "None"
request
("Request") This action's request or "None"
resource
("ResponseResource") This action's resource or "None"
class boto3.resources.model.DefinitionWithParams(definition)
An item which has parameters exposed via the "params" property. A
request has an operation and parameters, while a waiter has a name,
a low-level waiter name and parameters.
Parameters:
**definition** (*dict*) -- The JSON definition
property params
Get a list of auto-filled parameters for this request.
Type:
list("Parameter")
class boto3.resources.model.Identifier(name, member_name=None)
A resource identifier, given by its name.
Parameters:
**name** (*string*) -- The name of the identifier
name
("string") The name of the identifier
class boto3.resources.model.Parameter(target, source, name=None, path=None, value=None, **kwargs)
An auto-filled parameter which has a source and target. For
example, the "QueueUrl" may be auto-filled from a resource's "url"
identifier when making calls to "queue.receive_messages".
Parameters:
* **target** (*string*) -- The destination parameter name, e.g.
"QueueUrl"
* **source_type** (*string*) -- Where the source is defined.
* **source** (*string*) -- The source name, e.g. "Url"
name
("string") The name of the source, if given
path
("string") The JMESPath query of the source
source
("string") Where the source is defined
target
("string") The destination parameter name
value
("string|int|float|bool") The source constant value
class boto3.resources.model.Request(definition)
A service operation action request.
Parameters:
**definition** (*dict*) -- The JSON definition
operation
("string") The name of the low-level service operation
property params
Get a list of auto-filled parameters for this request.
Type:
list("Parameter")
class boto3.resources.model.ResourceModel(name, definition, resource_defs)
A model representing a resource, defined via a JSON description
format. A resource has identifiers, attributes, actions, sub-
resources, references and collections. For more information on
resources, see Resources.
Parameters:
* **name** (*string*) -- The name of this resource, e.g. "sqs"
or "Queue"
* **definition** (*dict*) -- The JSON definition
* **resource_defs** (*dict*) -- All resources defined in the
service
property actions
Get a list of actions for this resource.
Type:
list("Action")
property batch_actions
Get a list of batch actions for this resource.
Type:
list("Action")
property collections
Get a list of collections for this resource.
Type:
list("Collection")
get_attributes(shape)
Get a dictionary of attribute names to original name and shape
models that represent the attributes of this resource. Looks
like the following:
{
    'some_name': ('SomeName', <Shape SomeName>)
}
Parameters:
**shape** (*botocore.model.Shape*) -- The underlying shape
for this resource.
Return type:
dict
Returns:
Mapping of resource attributes.
property identifiers
Get a list of resource identifiers.
Type:
list("Identifier")
property load
Get the load action for this resource, if it is defined.
Type:
"Action" or "None"
load_rename_map(shape=None)
Load a name translation map given a shape. This will set up
renamed values for any collisions, e.g. if the shape, an action,
and a subresource are all named "foo", then the resource will
have an action "foo", a subresource named "Foo", and a property
named "foo_attribute". This is the order of precedence, from
most important to least important:
* Load action (resource.load)
* Identifiers
* Actions
* Subresources
* References
* Collections
* Waiters
* Attributes (shape members)
Batch actions are only exposed on collections, so do not get
modified here. Subresources use upper camel casing, so are
unlikely to collide with anything but other subresources.
Creates a structure like this:
renames = {
('action', 'id'): 'id_action',
('collection', 'id'): 'id_collection',
('attribute', 'id'): 'id_attribute'
}
# Get the final name for an action named 'id'
name = renames.get(('action', 'id'), 'id')
Parameters:
**shape** (*botocore.model.Shape*) -- The underlying shape
for this resource.
name
("string") The name of this resource
property references
Get a list of reference resources.
Type:
list("Action")
shape
("string") The service shape name for this resource or "None"
property subresources
Get a list of sub-resources.
Type:
list("Action")
property waiters
Get a list of waiters for this resource.
Type:
list("Waiter")
class boto3.resources.model.ResponseResource(definition, resource_defs)
A resource response to create after performing an action.
Parameters:
* **definition** (*dict*) -- The JSON definition
* **resource_defs** (*dict*) -- All resources defined in the
service
property identifiers
A list of resource identifiers.
Type:
list("Identifier")
property model
Get the resource model for the response resource.
Type:
"ResourceModel"
path
("string") The JMESPath search query or "None"
type
("string") The name of the response resource type
class boto3.resources.model.Waiter(name, definition)
An event waiter specification.
Parameters:
* **name** (*string*) -- Name of the waiter
* **definition** (*dict*) -- The JSON definition
PREFIX = 'WaitUntil'
name
("string") The name of this waiter
property params
Get a list of auto-filled parameters for this request.
Type:
list("Parameter")
waiter_name
("string") The name of the underlying event waiter
Request parameters
==================
boto3.resources.params.build_param_structure(params, target, value, index=None)
This method provides a basic reverse JMESPath implementation that
lets you go from a JMESPath-like string to a possibly deeply nested
object. The "params" are mutated in-place, so subsequent calls can
modify the same element by its index.
>>> params = {}
>>> build_param_structure(params, 'test[0]', 1)
>>> print(params)
{'test': [1]}
>>> build_param_structure(params, 'foo.bar[0].baz', 'hello world')
>>> print(params)
{'test': [1], 'foo': {'bar': [{'baz': 'hello world'}]}}
boto3.resources.params.create_request_parameters(parent, request_model, params=None, index=None)
Handle request parameters that can be filled in from identifiers,
resource data members or constants.
By passing "params", you can invoke this method multiple times and
build up a parameter dict over time, which is particularly useful
for reverse JMESPath expressions that append to lists.
Parameters:
* **parent** (*ServiceResource*) -- The resource instance to
which this action is attached.
* **request_model** ("Request") -- The action request model.
* **params** (*dict*) -- If set, then add to this existing dict.
It is both edited in-place and returned.
* **index** (*int*) -- The position of an item within a list
Return type:
dict
Returns:
Pre-filled parameters to be sent to the request operation.
boto3.resources.params.get_data_member(parent, path)
Get a data member from a parent using a JMESPath search query,
loading the parent if required. If the parent cannot be loaded and
no data is present then an exception is raised.
Parameters:
* **parent** (*ServiceResource*) -- The resource instance that
contains the data we are interested in.
* **path** (*string*) -- The JMESPath expression to query
Raises:
**ResourceLoadException** -- When no data is present and the
resource cannot be loaded.
Returns:
The queried data or "None".
Response handlers
=================
class boto3.resources.response.RawHandler(search_path)
A raw action response handler. This passes the response
dictionary through unchanged, optionally after performing a
JMESPath search if one has been defined for the action.
Parameters:
**search_path** (*string*) -- JMESPath expression to search in
the response
Return type:
dict
Returns:
Service response
class boto3.resources.response.ResourceHandler(search_path, factory, resource_model, service_context, operation_name=None)
Creates a new resource or list of new resources from the low-level
response based on the given response resource definition.
Parameters:
* **search_path** (*string*) -- JMESPath expression to search in
the response
* **factory** (*ResourceFactory*) -- The factory that created
the resource class to which this action is attached.
* **resource_model** ("ResponseResource") -- Response resource
model.
* **service_context** ("ServiceContext") -- Context about the
AWS service
* **operation_name** (*string*) -- Name of the underlying
service operation, if it exists.
Return type:
ServiceResource or list
Returns:
New resource instance(s).
handle_response_item(resource_cls, parent, identifiers, resource_data)
Handles the creation of a single response item by setting
parameters and creating the appropriate resource instance.
Parameters:
* **resource_cls** (*ServiceResource subclass*) -- The
resource class to instantiate.
* **parent** (*ServiceResource*) -- The resource instance to
which this action is attached.
* **identifiers** (*dict*) -- Map of identifier names to
value or values.
* **resource_data** (*dict** or **None*) -- Data for resource
attributes.
Return type:
ServiceResource
Returns:
New resource instance.
boto3.resources.response.all_not_none(iterable)
Return True if all elements of the iterable are not None (or if the
iterable is empty). This is like the built-in "all", except it
checks against None, so 0 and False are allowable values.
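A short doctest-style illustration of that distinction:
>>> from boto3.resources.response import all_not_none
>>> all_not_none([0, False, ''])
True
>>> all_not_none([1, None])
False
>>> all_not_none([])
True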
boto3.resources.response.build_empty_response(search_path, operation_name, service_model)
Creates an appropriate empty response for the type that is
expected, based on the service model's shape type. For example, a
value that is normally a list would then return an empty list. A
structure would return an empty dict, and a number would return
None.
Parameters:
* **search_path** (*string*) -- JMESPath expression to search in
the response
* **operation_name** (*string*) -- Name of the underlying
service operation.
* **service_model** (*botocore.model.ServiceModel*) -- The
Botocore service model
Return type:
dict, list, or None
Returns:
An appropriate empty value
boto3.resources.response.build_identifiers(identifiers, parent, params=None, raw_response=None)
Builds a mapping of identifier names to values based on the
identifier source location, type, and target. Identifier values may
be scalars or lists depending on the source type and location.
Parameters:
* **identifiers** (*list*) -- List of "Parameter" definitions
* **parent** (*ServiceResource*) -- The resource instance to
which this action is attached.
* **params** (*dict*) -- Request parameters sent to the service.
* **raw_response** (*dict*) -- Low-level operation response.
Return type:
list
Returns:
An ordered list of "(name, value)" identifier tuples.
Resource actions
================
class boto3.resources.action.BatchAction(action_model, factory=None, service_context=None)
An action which operates on a batch of items in a collection,
typically a single page of results from the collection's underlying
service operation call. For example, this allows you to delete up
to 999 S3 objects in a single operation rather than calling
".delete()" on each one individually.
Parameters:
* **action_model** ("Action") -- The action model.
* **factory** (*ResourceFactory*) -- The factory that created
the resource class to which this action is attached.
* **service_context** ("ServiceContext") -- Context about the
AWS service
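Batch actions are surfaced through collections. For example, deleting
the objects under a prefix issues one DeleteObjects request per page
of listed results. A minimal sketch (the bucket name and prefix are
placeholders):
import boto3
s3 = boto3.resource('s3')
# delete() is a batch action: each listed page is deleted in one call
s3.Bucket('amzn-s3-demo-bucket').objects.filter(Prefix='tmp/').delete()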
class boto3.resources.action.CustomModeledAction(action_name, action_model, function, event_emitter)
A custom, modeled action to inject into a resource.
Parameters:
* **action_name** (*str*) -- The name of the action to inject,
e.g. 'delete_tags'
* **action_model** (*dict*) -- A JSON definition of the action,
as if it were part of the resource model.
* **function** (*function*) -- The function to perform when the
action is called. The first argument should be 'self', which
will be the resource the function is to be called on.
* **event_emitter** ("botocore.hooks.BaseEventHooks") -- The
session event emitter.
inject(class_attributes, service_context, event_name, **kwargs)
class boto3.resources.action.ServiceAction(action_model, factory=None, service_context=None)
A class representing a callable action on a resource, for example
"sqs.get_queue_by_name(...)" or "s3.Bucket('foo').delete()". The
action may construct parameters from existing resource identifiers
and may return either a raw response or a new resource instance.
Parameters:
* **action_model** ("Action") -- The action model.
* **factory** (*ResourceFactory*) -- The factory that created
the resource class to which this action is attached.
* **service_context** ("ServiceContext") -- Context about the
AWS service
class boto3.resources.action.WaiterAction(waiter_model, waiter_resource_name)
A class representing a callable waiter action on a resource, for
example "s3.Bucket('foo').wait_until_bucket_exists()". The waiter
action may construct parameters from existing resource identifiers.
Parameters:
* **waiter_model** ("Waiter") -- The waiter action model.
* **waiter_resource_name** (*string*) -- The name of the waiter
action for the resource. It usually begins with
"wait_until_".
Resource base
=============
class boto3.resources.base.ResourceMeta(service_name, identifiers=None, client=None, data=None, resource_model=None)
An object containing metadata about a resource.
client
("BaseClient") Low-level Botocore client
copy()
Create a copy of this metadata object.
data
("dict") Loaded resource data attributes
identifiers
("list") List of identifier names
service_name
("string") The service name, e.g. 's3'
class boto3.resources.base.ServiceResource(*args, **kwargs)
A base class for resources.
Parameters:
**client** (*botocore.client*) -- A low-level Botocore client
instance
meta = None
Stores metadata about this resource instance, such as the
"service_name", the low-level "client" and any cached "data"
from when the instance was hydrated. For example:
# Get a low-level client from a resource instance
client = resource.meta.client
response = client.operation(Param='foo')
# Print the resource instance's service short name
print(resource.meta.service_name)
See "ResourceMeta" for more information.
Resource factory
================
class boto3.resources.factory.ResourceFactory(emitter)
A factory to create new "ServiceResource" classes from a
"ResourceModel". There are two types of lookups that can be done:
one on the service itself (e.g. an SQS resource) and another on
models contained within the service (e.g. an SQS Queue resource).
load_from_definition(resource_name, single_resource_json_definition, service_context)
Loads a resource from a model, creating a new "ServiceResource"
subclass with the correct properties and methods, named based on
the service and resource name, e.g. EC2.Instance.
Parameters:
* **resource_name** (*string*) -- Name of the resource to
look up. For services, this should match the
"service_name".
* **single_resource_json_definition** (*dict*) -- The loaded
json of a single service resource or resource definition.
* **service_context** ("ServiceContext") -- Context about the
AWS service
Return type:
Subclass of "ServiceResource"
Returns:
The service or resource class.
S3 customization reference
**************************
S3 transfers
============
Note:
All classes documented below are considered public and thus will not
be exposed to breaking changes. If a class from the
"boto3.s3.transfer" module is not documented below, it is considered
internal, and users should be very cautious about using it directly,
because breaking changes may be introduced from version to version
of the library. It is recommended to use the variants of the
transfer functions injected into the S3 client instead.
See also:
"S3.Client.upload_file()" "S3.Client.upload_fileobj()"
"S3.Client.download_file()" "S3.Client.download_fileobj()"
class boto3.s3.transfer.TransferConfig(multipart_threshold=8388608, max_concurrency=10, multipart_chunksize=8388608, num_download_attempts=5, max_io_queue=100, io_chunksize=262144, use_threads=True, max_bandwidth=None, preferred_transfer_client='auto')
Configuration object for managed S3 transfers
Parameters:
* **multipart_threshold** -- The transfer size threshold for
which multipart uploads, downloads, and copies will
automatically be triggered.
* **max_concurrency** -- The maximum number of threads that will
be making requests to perform a transfer. If "use_threads" is
set to "False", the value provided is ignored as the transfer
will only ever use the current thread.
* **multipart_chunksize** -- The partition size of each part for
a multipart transfer.
* **num_download_attempts** -- The number of download attempts
that will be retried upon errors with downloading an object in
S3. Note that these retries account for errors that occur when
streaming down the data from s3 (i.e. socket errors and read
timeouts that occur after receiving an OK response from s3).
Other retryable exceptions such as throttling errors and 5xx
errors are already retried by botocore (this default is 5).
This does not take into account the number of exceptions
retried by botocore.
* **max_io_queue** -- The maximum number of read parts that can
be queued in memory to be written for a download. The size of
each of these read parts is at most the size of
"io_chunksize".
* **io_chunksize** -- The max size of each chunk in the io
queue. Currently, this is also the size used when "read" is
called on the downloaded stream.
* **use_threads** -- If True, threads will be used when
performing S3 transfers. If False, no threads will be used in
performing transfers; all logic will be run in the current
thread.
* **max_bandwidth** -- The maximum bandwidth that will be
consumed in uploading and downloading file content. The value
is an integer in terms of bytes per second.
* **preferred_transfer_client** --
String specifying preferred transfer client for transfer
operations.
Current supported settings are:
* auto (default) - Use the CRTTransferManager when calls
are made with supported environment and settings.
* classic - Only use the original S3TransferManager for
requests. Disables the possible CRT upgrade on requests.
ALIAS = {'max_concurrency': 'max_request_concurrency', 'max_io_queue': 'max_io_queue_size'}
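A minimal sketch of passing a custom "TransferConfig" to the injected
transfer methods (the file, bucket, and key names are placeholders):
import boto3
from boto3.s3.transfer import TransferConfig
# Only switch to multipart above 64 MB and limit worker threads to 4
config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,
    max_concurrency=4,
)
s3 = boto3.client('s3')
s3.upload_file('backup.tar.gz', 'amzn-s3-demo-bucket',
               'backups/backup.tar.gz', Config=config)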
class boto3.s3.transfer.S3Transfer(client=None, config=None, osutil=None, manager=None)
ALLOWED_DOWNLOAD_ARGS = ['ChecksumMode', 'VersionId', 'SSECustomerAlgorithm', 'SSECustomerKey', 'SSECustomerKeyMD5', 'RequestPayer', 'ExpectedBucketOwner']
ALLOWED_UPLOAD_ARGS = ['ACL', 'CacheControl', 'ChecksumAlgorithm', 'ContentDisposition', 'ContentEncoding', 'ContentLanguage', 'ContentType', 'ExpectedBucketOwner', 'Expires', 'GrantFullControl', 'GrantRead', 'GrantReadACP', 'GrantWriteACP', 'Metadata', 'ObjectLockLegalHoldStatus', 'ObjectLockMode', 'ObjectLockRetainUntilDate', 'RequestPayer', 'ServerSideEncryption', 'StorageClass', 'SSECustomerAlgorithm', 'SSECustomerKey', 'SSECustomerKeyMD5', 'SSEKMSKeyId', 'SSEKMSEncryptionContext', 'Tagging', 'WebsiteRedirectLocation', 'ChecksumType', 'MpuObjectSize', 'ChecksumCRC32', 'ChecksumCRC32C', 'ChecksumCRC64NVME', 'ChecksumSHA1', 'ChecksumSHA256']
download_file(bucket, key, filename, extra_args=None, callback=None)
Download an S3 object to a file.
Variants have also been injected into S3 client, Bucket and
Object. You don't have to use S3Transfer.download_file()
directly.
See also:
"S3.Client.download_file()" "S3.Client.download_fileobj()"
upload_file(filename, bucket, key, callback=None, extra_args=None)
Upload a file to an S3 object.
Variants have also been injected into S3 client, Bucket and
Object. You don't have to use S3Transfer.upload_file() directly.
See also: "S3.Client.upload_file()" "S3.Client.upload_fileobj()"
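A minimal sketch showing the injected client variant alongside the
equivalent direct "S3Transfer" call (bucket, key, and file names are
placeholders):
import boto3
from boto3.s3.transfer import S3Transfer
client = boto3.client('s3')
# Preferred: the variant injected into the S3 client
client.download_file('amzn-s3-demo-bucket', 'reports/2023.csv', '/tmp/2023.csv')
# Equivalent: driving S3Transfer directly
transfer = S3Transfer(client)
transfer.download_file('amzn-s3-demo-bucket', 'reports/2023.csv', '/tmp/2023.csv')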
DynamoDB customization reference
********************************
Valid DynamoDB types
====================
These are the valid item types to use with Boto3 Table Resource
("dynamodb.Table") and DynamoDB:
+------------------------------------------------+-------------------------------+
| Python Type | DynamoDB Type |
|================================================|===============================|
| string | String (S) |
+------------------------------------------------+-------------------------------+
| integer | Number (N) |
+------------------------------------------------+-------------------------------+
| "decimal.Decimal" | Number (N) |
+------------------------------------------------+-------------------------------+
| "boto3.dynamodb.types.Binary" | Binary (B) |
+------------------------------------------------+-------------------------------+
| boolean | Boolean (BOOL) |
+------------------------------------------------+-------------------------------+
| "None" | Null (NULL) |
+------------------------------------------------+-------------------------------+
| string set | String Set (SS) |
+------------------------------------------------+-------------------------------+
| integer set | Number Set (NS) |
+------------------------------------------------+-------------------------------+
| "decimal.Decimal" set | Number Set (NS) |
+------------------------------------------------+-------------------------------+
| "boto3.dynamodb.types.Binary" set | Binary Set (BS) |
+------------------------------------------------+-------------------------------+
| list | List (L) |
+------------------------------------------------+-------------------------------+
| dict | Map (M) |
+------------------------------------------------+-------------------------------+
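A minimal sketch of writing several of these types with the Table
resource (the table name, key schema, and attribute names are
placeholders):
from decimal import Decimal
import boto3
from boto3.dynamodb.types import Binary
table = boto3.resource('dynamodb').Table('my-table')
# Each Python value maps to the DynamoDB type shown in the table above
table.put_item(Item={
    'id': 'user-1',                   # String (S)
    'score': Decimal('99.5'),         # Number (N)
    'payload': Binary(b'\x00\x01'),   # Binary (B)
    'active': True,                   # Boolean (BOOL)
    'nickname': None,                 # Null (NULL)
    'tags': {'red', 'blue'},          # String Set (SS)
    'history': [1, 2, 3],             # List (L)
    'profile': {'city': 'Seattle'},   # Map (M)
})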
Custom Boto3 types
==================
class boto3.dynamodb.types.Binary(value)
A class for representing Binary data in DynamoDB.
Especially for Python 2, use this class to explicitly specify
binary data for an item in DynamoDB. It is essentially a wrapper
around binary data. Unicode and Python 3 string types are not allowed.
DynamoDB conditions
===================
class boto3.dynamodb.conditions.Key(name)
begins_with(value)
Creates a condition where the attribute begins with the value.
Parameters:
**value** -- The value that the attribute begins with.
between(low_value, high_value)
Creates a condition where the attribute is greater than or equal
to the low value and less than or equal to the high value.
Parameters:
* **low_value** -- The value that the attribute is greater
than or equal to.
* **high_value** -- The value that the attribute is less than
or equal to.
eq(value)
Creates a condition where the attribute is equal to the value.
Parameters:
**value** -- The value that the attribute is equal to.
gt(value)
Creates a condition where the attribute is greater than the
value.
Parameters:
**value** -- The value that the attribute is greater than.
gte(value)
Creates a condition where the attribute is greater than or equal
to the value.
Parameters:
**value** -- The value that the attribute is greater than or
equal to.
lt(value)
Creates a condition where the attribute is less than the value.
Parameters:
**value** -- The value that the attribute is less than.
lte(value)
Creates a condition where the attribute is less than or equal to
the value.
Parameters:
**value** -- The value that the attribute is less than or
equal to.
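Key conditions are typically combined with "&" and passed as the
"KeyConditionExpression" of a query. A minimal sketch (the table and
attribute names are placeholders):
import boto3
from boto3.dynamodb.conditions import Key
table = boto3.resource('dynamodb').Table('my-table')
# Query one partition key value within a sort-key range
response = table.query(
    KeyConditionExpression=Key('artist').eq('Nirvana') &
                           Key('song').begins_with('Smells')
)
for item in response['Items']:
    print(item['song'])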
class boto3.dynamodb.conditions.Attr(name)
Represents a DynamoDB item's attribute.
attribute_type(value)
Creates a condition for the attribute type.
Parameters:
**value** -- The type of the attribute.
begins_with(value)
Creates a condition where the attribute begins with the value.
Parameters:
**value** -- The value that the attribute begins with.
between(low_value, high_value)
Creates a condition where the attribute is greater than or equal
to the low value and less than or equal to the high value.
Parameters:
* **low_value** -- The value that the attribute is greater
than or equal to.
* **high_value** -- The value that the attribute is less than
or equal to.
contains(value)
Creates a condition where the attribute contains the value.
Parameters:
**value** -- The value the attribute contains.
eq(value)
Creates a condition where the attribute is equal to the value.
Parameters:
**value** -- The value that the attribute is equal to.
exists()
Creates a condition where the attribute exists.
gt(value)
Creates a condition where the attribute is greater than the
value.
Parameters:
**value** -- The value that the attribute is greater than.
gte(value)
Creates a condition where the attribute is greater than or equal
to the value.
Parameters:
**value** -- The value that the attribute is greater than or
equal to.
is_in(value)
Creates a condition where the attribute is in the value.
Parameters:
**value** (*list*) -- The value that the attribute is in.
lt(value)
Creates a condition where the attribute is less than the value.
Parameters:
**value** -- The value that the attribute is less than.
lte(value)
Creates a condition where the attribute is less than or equal to
the value.
Parameters:
**value** -- The value that the attribute is less than or
equal to.
ne(value)
Creates a condition where the attribute is not equal to the
value.
Parameters:
**value** -- The value that the attribute is not equal to.
not_exists()
Creates a condition where the attribute does not exist.
size()
Creates a condition for the attribute size.
Note that another AttributeBase method must be called on the
returned size condition for it to be a valid DynamoDB condition.
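Attribute conditions are typically supplied as a "FilterExpression".
A minimal sketch that also shows the follow-up comparison required
after "size()" (the table and attribute names are placeholders):
import boto3
from boto3.dynamodb.conditions import Attr
table = boto3.resource('dynamodb').Table('my-table')
# Filter scanned items on an attribute value and on a list's size
response = table.scan(
    FilterExpression=Attr('status').eq('active') & Attr('tags').size().gt(2)
)
print(len(response['Items']))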